= Outer-approximation (OA) =

Latest revision as of 12:26, 12 June 2014

Authors: Xudan Sha (ChE 345 Spring 2014)

Steward: Dajun Yue, Fengqi You

== General ==

Outer approximation is a basic approach for solving Mixed-Integer Nonlinear Programming (MINLP) models suggested by Duran and Grossmann (1986) [1]. Based on principles of decomposition, outer approximation, and relaxation, the algorithm effectively exploits the structure of the original problem: it solves an alternating finite sequence of nonlinear programming (NLP) subproblems and relaxed versions of a mixed-integer linear master program (MILP).

== Algorithm ==

=== Problem Statement ===

A classic MINLP problem can be expressed as follows:

<math>\min \; Z = C^T y + f(x)</math>

<math>\text{s.t.} \quad g(x) + By \leqslant 0</math>

<math>Ay \leqslant a</math>

<math>x \in X, \; y \in \{0,1\}^m</math>

Here, <math>f(x)</math> and <math>g(x)</math> should be convex.

=== Upper Bounding Subproblem ===

First, give initial values for the binary variables. In the given problem, the binary variable is <math>y</math>. Fix all the <math>y</math> variables at <math>y^k</math> and solve the resulting nonlinear problem:

<math>\min \; Z(y^k) = C^T y^k + f(x)</math>

<math>\text{s.t.} \quad g(x) + By^k \leqslant 0</math>

<math>x \in X</math>
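As an illustration, this fixed-binary subproblem can be handed to any off-the-shelf NLP solver. The sketch below assumes SciPy is available and uses a toy convex instance (the data <math>C</math>, <math>B</math>, <math>f</math>, <math>g</math> are made up for illustration, not taken from this article):

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex instance (assumed for illustration, not from the article):
# one binary y, C = [1], B = [[1]], f(x) = x1^2 + x2^2, g(x) = 4 - x1 - x2.
C = np.array([1.0])
B = np.array([[1.0]])

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return np.array([4.0 - x[0] - x[1]])

def upper_bounding_subproblem(y_k):
    """Fix the binaries at y^k and solve min C^T y^k + f(x) s.t. g(x) + B y^k <= 0."""
    # SLSQP expects inequality constraints as fun(x) >= 0, so
    # g(x) + B y^k <= 0 is passed as -(g(x) + B y^k) >= 0.
    cons = {"type": "ineq", "fun": lambda x: -(g(x) + B @ y_k)}
    res = minimize(f, x0=np.array([1.0, 1.0]), method="SLSQP", constraints=cons)
    return res.x, float(C @ y_k + res.fun)

x_k, z_upper = upper_bounding_subproblem(np.array([1.0]))
# For this toy instance the subproblem optimum is x = [2.5, 2.5], Z(y^k) = 13.5.
```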

We can use the following NLP to check whether the former NLP is infeasible:

<math>\min \; u</math>

<math>\text{s.t.} \quad g(x) + By^k \leqslant u</math>

<math>x \in X, \; u \in \mathbb{R}</math>

If <math>u \le 0</math>, the former NLP is feasible; if <math>u > 0</math>, it is infeasible.
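The feasibility check can be sketched numerically the same way, with the extra variable <math>u</math> appended to the decision vector (the data <math>g</math>, <math>B</math>, and the box <math>X</math> below are toy assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (assumed for illustration): g(x) = 4 - x1 - x2, B = [[1]],
# and X = [0, 4] x [0, 4]. Decision vector z = [x1, x2, u].
B = np.array([[1.0]])

def g(x):
    return np.array([4.0 - x[0] - x[1]])

def feasibility_check(y_k):
    """Solve min u s.t. g(x) + B y^k <= u, x in X, u free."""
    cons = {"type": "ineq", "fun": lambda z: z[2] - (g(z[:2]) + B @ y_k)}
    res = minimize(lambda z: z[2], x0=np.zeros(3), method="SLSQP",
                   bounds=[(0, 4), (0, 4), (None, None)], constraints=cons)
    return float(res.fun)

u = feasibility_check(np.array([1.0]))
# u <= 0 means the fixed-binary NLP is feasible for this y^k.
```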

By solving this NLP, a feasible solution <math>x^k</math> is obtained. Since the problem is a minimization, the objective value at this feasible point is greater than or equal to the optimal value, so it can be used as an upper bound. Then go to the master problem.

=== Master Problem ===

The main idea of outer approximation is to develop an equivalent linear representation of the MINLP and apply relaxation. All the functions in the constraints and the objective should be convex and differentiable.

First, reformulate the original MINLP as follows:

<math>\min \; \alpha</math>

<math>\text{s.t.} \quad \alpha \ge C^T y + f(x)</math>

<math>g(x) + By \leqslant 0</math>

<math>Ay \leqslant a</math>

<math>x \in X, \; y \in \{0,1\}^m</math>

<math>\alpha \in \mathbb{R}</math>

Based on the solution of the upper bounding problem, form a relaxed MILP as follows:

<math>\min \; Z = \alpha</math>

<math>\text{s.t.} \quad \alpha \ge C^T y + f(x^k) + \nabla f(x^k)^T (x - x^k)</math>

<math>g(x^k) + \nabla g(x^k)^T (x - x^k) + By \leqslant 0</math>

<math>Ay \leqslant a</math>

<math>x \in X, \; y \in \{0,1\}^m</math>

<math>\alpha \in \mathbb{R}</math>

The master problem is a relaxation of the original MINLP, so its solution can be used as a lower bound. In general, the linearizations from all previous iterates <math>x^k</math> are accumulated in the master problem.

Compare the upper and lower bounds. One of the following cases must occur:

a) If the upper and lower bounds are equal, stop: the optimal solution has been found.

b) If the upper and lower bounds are not equal, take the <math>y</math> from the master solution as the new fixed value <math>y^k</math> and start from the upper bounding subproblem again.
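Steps a) and b) form a loop that alternates between the NLP subproblem and the MILP master problem. The sketch below implements this loop for a small toy MINLP (the instance, variable ordering, and tolerances are assumptions for illustration), using SciPy's SLSQP for the subproblem and scipy.optimize.milp for the master:

```python
import numpy as np
from scipy.optimize import minimize, milp, LinearConstraint, Bounds

# Toy convex MINLP (assumed for illustration):
#   min  y + x1^2 + x2^2
#   s.t. 4 - x1 - x2 + y <= 0,  0 <= x <= 4,  y in {0, 1}

def solve_nlp(y_fix):
    """Upper bounding subproblem: fix y and solve the remaining NLP."""
    cons = {"type": "ineq", "fun": lambda x: x[0] + x[1] - 4.0 - y_fix}
    res = minimize(lambda x: x[0] ** 2 + x[1] ** 2, x0=[1.0, 1.0],
                   method="SLSQP", bounds=[(0, 4), (0, 4)], constraints=cons)
    return res.x, y_fix + res.fun

def solve_master(linearization_points):
    """Relaxed MILP master over z = [x1, x2, y, alpha], accumulating all cuts."""
    A, lb = [[1.0, 1.0, -1.0, 0.0]], [4.0]   # x1 + x2 - y >= 4 (g is linear here)
    for xk in linearization_points:
        grad = 2.0 * np.asarray(xk)          # gradient of x1^2 + x2^2 at xk
        # OA cut: alpha >= y + f(xk) + grad^T (x - xk)
        A.append([-grad[0], -grad[1], -1.0, 1.0])
        lb.append(float(xk @ xk - grad @ xk))
    res = milp(c=[0.0, 0.0, 0.0, 1.0],       # minimize alpha
               constraints=LinearConstraint(np.array(A), lb, np.inf),
               integrality=[0, 0, 1, 0],     # y is binary
               bounds=Bounds([0, 0, 0, -np.inf], [4, 4, 1, np.inf]))
    return res.x, res.fun

y_k, upper, points = 1.0, np.inf, []
for _ in range(20):
    xk, z_up = solve_nlp(y_k)                # upper bounding subproblem
    upper = min(upper, z_up)
    points.append(xk)
    z, lower = solve_master(points)          # master gives the lower bound
    if upper - lower < 1e-4:                 # bounds have met: stop
        break
    y_k = round(z[2])                        # refix the binaries from the master
# Converges to the optimum y = 0, x = [2, 2], objective 8 for this toy instance.
```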

=== Algorithm Flow Chart ===

The flow chart for outer approximation is shown below.

[[File:A.png|450px]]

== Optimality ==

To obtain a global optimum, the original MINLP should be convex, which means that all the constraints and the objective function should be convex. The proposed algorithm can be applied to non-convex problems, but there is no guarantee that the solution obtained by the algorithm is a global one.<span style="font-size: 8pt; position:relative; bottom: 0.3em;">[2]</span>

== A Numerical Example ==

The original mixed-integer nonlinear problem is as follows:

<math>\min \; f = y_1 + y_2 + x_1^2 + x_2^2</math>

<math>\text{s.t.} \quad (x_1-2)^2 - x_2 \le 0</math>

<math>x_1 - 2y_1 \ge 0</math>

<math>x_1 - x_2 - 3(1-y_1) \le 0</math>

<math>x_1 - (1-y_1) \ge 0</math>

<math>x_2 - y_2 \ge 0</math>

<math>x_1 + x_2 \ge 3y_1</math>

<math>y_1 + y_2 \ge 1</math>

<math>0 \le x_1 \le 4, \quad 0 \le x_2 \le 4</math>

<math>y_1, y_2 \in \{0,1\}</math>

Start from <math>y_1 = y_2 = 1</math> and solve the following NLP:

<math>\min \; f = 2 + x_1^2 + x_2^2</math>

<math>\text{s.t.} \quad (x_1-2)^2 - x_2 \le 0</math>

<math>x_1 - 2 \ge 0</math>

<math>x_1 - x_2 \le 0</math>

<math>x_2 - 1 \ge 0</math>

<math>x_1 + x_2 \ge 3</math>

<math>0 \le x_1 \le 4, \quad 0 \le x_2 \le 4</math>

The solution is <math>x = [2, 2]</math>, and the optimal value is <math>f^U = 10</math>.
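This subproblem solution can be verified numerically; below is a sketch with SciPy's SLSQP (the solver choice and starting point are my own assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# The fixed-binary NLP above (y1 = y2 = 1), with x[0] = x_1 and x[1] = x_2.
# SLSQP takes inequalities as fun(x) >= 0, so each constraint is rearranged.
constraints = [
    {"type": "ineq", "fun": lambda x: x[1] - (x[0] - 2.0) ** 2},  # (x1-2)^2 - x2 <= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2.0},                # x1 - 2 >= 0
    {"type": "ineq", "fun": lambda x: x[1] - x[0]},               # x1 - x2 <= 0
    {"type": "ineq", "fun": lambda x: x[1] - 1.0},                # x2 - 1 >= 0
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 3.0},         # x1 + x2 >= 3
]
res = minimize(lambda x: 2.0 + x[0] ** 2 + x[1] ** 2, x0=[3.0, 3.0],
               method="SLSQP", bounds=[(0, 4), (0, 4)], constraints=constraints)
# res.x is approximately [2, 2] and res.fun is approximately 10.
```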

Then solve the master problem, linearizing at <math>x^k = [2, 2]</math>:

<math>\min \; \alpha</math>

<math>\text{s.t.} \quad \alpha \ge y_1 + y_2 + 8 + 4(x_1-2) + 4(x_2-2)</math>

<math>-x_2 \le 0</math>

<math>x_1 - 2y_1 \ge 0</math>

<math>x_1 - x_2 - 3(1-y_1) \le 0</math>

<math>x_1 - (1-y_1) \ge 0</math>

<math>x_2 - y_2 \ge 0</math>

<math>x_1 + x_2 \ge 3y_1</math>

<math>y_1 + y_2 \ge 1</math>

<math>0 \le x_1 \le 4, \quad 0 \le x_2 \le 4</math>

<math>y_1, y_2 \in \{0,1\}</math>

The solution is <math>x = [1, 1]</math>, <math>y = [0, 1]</math>, and the optimal value is <math>f^L = 1</math>.
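Because every function in the master problem is linear, it is an MILP and can be checked with a MILP solver. A sketch with scipy.optimize.milp (the variable ordering z = [x1, x2, y1, y2, alpha] is my own convention):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# The master problem above, with z = [x1, x2, y1, y2, alpha].
A = np.array([
    [-4, -4, -1, -1, 1],   # alpha - y1 - y2 - 4 x1 - 4 x2 >= -8   (OA cut)
    [ 0,  1,  0,  0, 0],   # x2 >= 0                               (-x2 <= 0)
    [ 1,  0, -2,  0, 0],   # x1 - 2 y1 >= 0
    [ 1, -1,  3,  0, 0],   # x1 - x2 + 3 y1 <= 3
    [ 1,  0,  1,  0, 0],   # x1 + y1 >= 1                 (x1 - (1 - y1) >= 0)
    [ 0,  1,  0, -1, 0],   # x2 - y2 >= 0
    [ 1,  1, -3,  0, 0],   # x1 + x2 - 3 y1 >= 0
    [ 0,  0,  1,  1, 0],   # y1 + y2 >= 1
], dtype=float)
lb = [-8, 0, 0, -np.inf, 1, 0, 0, 1]
ub = [np.inf, np.inf, np.inf, 3, np.inf, np.inf, np.inf, np.inf]

res = milp(c=[0, 0, 0, 0, 1],                      # minimize alpha
           constraints=LinearConstraint(A, lb, ub),
           integrality=[0, 0, 1, 1, 0],            # y1, y2 are binary
           bounds=Bounds([0, 0, 0, 0, -np.inf], [4, 4, 1, 1, np.inf]))
# res.fun is 1 at x = [1, 1], y = [0, 1], matching f^L = 1 above.
```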

Choose <math>y = [0, 1]</math> and insert it into the upper bounding problem:

<math>\min \; f = 1 + x_1^2 + x_2^2</math>

<math>\text{s.t.} \quad (x_1-2)^2 - x_2 \le 0</math>

<math>x_1 \ge 0</math>

<math>x_1 - x_2 - 3 \le 0</math>

<math>x_1 - 1 \ge 0</math>

<math>x_2 - 1 \ge 0</math>

<math>x_1 + x_2 \ge 0</math>

<math>0 \le x_1 \le 4, \quad 0 \le x_2 \le 4</math>

The solution is <math>x = [1, 1]</math>, and the optimal value is <math>f^U = 3</math>.

Solve the master problem again, now linearizing at <math>x^k = [1, 1]</math>:

<math>\min \; \alpha</math>

<math>\text{s.t.} \quad \alpha \ge y_1 + y_2 + 2 + 2(x_1-1) + 2(x_2-1)</math>

<math>-2x_1 - x_2 + 3 \le 0</math>

<math>x_1 - 2y_1 \ge 0</math>

<math>x_1 - x_2 - 3(1-y_1) \le 0</math>

<math>x_1 - (1-y_1) \ge 0</math>

<math>x_2 - y_2 \ge 0</math>

<math>x_1 + x_2 \ge 3y_1</math>

<math>y_1 + y_2 \ge 1</math>

<math>0 \le x_1 \le 4, \quad 0 \le x_2 \le 4</math>

<math>y_1, y_2 \in \{0,1\}</math>

The solution is <math>x = [1, 1]</math>, <math>y = [0, 1]</math>, and the optimal value is <math>f^L = 3</math>. (The cuts from the first iteration are not binding at this solution, so omitting them here does not change the result.) The upper bound equals the lower bound, so the optimal solution is found: <math>f^* = 3</math> at <math>x = [1, 1]</math>, <math>y = [0, 1]</math>.
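In a standard OA implementation the master accumulates the linearizations from every iteration. As a check, the sketch below solves the master with the cuts from both <math>x^k = [2, 2]</math> and <math>x^k = [1, 1]</math> using scipy.optimize.milp (the variable ordering z = [x1, x2, y1, y2, alpha] is my own convention), and still obtains <math>f^L = 3</math>:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Master with cuts from both iterations; z = [x1, x2, y1, y2, alpha].
A = np.array([
    [-4, -4, -1, -1, 1],   # alpha >= y1 + y2 + 8 + 4(x1-2) + 4(x2-2)  (cut at [2,2])
    [-2, -2, -1, -1, 1],   # alpha >= y1 + y2 + 2 + 2(x1-1) + 2(x2-1)  (cut at [1,1])
    [ 0,  1,  0,  0, 0],   # x2 >= 0                  (g cut at [2, 2])
    [ 2,  1,  0,  0, 0],   # 2 x1 + x2 >= 3           (g cut at [1, 1])
    [ 1,  0, -2,  0, 0],   # x1 - 2 y1 >= 0
    [ 1, -1,  3,  0, 0],   # x1 - x2 + 3 y1 <= 3
    [ 1,  0,  1,  0, 0],   # x1 + y1 >= 1
    [ 0,  1,  0, -1, 0],   # x2 - y2 >= 0
    [ 1,  1, -3,  0, 0],   # x1 + x2 - 3 y1 >= 0
    [ 0,  0,  1,  1, 0],   # y1 + y2 >= 1
], dtype=float)
lb = [-8, -2, 0, 3, 0, -np.inf, 1, 0, 0, 1]
ub = [np.inf, np.inf, np.inf, np.inf, np.inf, 3, np.inf, np.inf, np.inf, np.inf]

res = milp(c=[0, 0, 0, 0, 1],                      # minimize alpha
           constraints=LinearConstraint(A, lb, ub),
           integrality=[0, 0, 1, 1, 0],            # y1, y2 are binary
           bounds=Bounds([0, 0, 0, 0, -np.inf], [4, 4, 1, 1, np.inf]))
# res.fun is 3 at x = [1, 1], y = [0, 1]: the accumulated master gives f^L = 3.
```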

== Reference ==


[1] Duran M A, Grossmann I E. An outer-approximation algorithm for a class of mixed-integer nonlinear programs[J]. Mathematical programming, 1986, 36(3): 307-339.

[2] Fletcher R, Leyffer S. Solving mixed integer nonlinear programs by outer approximation[J]. Mathematical programming, 1994, 66(1-3): 327-349.

[3] Varvarezos D K, Grossmann I E, Biegler L T. An outer-approximation method for multiperiod design optimization[J]. Industrial & engineering chemistry research, 1992, 31(6): 1466-1477.

[4] Bisschop J, Roelofs M. AIMMS Language Reference[M]. Lulu.com, 2006: 377-387.