Adaptive robust optimization
Author: Woo Soo Choe (ChE 345 Spring 2015)
Steward: Dajun Yue, Fengqi You
Methodology
In order to investigate an Adaptive Robust Optimization problem, numerous techniques may be used; however, given the scope of this page, only three of them will be introduced: Benders Decomposition, Trilevel Optimization, and the Column-and-Constraint Generation Algorithm. When the Benders Decomposition approach is used, the algorithm essentially breaks the original problem down into an outer and an inner problem. Once the problem is divided into these two parts, the outer problem is solved using Benders Decomposition and the inner problem is solved using Outer Approximation. The detailed steps are as follows.
Benders Decomposition
The Outer Problem: Benders Decomposition
Step 1: Initialize by setting the lower bound <math>LB = -\infty</math> and the upper bound <math>UB = +\infty</math>, and set the iteration count <math>k = 1</math>. Then choose the termination tolerance <math>\epsilon > 0</math>.
Step 2: Solve the master problem. In this case, denote the optimal solution as <math>x^*</math>.
Step 3: Update the lower bound <math>LB</math> with the optimal objective value of the master problem.
Step 4: Increase the iteration count <math>k</math> by 1.
Step 5: Solve the inner problem, and denote the optimal solution (the worst-case uncertainty realization) as <math>d^*</math>. Update <math>UB</math>, where <math>UB</math> stands for the upper bound.
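Assuming the master problem carries a proxy variable <math>\eta</math> for the second-stage cost and <math>Q(x, d)</math> denotes the value of the inner problem for a fixed first-stage decision and uncertainty realization (notation introduced here for illustration), the bound updates in a standard Benders scheme take the form:

<math>
LB = c^T x^* + \eta^*, \qquad UB = \min\left\{ UB, \; c^T x^* + Q(x^*, d^*) \right\}
</math>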
The detailed procedure of Step 5 is as follows.
if <math>UB - LB > \epsilon</math> then
Go to Step 2
else calculate <math>y^*</math>, the dispatch variable, given <math>x^*</math> and <math>d^*</math>
end
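The loop below is a minimal, self-contained sketch of this outer procedure on a toy problem, assuming a finite uncertainty set that can be searched by enumeration. All of the data (<code>c</code>, <code>b</code>, <code>A</code>, <code>B</code>, <code>F</code>, <code>f</code>, the scenarios, and the tolerance) are illustrative assumptions, and the duals needed for the Benders cuts are taken from SciPy's HiGHS-based <code>linprog</code> (the default solver in recent SciPy versions) rather than from any specific reference implementation.

<syntaxhighlight lang="python">
# Sketch of the outer Benders loop on a toy two-stage robust problem:
#   min_x  c@x + max_{d in scenarios} Q(x, d)   s.t.  F@x <= f, x >= 0,
# where Q(x, d) = min_y { b@y : B@y >= d - A@x, y >= 0 }.
# All data below are illustrative assumptions, not taken from the references.
import numpy as np
from scipy.optimize import linprog

c, b = np.array([1.0]), np.array([2.0])
A, B = np.array([[1.0]]), np.array([[1.0]])
F, f = np.array([[1.0]]), np.array([10.0])
scenarios = [np.array([3.0]), np.array([5.0])]  # finite uncertainty set for d
eps, cuts = 1e-6, []                            # tolerance and Benders cuts
LB, UB = -np.inf, np.inf

for k in range(20):
    # Step 2: master problem in (x, eta): min c@x + eta, s.t. F@x <= f and
    # one optimality cut  eta >= pi@(d - A@x)  per stored pair (pi, d).
    A_ub = [np.append(row, 0.0) for row in F]
    b_ub = list(f)
    for pi, d in cuts:
        A_ub.append(np.append(-(pi @ A), -1.0))  # -pi@A@x - eta <= -pi@d
        b_ub.append(-(pi @ d))
    res = linprog(np.append(c, 1.0), A_ub=np.array(A_ub), b_ub=np.array(b_ub))
    # linprog's default bounds (0, None) make eta >= 0, valid here since b, y >= 0.
    x = res.x[:len(c)]
    LB = res.fun                                 # Step 3: update the lower bound

    # Step 5: inner problem -- find the worst-case scenario by enumeration.
    worst_val, worst_d, pi = -np.inf, None, None
    for d in scenarios:
        sub = linprog(b, A_ub=-B, b_ub=-(d - A @ x))  # evaluates Q(x, d)
        if sub.fun > worst_val:
            worst_val, worst_d = sub.fun, d
            pi = -sub.ineqlin.marginals          # dual of B@y >= d - A@x
    cuts.append((pi, worst_d))
    UB = min(UB, c @ x + worst_val)
    if UB - LB <= eps:                           # termination check
        break

print(f"x* = {x}, worst-case objective = {UB:.4f}")  # expect x* = [5.], 5.0
</syntaxhighlight>

On this toy instance the loop converges in two iterations. With a continuous polyhedral uncertainty set, the enumeration step would be replaced by the Outer Approximation sub-procedure described next.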
The Inner Problem: Outer Approximation
Step 1: Initialize by using the commitment decision <math>x^*</math> from the outer problem. Then find an initial uncertainty realization <math>d^0</math>, set the lower bound <math>LB = -\infty</math> and the upper bound <math>UB = +\infty</math>, set the iteration count <math>j = 1</math>, and choose the termination tolerance <math>\epsilon > 0</math>.
Step 2: Solve the sub-problem. Denote the optimal solution of the sub-problem in this iteration, and use it to update the lower bound <math>LB</math>.
Step 3: Solve the master problem, increase the iteration count <math>j</math> by 1, and denote the optimal solution. If the termination condition <math>UB - LB \le \epsilon</math> is met, the commitment decision <math>x^*</math> from the outer problem is plugged back into the inner problem, and the two problems are solved alternately until the outer termination condition is met.

This method has an advantage over traditional Robust Optimization in the sense that it does not sacrifice as much optimality in the solution at the cost of obtaining a conservative answer. Unfortunately, the Benders Decomposition method has three problems. First, the master problem relies on the dual variables of the inner and outer problems, which means that the sub-problems cannot contain integer variables. Second, the solution is not guaranteed to be globally optimal, which means the algorithm may not identify the absolute worst-case scenario before returning a solution. Third, it takes a long time to compute the answer, which can pose a problem when solving large-scale problems.
In order to resolve these issues, another algorithm called Trilevel Optimization was proposed by Bokan Chen of Iowa State University. Before the iterative Trilevel Optimization algorithm is applied, the problem needs to be reformulated in an appropriate form, as shown below.
<math>
\begin{array}{llr}
\min\limits_x \; c^T x + b^T y & (1) &\\
\text{s.t.} &&\\
\quad Fx \le f & (2) &\\
\quad \max\limits_d \; b^T y & (3) &\\
\quad \text{s.t.} &&\\
\qquad Dd \le k & (4) &\\
\qquad \min\limits_y \; b^T y & (5) &\\
\qquad \text{s.t.} &&\\
\qquad\quad Ax + By \le g & (6) &\\
\qquad\quad Hy \le h & (7) &\\
\qquad\quad Jy = d & (8) &\\
\end{array}
</math>
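Written compactly, this reformulation is the nested min-max-min problem

<math>
\min_{x \,:\, Fx \le f} \; c^T x \;+\; \max_{d \,:\, Dd \le k} \; \min_{y \,:\, Ax + By \le g,\; Hy \le h,\; Jy = d} \; b^T y,
</math>

where the outer minimization chooses the first-stage decision, the middle maximization selects the worst-case uncertainty realization, and the innermost minimization computes the best second-stage response.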
Model Formulation
Adaptive Robust Optimization implements different techniques to improve on the original static robust optimization by incorporating multiple stages of decision into the algorithm. Currently, in order to minimize the complexity of the algorithm, most studies on adaptive robust optimization have focused on two-stage problems. Generally, Adaptive Robust Optimization may be formulated in various forms, but for simplicity, the convex case is provided below.
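A representative convex two-stage formulation is the following; the objective <math>f</math> and constraint functions <math>g_i</math> are one common choice of notation, introduced here for concreteness:

<math>
\min_{x \in S} \; \max_{d \in \mathcal{U}} \; \min_{y \in Y} \; f(x, y, d) \quad \text{s.t.} \quad g_i(x, y, d) \le 0, \quad i = 1, \dots, n
</math>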
In the equation, <math>x</math> is the first-stage variable and <math>y</math> is the second-stage variable, where <math>S</math> and <math>Y</math> are the sets of all possible first-stage and second-stage decisions, respectively. <math>d</math> represents a vector of data, and <math>\mathcal{U}</math> represents the uncertainty set.
In order for the provided convex case formulation to work, the case must satisfy five conditions:
1. <math>S</math> is a nonempty convex set
2. <math>f(x, y, d)</math> is convex in <math>x</math>
3. <math>Y</math> is a nonempty convex set
4. <math>f(x, y, d)</math> is convex in <math>y</math>
5. For all <math>i = 1, \dots, n</math>, <math>g_i(x, y, d)</math> is convex in <math>(x, y)</math>
Clearly, not every Adaptive Robust Optimization problem may be solved using exactly one model. However, the key features that need to be present in a model of Adaptive Robust Optimization are variables that represent the multiple stages; an uncertainty set, whether in ellipsoidal form, polyhedral form, or some other novel form; and a general layout that solves for the minimum loss under the worst-case scenario. Another key feature is that the second-stage variables are not known in advance. Another form of the Adaptive Robust Optimization formulation is provided below.
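Consistent with the parameters named below, one common way of writing this second formulation is the following sketch (the set symbols <math>\mathcal{D}</math> and <math>\Omega</math> are introduced here for readability):

<math>
\min_{x} \; c^T x + \max_{d \in \mathcal{D}} \; \min_{y \in \Omega(x, d)} \; b^T y, \qquad \mathcal{D} = \{ d : Dd \le k \}, \qquad \Omega(x, d) = \{ y : Ax + By \le g,\; Hy \le h,\; Jy = d \}
</math>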
As in the first formulation, <math>x</math> and <math>y</math> represent the first-stage variable and the second-stage variable, respectively. In this case, <math>\mathcal{D}</math> is the polyhedral uncertainty set of the demand <math>d</math>, and <math>\Omega(x, d)</math> represents the feasible set for the second-stage variable <math>y</math>. Here, <math>H</math>, <math>A</math>, <math>B</math>, <math>g</math>, <math>J</math>, <math>D</math>, and <math>k</math> are numerical parameters which could represent different quantities under different circumstances.
Introduction
Traditionally, robust optimization has solved problems based on static decisions which are predetermined by the decision makers. Once the decisions were made, the problem was solved, and whenever a new uncertainty was realized, the uncertainty was incorporated into the original problem and the entire problem was solved again to account for it.[1] Generally, a robust optimization problem is formulated as follows.
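In its standard form (following the common convention in the robust optimization literature), the problem reads:

<math>
\min_{x} \; \left\{ f_0(x) \;:\; f_i(x, u_i) \le 0, \;\; \forall u_i \in \mathcal{U}_i, \; i = 1, \dots, m \right\}
</math>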
In the equation, <math>x</math> is a vector of decision variables, <math>f_0, f_1, \dots, f_m</math> are functions, and <math>u_1, \dots, u_m</math> are the uncertainty parameters, which take random values in the uncertainty sets <math>\mathcal{U}_i \subseteq \mathbb{R}^k</math>. When robust optimization is utilized to solve a problem, three implicit assumptions are made.
1. All entries in the decision vector <math>x</math> get specific numerical values prior to the realization of the actual data.
2. When the real data is within the range of the uncertainty set <math>\mathcal{U}_i</math>, the decision maker is responsible for the result obtained through the robust optimization algorithm.
3. The constraints are hard, and violations of the constraints may not be tolerated when the real data is within the uncertainty set <math>\mathcal{U}_i</math>.
The three assumptions grant the robust optimization technique immunity from uncertainties. There are other optimization techniques, such as Stochastic Optimization, which may be used to handle problems with uncertainties. However, Stochastic Optimization has its own drawback: it requires the probability distribution of the uncertain events. Because the decision makers must make guesses about this probability distribution, the Stochastic Optimization method often yields results that are less conservative than those of the Robust Optimization method.
Robust Optimization certainly may have advantages over other optimization methods, but unfortunately, most robust optimization problems in real-life applications require multiple stages to account for uncertainties, and traditional static robust optimization has shown limitations. In order to improve the pre-existing technique, Adaptive Robust Optimization was studied, and advances in the field were made to address the problems which could not be easily handled with previous methods.[2]