Adaptive robust optimization


Author: Woo Soo Choe (ChE 345 Spring 2015)
Steward: Dajun Yue, Fengqi You

Model Formulation

Adaptive Robust Optimization improves on the original static robust optimization by incorporating multiple stages of decisions into the algorithm. Currently, in order to limit the complexity of the algorithm, most studies on adaptive robust optimization have focused on two-stage problems. Adaptive Robust Optimization may be formulated in various forms, but for simplicity, the formulation for the convex case is provided below.

\begin{array}{llr}
\min\limits_{x\in \mathit{S}}   & f(x) + \max\limits_{b\in \mathit{B}} Q(x,b) &\\
\text{where} & Q(x,b) = \min\limits_{y \in \mathit{Y}} h(y) &\\
\text{s.t.} & H(x,y,b)\le 0
\end{array}


In this formulation, x is the first-stage decision variable and y is the second-stage decision variable, where \mathit{S} and \mathit{Y} are the sets of all possible first-stage and second-stage decisions, respectively. The vector b represents the uncertain data, and \mathit{B} represents the uncertainty set.

In order for the provided convex case formulation to work, the case must satisfy five conditions:
1.  \mathit{S} is a nonempty convex set
2. f(x) is convex in x
3. \mathit{Y} is a nonempty convex set
4. h(y) is convex in y
5. For all i=1,...,n, H_i (x,y,b) is convex in (x,y), \forall b \in \mathit{B}
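As a small worked instance satisfying all five conditions (an illustrative assumption, not taken from a specific reference), let \mathit{S}=[0,1], \mathit{Y}=\mathbb{R}_+, f(x)=x^2, h(y)=y, \mathit{B}=[0,1], and H(x,y,b)=b-x-y. The second-stage problem can then be solved in closed form:

\begin{array}{l}
Q(x,b) = \min\limits_{y \ge 0} \{\, y : b - x - y \le 0 \,\} = \max\{b - x,\, 0\},\\[4pt]
\max\limits_{b \in [0,1]} Q(x,b) = 1 - x \quad \text{for } x \in [0,1],\\[4pt]
\min\limits_{x \in [0,1]} \; x^2 + 1 - x = \tfrac{3}{4} \quad \text{at } x = \tfrac{1}{2}.
\end{array}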

Clearly, not every Adaptive Robust Optimization problem may be treated as a convex case. While the exact convex-case formulation may not apply to every Adaptive Robust Optimization problem, the overarching theme across the possible variations of this formulation is that an uncertainty set is needed in order to find the solution which yields the minimum objective value under the worst possible case scenario.

Methodology

In order to obtain a solution for the formulation provided in the Model Formulation section, three different approaches may be used. The first approach is Kelley's algorithm, which solves the formulation by maintaining a polyhedral outer approximation of the problem that is refined with cutting planes at each iteration.

Kelley's Algorithm
Initialize: Set f_0(x) to be an initial piecewise-linear lower approximation of f(x), set L=-\infty and U=\infty, and set t=0.
Iterate: After initialization, the following steps are repeated to arrive at the optimal answer, given f_t(x), L and U.
Step 1: Solve \min_x f_t(x) to obtain x_t and set L=f_t(x_t).
Step 2: Compute f(x_t) and set U=\min\{U, f(x_t)\}.
Step 3: Add a cutting plane at x_t by setting f_{t+1}(x)=\max\{f_t(x),\; f(x_t)+g_t^\top(x-x_t)\}, where g_t is a subgradient of f at x_t. If U-L is within the desired tolerance, stop; otherwise set t=t+1 and return to Step 1.
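A minimal sketch of these steps in Python is given below, assuming a one-dimensional convex objective on an interval. The test function f(x)=x^2, its subgradient 2x, and the use of scipy.optimize.linprog for the master problem are illustrative assumptions, not part of the original method description.

# Minimal sketch of Kelley's cutting-plane method for min_x f(x) on [lo, hi].
# The master problem min_x f_t(x) is solved as a small LP over (x, z).
import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, lo, hi, tol=1e-6, max_iter=100):
    """Minimize a one-dimensional convex f on [lo, hi] with Kelley's algorithm."""
    # Initialize: one cut at the midpoint gives the first lower approximation f_0.
    x_t = 0.5 * (lo + hi)
    cuts = [(grad(x_t), f(x_t) - grad(x_t) * x_t)]   # each cut: g*x + c <= z
    L, U = -np.inf, f(x_t)
    best_x = x_t
    for _ in range(max_iter):
        # Step 1: solve min_{x,z} z  s.t.  g_i*x + c_i <= z,  lo <= x <= hi.
        A_ub = np.array([[g, -1.0] for g, _ in cuts])
        b_ub = np.array([-c for _, c in cuts])
        res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(lo, hi), (None, None)])
        x_t, L = res.x[0], res.x[1]
        # Step 2: evaluate the true objective and update the upper bound.
        fx = f(x_t)
        if fx < U:
            U, best_x = fx, x_t
        # Step 3: add the subgradient cut at x_t and check convergence.
        g = grad(x_t)
        cuts.append((g, fx - g * x_t))
        if U - L <= tol:
            break
    return best_x, L, U

# Example (assumed for illustration): min x^2 on [-1, 2]; iterates approach x = 0.
x_star, L, U = kelley(lambda x: x * x, lambda x: 2.0 * x, -1.0, 2.0)
print(x_star, L, U)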



Introduction

Traditionally, robust optimization has solved problems based on static decisions which are predetermined by the decision makers. Once the decisions were made, the problem was solved, and whenever a new uncertainty was realized, the uncertainty was incorporated into the original problem and the entire problem was solved again to account for it.[1] Generally, a robust optimization problem is formulated as follows.

\min\limits_{x} \left\{ f_0(x) \,:\, f_i(x,u_i) \le 0,\; i=1,\ldots,m, \;\; \forall u_i \in \mathcal{U}_i \right\}

In the equation, x \in \mathbb{R}^n is a vector of decision variables, f_0, f_i are functions, and u_i are the uncertainty parameters, which take arbitrary values in the uncertainty sets \mathcal{U}_i \subseteq \mathbb{R}^k. When robust optimization is utilized to solve a problem, three implicit assumptions are made.
1. All entries in the decision vector x get specific numerical values prior to the realization of the actual data.
2. When the real data is within the range of the uncertainty set \mathcal{U}, the decision maker is responsible for the result obtained through the robust optimization algorithm.
3. The constraints are hard, and violation of the constraints may not be tolerated when the real data is within the uncertainty set \mathcal{U}.
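As a one-constraint illustration of the third assumption (an assumed example, not taken from the cited references), consider the constraint a x \le 1 with x \ge 0, where the coefficient a is only known to lie in \mathcal{U} = [1,2]. Robust feasibility requires the constraint to hold for every realization in the uncertainty set:

a x \le 1 \;\; \forall a \in [1,2] \;\Longleftrightarrow\; \max\limits_{a \in [1,2]} a x \le 1 \;\Longleftrightarrow\; 2x \le 1,

so the robust solution must satisfy the worst-case coefficient rather than a single nominal value.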
The three assumptions grant the robust optimization technique immunity from uncertainties. There are other types of optimization techniques, such as Stochastic Optimization, which may be used to handle problems with uncertainties. However, Stochastic Optimization has its own drawback: it requires the probability distribution of the uncertain events. Because the decision makers must make guesses about this probability distribution, the Stochastic Optimization method often yields results that are less conservative than those obtained by the Robust Optimization method.
Robust Optimization certainly may have advantages over other optimization methods, but unfortunately, most robust optimization problems in real-life applications require multiple stages of decisions to account for uncertainties, and traditional static robust optimization has shown limitations in this setting. In order to improve upon the pre-existing technique, Adaptive Robust Optimization was studied, and advances in the field were made to address problems which could not be easily handled with previous methods.[2]


Example

Application

Conclusion