Adaptive robust optimization


Author: Woo Soo Choe (ChE 345 Spring 2015)
Steward: Dajun Yue, Fengqi You



Introduction

Traditionally, robust optimization has solved problems based on static decisions that are predetermined by the decision makers. Once the decisions were made, the problem was solved; whenever a new uncertainty was realized, that uncertainty was incorporated into the original problem and the entire problem was solved again to account for it.[1] Generally, a robust optimization problem is formulated as follows.

\min_{x} \; f_0(x)

\text{subject to} \quad f_i(x, u_i) \le 0 \quad \forall u_i \in \mathcal{U}_i, \; i = 1, \ldots, m

Unfortunately, most robust optimization problems in real-life applications require multiple stages of decisions to account for uncertainty, and traditional static robust optimization has shown limitations. In order to improve on the pre-existing technique, adaptive robust optimization was studied and advances in the field were made to address the problems that could not be easily handled with previous methods.[2]

Adaptive robust optimization improves on static robust optimization by incorporating multiple stages of decisions into the algorithm. Currently, in order to limit the complexity of the algorithm, most studies on adaptive robust optimization have focused on two-stage problems. Generally, adaptive robust optimization has the following basic setup.

Wsaroeq1.png

In the preceding expression, the p's are the parameters of the objective function and constraints. A noteworthy feature of these parameters is that they carry ambiguity (uncertainty)[2]. The vector y is the first-stage decision, made prior to the realization of the ambiguity. First-stage decisions are based on a finite set of possible scenarios with unknown probability distribution. Furthermore, first-stage decisions are chosen in such a way that no assumption is required about the uncertain parameters; this reduces the need for the decision makers to specify any ambiguous parameters. Such use of first-stage decision variables is called a robust decision setting. The vector x represents the second-stage decision variable, chosen after the ambiguity is realized. The overall algorithm sequentially solves and updates a relaxation problem until both the feasibility and optimality conditions of the overall problem are met; the method also makes use of enumerative search in order to reach the answer. Adaptive robust optimization may be used to make a min-max regret decision. In other words, when a firm needs to make a decision involving risk and uncertainty, it may utilize adaptive robust optimization to take the path that leads to the minimum loss when the worst-case scenario takes place.
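For concreteness, the following is a minimal sketch of how a two-stage, finite scenario-based min-max regret formulation of this type is commonly written; the symbols c, d_p, X(y,p), and Z^*_p are generic notation assumed for illustration rather than taken from the expression above.

\min_{y \in Y} \; \max_{p \in P} \; \left[ \left( c^\top y + \min_{x \in X(y,p)} d_p^\top x \right) - Z^*_p \right]

Here P is the finite set of scenarios, X(y,p) is the set of feasible second-stage decisions once y has been fixed and scenario p has been realized, and Z^*_p is the optimal objective value that could have been achieved had scenario p been known in advance, so the bracketed term is the regret of the first-stage decision y under scenario p.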

Background

In the early 1970s, Soyster proposed a robust optimization method which utilizes a linear optimization model to come up with a solution that works for any value from an interval[3]. This method was able to account for the uncertainty and yield an answer that met the specifications of the problem, but it often produced overly conservative results. Ben-Tal, Nemirovski, and others addressed this issue by representing the uncertainty sets as ellipsoids and devising algorithms that solve convex optimization problems when the data are uncertain[4]. Even with these improvements, the techniques have drawbacks for real-life applications: because the approaches involve conic quadratic problems, they cannot be easily applied to discrete cases. Given the vast variety of algorithms in the field of adaptive robust optimization, only the scenario relaxation algorithm for finite scenario-based min-max regret will be studied in this wiki page.


Methodology

In order to carry out adaptive robust optimization using the scenario-based relaxation algorithm, the number of possible scenarios is reduced based on two criteria. Under the first criterion, the scenarios are reduced based on whether the solution is feasible for all possible scenarios. Under the second criterion, the scenarios are reduced so that they still yield the optimal robust solution. The algorithm works by first using a subset of the possible scenarios to solve the following relaxed problem[2].

Wsaroeq2.png
Wsaroeq3.png

Then, the remaining scenarios are used to calculate, at each iteration, the regrets for all scenarios which were previously left out of the relaxed problem. If the optimal objective value obtained from the relaxed problem is greater than or equal to the largest calculated regret, the optimality condition is met and the algorithm terminates. If not, the previously excluded scenarios that violate the condition are added to the relaxed problem and the iteration repeats until the optimal solution is found.
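The following is a minimal, illustrative Python sketch of this scenario relaxation loop, assuming a toy problem in which the first-stage decision is picked from a small finite candidate set. The data and the helper names (candidates, cost, regret, solve_relaxed) are invented for illustration, and the relaxed master problem is solved here by brute-force enumeration, whereas the reference formulates and solves it as a mathematical program[2]. For clarity the toy problem has no explicit second stage; the loop structure is the same when the relaxed problem is a two-stage program.

# Toy data (hypothetical): candidate decisions and scenario-dependent costs.
candidates = ["A", "B", "C"]
scenarios = ["s1", "s2", "s3", "s4"]
cost = {  # cost[(decision, scenario)]
    ("A", "s1"): 10, ("A", "s2"): 14, ("A", "s3"): 9,  ("A", "s4"): 16,
    ("B", "s1"): 12, ("B", "s2"): 11, ("B", "s3"): 13, ("B", "s4"): 12,
    ("C", "s1"): 15, ("C", "s2"): 10, ("C", "s3"): 12, ("C", "s4"): 11,
}

# Scenario-wise optimum: best achievable cost if the scenario were known in advance.
best = {s: min(cost[(y, s)] for y in candidates) for s in scenarios}

def regret(y, s):
    return cost[(y, s)] - best[s]

def solve_relaxed(subset):
    # Relaxed master problem: min-max regret over the scenario subset only.
    return min(candidates, key=lambda y: max(regret(y, s) for s in subset))

# Scenario relaxation loop: start from one scenario and add violating ones.
subset = {scenarios[0]}
while True:
    y_star = solve_relaxed(subset)
    lower = max(regret(y_star, s) for s in subset)   # relaxed objective value
    worst = max(scenarios, key=lambda s: regret(y_star, s))
    upper = regret(y_star, worst)                    # true maximum regret of y_star
    if upper <= lower:   # optimality condition met, terminate
        break
    subset.add(worst)    # include the previously excluded worst-case scenario

print("min-max regret decision:", y_star, "with maximum regret", upper)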


Example

As an example problem which implements the previously covered methodology, a warehouse and resource application problem is used[2].

Wsexamplediagram.png

Also, the data of the problem are given as follows[2].

Wsexampledata.png

Furthermore, the variables of the problem are defined as follows[2].

Wsexamplevariable.png

In this example, the problem can be approached in multiple different ways. In general, the locations of the factories and the warehouses and their maximum capacities are the first-stage decision variables, because the first-stage decisions must be made before the uncertainties are taken into account.
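To make the first-stage/second-stage split concrete, the following is a minimal sketch using generic notation assumed for illustration rather than taken from the source figures: let y denote the binary open/close decisions and k the installed capacities (first stage), and let x_p denote the shipment quantities chosen after scenario p is observed (second stage). In the min-max regret form used above,

\min_{y,\,k} \; \max_{p \in P} \; \left[ \, \mathrm{FixedCost}(y,k) + \min_{x_p \in X(y,k,p)} \mathrm{TransportCost}(x_p) - Z^*_p \, \right]

where X(y,k,p) contains the shipment plans that respect the opened locations y, the capacities k, and the data of scenario p, and Z^*_p is again the best cost achievable if scenario p were known in advance.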

Results

Solving the example problem using the algorithm posed in the Methodology section yields the following results[2].

Wsexampleresult1.png
Wsexampleresult2.png


Conclusion

The studied algorithm may not always guarantee a reduction in computing power or computing time, but by reducing the number of possible scenarios to consider, it provides a way for adaptive robust optimization to cope with highly complicated problems. This approach is also relevant to the techniques learned in optimization classes. By controlling the scope of the different scenarios to try, further advances in adaptive robust optimization may be possible by improving on its already computationally taxing nature.



References

1. Gabrel V., Recent Advances in Robust Optimization: An Overview, Université Paris Dauphine, 2013.

2. Assavapokee T., Scenario Relaxation Algorithm for Finite Scenario-Based Min-Max Regret and Min-Max Relative Regret Robust Optimization.

3. Kouvelis P., Yu G., Robust Discrete Optimization and Its Applications, Dordrecht, 1997.

4. Ben-Tal A., Nemirovski A., Robust Convex Optimization, Mathematical Methods of Operations Research, 1998.