Adaptive robust optimization

Revision as of 01:18, 31 May 2015

Author: Woo Soo Choe (ChE 345 Spring 2015)
Steward: Dajun Yue, Fengqi You


Introduction

Traditionally, robust optimization has solved problems based on static decisions that are predetermined by the decision makers. Once the decisions were made, the problem was solved, and whenever a new uncertainty was realized, that uncertainty was incorporated into the original problem and the entire problem was solved again.[1] Generally, a robust optimization problem is formulated as follows.

<math>\min_{x} \; f_0(x) \quad \text{s.t.} \quad f_i(x, u_i) \le 0, \quad \forall u_i \in \mathcal{U}_i, \; i = 1, \dots, m</math>

In the equation, <math>x \in \mathbb{R}^n</math> is the vector of decision variables, <math>f_0, f_i</math> are functions, and <math>u_i \in \mathbb{R}^k</math> are the uncertainty parameters, which take random values in the uncertainty sets <math>\mathcal{U}_i \subseteq \mathbb{R}^k</math>. When robust optimization is used to solve a problem, three implicit assumptions are made.
1. All entries in the decision vector <math>x</math> receive specific numerical values before the actual data are realized.
2. When the real data lie within the uncertainty set <math>\mathcal{U}</math>, the decision maker is responsible for the result obtained through the robust optimization algorithm.
3. The constraints are hard: violation of the constraints is not tolerated when the real data lie within the uncertainty set <math>\mathcal{U}</math>.
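Because every constraint must hold for all realizations in its uncertainty set, a robust problem is typically solved through its robust counterpart, in which each uncertain constraint is replaced by its worst case. The short sketch below illustrates this for a hypothetical two-variable LP with box (interval) uncertainty in a single constraint; all numerical data are assumptions chosen for illustration, not values from the formulation above.

<syntaxhighlight lang="python">
# Robust counterpart of a hypothetical two-variable LP under box uncertainty:
#   min  c^T x
#   s.t. a(u)^T x <= b   for every a(u) in [a_nom - a_dev, a_nom + a_dev]
#        x >= 0
# With x >= 0, the worst case of a(u)^T x over the box is (a_nom + a_dev)^T x,
# so the robust counterpart is an ordinary (deterministic) LP.

import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])      # minimize c^T x (i.e., maximize 3*x1 + 2*x2)
a_nom = np.array([2.0, 1.0])    # nominal constraint coefficients
a_dev = np.array([0.5, 0.2])    # maximum deviation of each coefficient
b = 10.0

# Robust (worst-case) constraint: (a_nom + a_dev)^T x <= b
robust = linprog(c, A_ub=[a_nom + a_dev], b_ub=[b], bounds=[(0, None)] * 2)

# Nominal (non-robust) problem for comparison: a_nom^T x <= b
nominal = linprog(c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)

print("robust solution :", robust.x)
print("nominal solution:", nominal.x)
</syntaxhighlight>

Since <math>x \ge 0</math>, the worst case of the uncertain constraint is attained at the upper end of the coefficient box, so the robust counterpart simply uses the inflated coefficients and yields a more conservative solution than the nominal problem.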
The three assumptions grant the robust optimization technique its immunity to uncertainty. Other optimization techniques, such as Stochastic Optimization, may also be used to handle problems with uncertainty. However, Stochastic Optimization has its own drawback: it requires the probability distribution of the uncertain events. Because the decision makers must guess this probability distribution, the Stochastic Optimization method often yields results that are less conservative than those of the Robust Optimization method.
Robust Optimization certainly may have advantages over other optimization methods, but unfortunately most robust optimization problems in real-life applications require multiple stages to account for uncertainty, and traditional static robust optimization has shown limitations here. In order to improve the pre-existing technique, Adaptive Robust Optimization was studied, and advances in the field were made to address problems that could not easily be handled with previous methods.[2]


Methodology

Adaptive Robust Optimization implements different techniques to improve on the original static robust optimization by incorporating multiple decision stages into the algorithm. Currently, in order to limit the complexity of the algorithm, most studies on adaptive robust optimization have focused on two-stage problems. Generally, Adaptive Robust Optimization has the following basic setup.

<math>\min_{x_1,\, x_2(\cdot)} \; f_0(x_1) \quad \text{s.t.} \quad f_i\left(x_1, x_2(u), u\right) \le 0, \quad \forall u \in \mathcal{U}, \; i = 1, \dots, m</math>

In the equation, <math>x_2(u)</math> is an arbitrary function of <math>u</math>, where <math>u</math> represents the uncertainty: <math>x_2(u)</math> is the second-stage ("wait-and-see") decision that may be chosen after the uncertainty is observed, while <math>x_1</math> is the first-stage ("here-and-now") decision. When the expression is rewritten in terms of the feasible set for the first-stage decision, the following expression is obtained.

<math>\min_{x_1 \in X_1} \; f_0(x_1), \qquad X_1 = \left\{ x_1 : \forall u \in \mathcal{U} \;\; \exists\, x_2 \text{ such that } f_i(x_1, x_2, u) \le 0, \; i = 1, \dots, m \right\}</math>

In the expression, the feasible set <math>X_1</math> is convex, but the problem is in general intractable. In order to resolve this issue, Adaptive Robust Optimization may utilize different techniques to make the problem solvable; three such approaches are described in the remainder of this section.
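Before turning to those approaches, the two-stage structure itself can be illustrated by brute force when the uncertainty set is finite. The sketch below uses a hypothetical single-product example in which <math>x_1</math> units are produced before the demand <math>u</math> is known and any shortfall <math>x_2(u)</math> is covered afterwards at a higher cost; the costs and the demand scenarios are assumed values for illustration only.

<syntaxhighlight lang="python">
# Brute-force illustration of the two-stage (adjustable) structure
#   min_{x1}  max_{u in U}  min_{x2 >= 0, x1 + x2 >= u}  c1*x1 + c2*x2
# for a hypothetical problem: produce x1 units now at unit cost c1, observe
# the demand u, then cover any shortfall x2 at the higher unit cost c2.

c1, c2 = 1.0, 3.0
U = [2.0, 5.0, 8.0]                    # finite uncertainty set (demand scenarios)

def recourse(x1, u):
    """Optimal second-stage decision x2(u): buy exactly the shortfall."""
    return max(u - x1, 0.0)

def worst_case_cost(x1):
    """First-stage cost plus the worst (over u) optimal recourse cost."""
    return max(c1 * x1 + c2 * recourse(x1, u) for u in U)

candidates = [0.1 * i for i in range(101)]     # grid for x1 in [0, 10]
x1_star = min(candidates, key=worst_case_cost)

print("robust first-stage decision x1 =", round(x1_star, 1))
print("worst-case total cost          =", round(worst_case_cost(x1_star), 1))
</syntaxhighlight>

Because the recourse is expensive, the worst-case optimal first-stage decision covers the largest demand in the set, which illustrates the conservatism of the robust solution.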

The first approach is called the Receding Horizon method. In this method, a static robust problem is solved over the remaining planning horizon, only the current ("here-and-now") decision is implemented, and, once the uncertainty of the current stage has been observed, the problem is solved again for the shortened horizon. The horizon thus recedes as time advances, and the later decisions automatically adapt to the information revealed so far.
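A minimal receding-horizon loop might look as follows; the inventory setting, the interval uncertainty, the conservative order-up-to rule, and the cost values are all assumptions introduced for this sketch.

<syntaxhighlight lang="python">
# Receding-horizon sketch for a hypothetical multi-period inventory problem:
# at every period, solve a (here, very simple) worst-case problem for the
# remaining horizon, implement only the current order, observe the realized
# demand, and re-solve at the next period with the updated inventory.

import random

T = 4
demand_lo, demand_hi = 2.0, 6.0          # interval uncertainty for the demand
hold_cost, short_cost = 1.0, 4.0

def here_and_now_order(inventory):
    """Conservative static-robust rule: order up to the worst-case demand."""
    return max(demand_hi - inventory, 0.0)

random.seed(0)
inventory, total_cost = 0.0, 0.0
for t in range(T):
    order = here_and_now_order(inventory)          # keep only the 1st-stage decision
    demand = random.uniform(demand_lo, demand_hi)  # uncertainty is then realized
    inventory += order - demand
    total_cost += hold_cost * max(inventory, 0.0) + short_cost * max(-inventory, 0.0)
    print(f"t={t}: order={order:.2f}  demand={demand:.2f}  inventory={inventory:.2f}")

print("total cost:", round(total_cost, 2))
</syntaxhighlight>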

The second approach is Stochastic Optimization. Even though the Stochastic Optimization method itself is slightly different from Robust Optimization, the principles used in Stochastic Optimization may be used to improve the pre-existing single-stage Robust Optimization.
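For contrast with the worst-case sketch above, a scenario-based (stochastic) version of the same hypothetical toy problem weights the optimal recourse cost by assumed probabilities instead of taking the maximum over the uncertainty set; the probabilities below are illustrative assumptions.

<syntaxhighlight lang="python">
# Scenario-based (stochastic) version of the earlier hypothetical toy problem:
# instead of the worst case, weight the recourse cost by assumed probabilities.

c1, c2 = 1.0, 3.0
scenarios = {2.0: 0.5, 5.0: 0.3, 8.0: 0.2}   # demand -> assumed probability

def expected_cost(x1):
    """First-stage cost plus the expected optimal recourse cost."""
    return sum(p * (c1 * x1 + c2 * max(u - x1, 0.0)) for u, p in scenarios.items())

candidates = [0.1 * i for i in range(101)]   # grid for x1 in [0, 10]
x1_star = min(candidates, key=expected_cost)

print("stochastic first-stage decision x1 =", round(x1_star, 1))
print("expected total cost                =", round(expected_cost(x1_star), 1))
</syntaxhighlight>

The expected-cost solution is smaller than the worst-case solution obtained earlier, in line with the remark in the introduction that stochastic formulations tend to be less conservative than robust ones.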

The third approach is Dynamic Programming. Dynamic programming handles the multistage structure by backward induction: a worst-case cost-to-go (value) function is computed for each stage over the uncertainty set, and each stage's decision is then optimized against the value function of the following stage. The approach is conceptually exact, but it quickly runs into the curse of dimensionality as the state space grows.
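A minimal backward-induction sketch over a finite state space and a finite uncertainty set might look as follows; the states, actions, demand values, and costs are hypothetical choices made for illustration.

<syntaxhighlight lang="python">
# Minimal robust dynamic-programming sketch: backward induction over a short
# horizon with a finite state space and a finite uncertainty set.
# V[s] holds the worst-case cost-to-go from inventory state s.

states = [0, 1, 2]                  # inventory level at the start of a stage
actions = [0, 1, 2]                 # units ordered in that stage
U = [0, 1, 2]                       # possible demands (uncertainty set)
T = 2                               # number of stages
order_cost, short_cost = 1.0, 4.0

def step(s, a, u):
    """Stage cost and next state for state s, action a, and demand u."""
    cost = order_cost * a + short_cost * max(u - s - a, 0)
    nxt = max(min(s + a - u, max(states)), 0)
    return cost, nxt

V = {s: 0.0 for s in states}        # terminal cost-to-go
for t in reversed(range(T)):        # backward induction
    V = {s: min(max(step(s, a, u)[0] + V[step(s, a, u)[1]] for u in U)
                for a in actions)
         for s in states}

print("worst-case cost-to-go at the first stage:", V)
</syntaxhighlight>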

Example

Application

Summary