Exponential transformation

Author: Daniel Garcia (ChBE 345)

Stewards: Dajun Yue and Prof. Fengqi You

Date presented: May 25, 2014

This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.

Background

Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967), is a function of the form

 f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}}

where the variables  x_i and the coefficients  c_k are positive real numbers, and all of the exponents  a_{ik} are real numbers. For example,

 f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7

is a posynomial in two variables, and

 f(x_1)=1500x_1^{3/5}

is a posynomial with one variable.
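
As a quick illustration, a general posynomial can be evaluated directly from its coefficients c_k and exponent array a_{ik}. The following is a minimal sketch in Python (assuming NumPy is available; the function name and test points are chosen purely for illustration):

 # Minimal sketch: evaluate f(x) = sum_k c_k * prod_i x_i^{a_ik} at a positive point x.
 import numpy as np

 def posynomial(x, c, a):
     """c has shape (K,), a has shape (K, N); all x_i > 0 and all c_k > 0."""
     x = np.asarray(x, dtype=float)
     c = np.asarray(c, dtype=float)
     a = np.asarray(a, dtype=float)
     return float(np.sum(c * np.prod(x ** a, axis=1)))

 # f(x_1, x_2) = 14 x_1^3.2 x_2^4 + x_1^2 x_2^7  (the first example above)
 print(posynomial([1.5, 0.8], c=[14.0, 1.0], a=[[3.2, 4.0], [2.0, 7.0]]))

 # f(x_1) = 1500 x_1^(3/5)  (the second example above)
 print(posynomial([2.0], c=[1500.0], a=[[0.6]]))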

It is not difficult to imagine a posynomial that is nonconvex; both examples above are nonconvex. Unfortunately, this causes problems when attempting to find a globally optimal solution of a posynomial program, since only for convex problems is a locally optimal solution guaranteed to be globally optimal.

Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, to name a few. The solution of such problems is clearly important to the chemical engineer, and being able to solve them globally equips the engineer with a powerful tool for a myriad of problems.

Development

Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of the day aimed to find only locally optimal solutions and employed techniques such as successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods that use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find globally optimal solutions was strong enough to spur the development of other methods for posynomial programs in the 1990s. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxations of the original problem, and branch-and-bound-type methods.

The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997 and, under certain conditions, provides a way to globally optimize posynomial programming problems.

Formulation

A posynomial program may be written as a generalized geometric programming (GGP) problem as follows:


 \min G_0(t)=G_0^+(t)-G_0^-(t)

subject to:

 G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M

 t_i \ge 0, i=1,...,N

Where:

 G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M

 G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M

where  t=(t_1,...,t_N) is the vector of positive variables;  G_j^+, G_j^-, j=0,...,M are positive posynomial functions in t;  \alpha_{ijk} are arbitrary real constant exponents; and  c_{jk} are strictly positive coefficients. The index sets  K_j^+, K_j^- collect the positively and negatively signed monomials that make up the posynomials  G_j^+, G_j^- , respectively. This formulation is constructed by grouping together monomials with identical signs.
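
To make the index-set notation concrete, the following minimal Python sketch (assuming NumPy; the data layout and names are assumptions made for illustration, not part of the original formulation) stores a function G_j as a list of signed monomials (sign, c_jk, alpha_jk) and evaluates G_j(t) = G_j^+(t) - G_j^-(t):

 # Minimal sketch: a GGP function represented as a list of signed monomials.
 import numpy as np

 def eval_G(t, monomials):
     """monomials is a list of (sign, c, alpha): sign = +1 for k in K^+,
     sign = -1 for k in K^-, c > 0, and alpha = (alpha_1jk, ..., alpha_Njk)."""
     t = np.asarray(t, dtype=float)
     total = 0.0
     for sign, c, alpha in monomials:
         total += sign * c * np.prod(t ** np.asarray(alpha, dtype=float))
     return total

 # Hypothetical constraint G(t) = 3 t_1^2 t_2 - 0.5 t_1 t_2^(-1) <= 0
 G = [(+1, 3.0, [2.0, 1.0]), (-1, 0.5, [1.0, -1.0])]
 print(eval_G([1.2, 0.7], G))  # the constraint G(t) <= 0 holds iff this value is <= 0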

One can simply apply the transformation  t_i=e^{z_i}, i=1,...,N to obtain the transformed optimization problem:


 \min G_0(z)=G_0^+(z)-G_0^-(z)


subject to  G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1,...,M


 z_i^L \le z_i \le z_i^U, i=1,...,N


Where:


 G_j^+(z)=\sum_{k \in K_j^+} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M


 G_j^-(z)=\sum_{k \in K_j^-} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M


Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since  z_i^L= \ln t_i^L , the lower bound  t_i^L must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds are positive. Thus, for each t_i, define the shifted variable:


 t_i' = t_i + \max \left( 0, -t_i^L + \epsilon \right), \quad \epsilon > 0.


With this pre-scaling in place, the transformed problem is well defined. In particular, when the negative parts  G_j^- are absent (i.e., the problem is an ordinary posynomial program, as in the examples below), the reformulated problem is a convex programming problem and a globally optimal solution can be found with a local solver; when negative terms are present, the reformulation yields a difference of convex functions that can be handled with the global optimization techniques mentioned above.
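
The transformation and the pre-scaling step can also be checked numerically. The sketch below (Python with NumPy; the finite-difference Hessian check and all names are illustrative assumptions, not the authors' code) shifts the variables so that their lower bounds become positive, applies t_i = e^{z_i}, and confirms that a transformed positive term c exp(alpha . z) has a positive semidefinite Hessian, i.e., that it is convex in z:

 # Minimal sketch: pre-scale, transform t = exp(z), and check convexity of one term.
 import numpy as np

 def shift(t, t_lower, eps=1e-3):
     """Pre-scaling t' = t + max(0, -t_L + eps): the shifted lower bound is at least eps > 0."""
     return np.asarray(t, float) + np.maximum(0.0, -np.asarray(t_lower, float) + eps)

 def transformed_term(z, c, alpha):
     """One transformed monomial c * exp(alpha . z)."""
     return c * np.exp(np.dot(alpha, z))

 def numerical_hessian(f, z, h=1e-5):
     n = len(z)
     H = np.zeros((n, n))
     for i in range(n):
         for j in range(n):
             zpp = z.copy(); zpp[i] += h; zpp[j] += h
             zpm = z.copy(); zpm[i] += h; zpm[j] -= h
             zmp = z.copy(); zmp[i] -= h; zmp[j] += h
             zmm = z.copy(); zmm[i] -= h; zmm[j] -= h
             H[i, j] = (f(zpp) - f(zpm) - f(zmp) + f(zmm)) / (4.0 * h * h)
     return H

 print(shift([0.0, -1.5], t_lower=[0.0, -2.0]))   # shifted variables are strictly positive
 c, alpha = 14.0, np.array([3.2, 4.0])            # transformed term 14 x_1^{3.2} x_2^4 from the Background example
 z0 = np.array([0.1, -0.2])
 H = numerical_hessian(lambda z: transformed_term(z, c, alpha), z0)
 print(np.linalg.eigvalsh(H))                     # eigenvalues >= 0 up to finite-difference error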

Historical Use

Give some examples here.

Examples

Convexifications of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that a globally optimal solution is indeed obtained.
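
As a freely available alternative to GAMS, a reformulated problem can also be handed to a local NLP solver in Python. The sketch below (assuming SciPy; the box bounds -2 <= z_i <= 2 are an added assumption for illustration, since Example 1 below only imposes t_i >= 0) minimizes the transformed objective of Example 1; because the transformed problem is convex, the local solution returned is also the global one:

 # Minimal sketch: solve min 14*exp(3.2 z1 + 4 z2) + exp(2 z1 + 7 z2)
 # over illustrative bounds -2 <= z1, z2 <= 2 (an assumption, not part of the example).
 import numpy as np
 from scipy.optimize import minimize

 def f(z):
     return 14.0 * np.exp(3.2 * z[0] + 4.0 * z[1]) + np.exp(2.0 * z[0] + 7.0 * z[1])

 res = minimize(f, x0=np.zeros(2), method="L-BFGS-B", bounds=[(-2.0, 2.0), (-2.0, 2.0)])
 print(res.x, res.fun)   # all exponents are positive, so the minimizer sits at the lower bounds
 print(np.exp(res.x))    # recover the original variables via t_i = exp(z_i)

In a full model, the bounds on z would instead come from the logarithms of the (pre-scaled) bounds on t, as described in the Formulation section.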

Example 1

Original problem:

                  \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 
   
            s.t.  t_1 \ge 0 
                  t_2 \ge 0 


After exponential transformation, this problem becomes:

                  \min f(z_1, z_2)=14\exp\left(3.2z_1+4z_2\right) + \exp\left(2z_1+7z_2\right)
            s.t.  z_1, z_2 \in \mathbb{R}

(The original constraints  t_1 \ge 0, t_2 \ge 0 are satisfied automatically, since  t_i = e^{z_i} > 0 for every real  z_i ; if finite bounds  t_i^L \le t_i \le t_i^U are imposed, they carry over as  \ln t_i^L \le z_i \le \ln t_i^U .)

Conclusions

Content under conclusions

References

1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of Chemical Processes, McGraw-Hill, 2001.

2. J. Nocedal, S.J. Wright, Numerical Optimization, Springer, 2006.

3. J.F. Tsai, M.H. Lin, An Efficient Global Approach for Posynomial Geometric Programming Problems, INFORMS Journal on Computing, 23 (3), pp. 483-492, 2011.

4. C.D. Maranas, C.A. Floudas, Global Optimization in Generalized Geometric Programming, Computers & Chemical Engineering, 21 (4), pp. 351-369, 1997.