Lagrangean duality

Revision as of 12:17, 7 June 2015

Author: Hannah Seo (ChE 345, Spring 2015)
Steward: Dajun Yue, Fengqi You

Lagrangian duality theory refers to a way to bound or solve an optimization problem (the primal problem) by studying a related optimization problem (the dual problem). More specifically, the solution to the dual problem provides a lower bound on the optimal value of the primal problem, and in some cases yields the same optimal value as the primal problem, which is useful when the primal problem is harder to solve than the dual problem.


General Lagrange Dual Problem

Consider the following optimization problem in standard form:

\begin{align}
\text{minimize} & ~~ f_0 (x)\\
\text{s.t.} & ~~ f_i (x)\le 0 \quad i = 1, 2, \dots, m \\
& ~~ h_i(x) = 0 \quad i=1, 2, \dots, p \\
\end{align}

The Lagrangian is given by:  L(x,\lambda,\nu)= f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)

where  \lambda_i are Lagrange multipliers associated with  f_i (x) , and  \nu_i are Lagrange multipliers associated with  h_i(x).

Then the Lagrange dual function is  g(\lambda, \nu) = \inf_x L(x, \lambda, \nu) , the infimum of the Lagrangian over x.

The Lagrange dual problem is:

\begin{align}
\text{maximize} & ~~ g(\lambda, \nu)\\
\text{s.t.} & ~~ \lambda \ge 0 \\
\end{align}
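To make the construction concrete, here is a small worked example (a toy problem added for illustration, not from the original text): minimize x^2 subject to 1 - x \le 0, whose primal optimum is p^* = 1 at x^* = 1.

\begin{align}
L(x,\lambda) &= x^2 + \lambda (1 - x) \\
g(\lambda) &= \inf_x L(x,\lambda) = \lambda - \frac{\lambda^2}{4} \quad \text{(the minimizer is } x = \lambda/2\text{)} \\
\max_{\lambda \ge 0} g(\lambda) &= g(2) = 1 = p^*
\end{align}

Here the dual optimum equals the primal optimum, as expected for a convex problem with a strictly feasible point.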

Theorems

Weak and Strong Duality Theorem

The weak duality theorem says that for the general problem, the optimal value of the Lagrange dual problem ( d^*) and the optimal value of the primal problem ( p^*) are related by:
 d^* \le p^*

This means that the dual problem provides a lower bound on the optimal value of the primal problem. The difference between the two optimal values, p^* - d^*, is called the optimal duality gap.
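As a quick numeric sanity check of weak duality (a hypothetical toy problem added for illustration; the data is not from the original text), the dual function of minimize (x-3)^2 subject to x \le 1 can be derived in closed form and sampled over dual-feasible multipliers; every sampled value of g stays below p^*:

```python
import numpy as np

# Toy problem (hypothetical example): minimize (x - 3)^2  s.t.  x - 1 <= 0.
# The primal optimum is p* = 4, attained at x* = 1.
# Lagrangian: L(x, lam) = (x - 3)^2 + lam * (x - 1); minimizing over x
# gives x = 3 - lam/2, so the dual function is g(lam) = 2*lam - lam**2 / 4.
def g(lam):
    return 2.0 * lam - lam**2 / 4.0

p_star = 4.0
lams = np.linspace(0.0, 10.0, 1001)   # sample of dual-feasible multipliers lam >= 0

# Weak duality: g(lam) <= p* for every lam >= 0.
assert np.all(g(lams) <= p_star + 1e-9)

d_star = g(lams).max()                # best lower bound found on the sampled grid
print(d_star)                         # 4.0, attained at lam = 4: zero gap here
```

In this convex example the best lower bound actually matches p^*; for nonconvex problems the gap can be strictly positive.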

The strong duality theorem says that for convex problems that satisfy certain constraint qualifications, the optimal duality gap is zero, meaning that the optimal values of the primal and dual problems are the same. A common sufficient condition is Slater's constraint qualification: the convex problem must be strictly feasible, i.e., there must exist a point x with f_i(x) < 0 for all i = 1, ..., m that also satisfies the equality constraints.

Complementary Slackness

Assuming that strong duality holds, x^* is an optimal solution of the primal problem, and (\lambda^*, \nu^*) is an optimal solution of the dual problem, then
 \lambda_i^* f_i (x^*) = 0 \quad \text{for}\ i = 1, \dots, m

That is, for each inequality constraint, either the optimal multiplier is zero or the constraint is active at the optimum.
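For example (a toy problem added for illustration, not from the original text), for minimize x^2 subject to 1 - x \le 0 the primal and dual optima are x^* = 1 and \lambda^* = 2, and indeed

\lambda_1^* f_1(x^*) = 2 \, (1 - 1) = 0

Here the constraint is active (f_1(x^*) = 0), which is consistent with a strictly positive multiplier.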

Special Case: Linear Programming Problem

For linear programming problems, the construction of the dual problem is much simpler. Feasible solutions of the primal problem yield bounds on the optimal objective value of the dual problem, and vice versa.

Given the primal problem:

\begin{align}
\text{maximize} & ~~ \sum_{j=1}^n c_j x_j \\
\text{s.t.} & ~~ \sum_{j=1}^n a_{ij} x_j \le b_i \quad i = 1, 2, \dots, m \\
& ~~ x_j \ge 0 \quad j = 1, 2, \dots, n \\
\end{align}

the corresponding dual problem is:

\begin{align}
\text{minimize} & ~~ \sum_{i=1}^m b_i y_i \\
\text{s.t.} & ~~ \sum_{i=1}^m a_{ij} y_i \ge c_j \quad j = 1, 2, \dots, n \\
& ~~ y_i \ge 0 \quad i = 1, 2, \dots, m \\
\end{align}
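This primal–dual pairing can be sketched in code (a hypothetical numeric example using SciPy's linprog; the specific two-variable, three-constraint data is an assumed toy instance, not from the original text). Both problems are linear programs, so strong duality makes their optimal values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Toy primal (assumed data): maximize 3*x1 + 5*x2
#   s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate the objective for the primal max problem.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b^T y  s.t.  A^T y >= c,  y >= 0
# (rewritten as -A^T y <= -c to fit linprog's <= convention).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)  # both approximately 36: equal by strong LP duality
```

The dual variables y_i returned here can be read as shadow prices on the primal resource constraints.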

Applications

Real world applications that utilize Lagrangean duality theory include resource allocation problems, models of electrical networks, and models of economic markets.

In the resource allocation problem, the primal problem is the problem as seen by the production manager, while the dual problem is the problem as seen by the supplier. Both problems are linear programming problems, so the strong duality theorem applies. Thus, using the solutions to the two problems, the production levels and prices can be set to establish equilibrium.

For models of electrical networks, the current flows can be represented as the primal variables and the voltage differences can be seen as the dual variables. This representation allows for the optimization of electrical networks.

For models of economic markets, the production and consumption levels are the primal variables and the prices of goods and services can be seen as the dual variables.

Conclusion

Lagrangean duality is useful for problems that are hard to solve directly. By transforming the original problem into its dual, one obtains either a bound on the optimal value of the original problem or, when strong duality holds, the optimal value itself.
