Lagrangean duality

Author: Hannah Seo (ChE 345, Spring 2015)
Steward: Dajun Yue, Fengqi You

Lagrangian duality theory is a way to bound or solve an optimization problem (the primal problem) by way of a related optimization problem (the dual problem). More specifically, the solution to the dual problem provides a bound on the optimal value of the primal problem, and under certain conditions the two optimal values coincide, which is useful when the primal problem is harder to solve than the dual. For the optimal values to coincide, the problem must be convex and satisfy a constraint qualification. A fundamental result of duality theory is that the dual of the dual of a linear program is the original primal linear program.

History

John von Neumann is said to have first proposed the duality theorem after hearing George Dantzig present the linear programming problem. In 1946, Dantzig took a job as the Mathematical Adviser to the U.S. Air Force Comptroller. During his tenure, he sought to generalize Wassily Leontief's 1932 Interindustry Input-Output Model of the American Economy, and in doing so he formulated a model for the planning problem of assigning 70 men to 70 jobs. In seeking solution techniques, Dantzig visited von Neumann on October 3, 1947 at the Institute for Advanced Study in Princeton. Prompted by an impatient von Neumann to get to the point, Dantzig outlined the problem in one minute, after which von Neumann observed that two equivalent problems were involved and that his work on game theory was an analog of this duality theory. In 1948, Albert W. Tucker and his group began extensive work on game theory, nonlinear programming, and duality theory, becoming the leading researchers in this field [1].

General Lagrange Dual Problem

Consider the following optimization problem in standard form [2]:
\begin{align} \text{minimize} & ~~ f_0 (x)\\ \text{s.t.} & ~~ f_i (x)\le 0 \quad i = 1, 2, ..., m \\ & ~~ h_i(x) = 0 \quad i=1, 2,...,p \\ \end{align}

The Lagrangian is given by:
$L(x,\lambda,\nu)= f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)$

where $\lambda_i$ are Lagrange multipliers associated with $f_i (x)$, and $\nu_i$ are Lagrange multipliers associated with $h_i(x)$.

Then the Lagrange dual function is $g(\lambda, \nu) = \inf_x L(x, \lambda, \nu)$ and the Lagrange dual problem is:
\begin{align} \text{maximize} & ~~ g(\lambda, \nu)\\ \text{s.t.} & ~~ \lambda \ge 0. \\ \end{align}
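To make these definitions concrete, consider a brief worked instance (our own illustration, not drawn from the references): minimize $f_0(x) = x^2$ subject to the single constraint $f_1(x) = 1 - x \le 0$. Then
\begin{align} L(x,\lambda) &= x^2 + \lambda(1-x), \\ g(\lambda) &= \inf_x L(x,\lambda) = \lambda - \frac{\lambda^2}{4}, \end{align}
where the infimum is attained at $x = \lambda/2$. Maximizing $g(\lambda)$ over $\lambda \ge 0$ gives $\lambda^* = 2$ and $d^* = 1$, which equals the primal optimum $p^* = 1$ attained at $x^* = 1$.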

Theorems

Weak and Strong Duality Theorem

Visual illustrating the weak duality theorem (top) and the strong duality theorem (bottom) [5]

The weak duality theorem says that for the general problem, the optimal value of the Lagrange dual problem ($d^*$) and the optimal value of the primal minimization problem ($p^*$) are related by:
$d^* \le p^*$

This means that the dual problem provides a lower bound for the primal problem. This inequality always holds, even for nonconvex problems. The difference between the two optimal values is called the optimal duality gap.

The strong duality theorem says that for convex problems satisfying certain conditions, the optimal duality gap is zero, meaning that the optimal values of the primal and dual problems are the same. To guarantee strong duality for a convex problem, Slater's constraint qualification must be met: the problem must be strictly feasible, i.e., there must exist a point at which all inequality constraints hold strictly [3,4].

Complementary Slackness

Assume that strong duality holds, $x^*$ is optimal for the primal problem, and $(\lambda^*, \nu^*)$ is optimal for the dual problem. Then
$\lambda_i^* f_i (x^*) = 0 \quad \text{for}\ i = 1,...,m$ [3,4].
That is, the $i$th optimal Lagrange multiplier is zero unless the $i$th inequality constraint is active at the optimum.

Special Case: Linear Programming Problem

For linear programming problems, the construction of the dual problem is much simpler. The feasible solutions of the primal problem yield bounds on the optimal objective function value of the dual problem, and vice versa [5].

Given the primal problem:
\begin{align} \text{maximize} & ~~ \sum_{j=1}^n c_j x_j\\ \text{s.t.} & ~~ \sum_{j=1}^n a_{ij}x_j \le b_i \quad i = 1, 2, ..., m \\ & ~~ x_j \ge 0 \quad j = 1, 2, ...,n \\ \end{align}

the corresponding dual problem is:
\begin{align} \text{minimize} & ~~ \sum_{i=1}^m b_i y_i\\ \text{s.t.} & ~~ \sum_{i=1}^m a_{ij} y_i \ge c_j\quad j = 1, 2, ..., n \\ & ~~ y_i \ge 0 \quad i = 1, 2, ...,m \\ \end{align}
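The construction above is purely mechanical: transpose the constraint matrix, swap the roles of $b$ and $c$, and flip the direction of optimization and of the inequalities. A minimal sketch in Python (the helper name `lp_dual` is our own):

```python
def lp_dual(c, A, b):
    """Dual of the LP  max c^T x  s.t.  A x <= b, x >= 0.

    Returns (b, A_T, c) describing  min b^T y  s.t.  A^T y >= c, y >= 0.
    """
    # Transpose A: primal constraint i becomes dual variable y_i,
    # primal variable x_j becomes dual constraint j.
    A_T = [list(col) for col in zip(*A)]
    return list(b), A_T, list(c)

# Example data: max 2x1 + 3x2  s.t.  x1 + x2 <= 4,  x1 + 2x2 <= 6,  x >= 0.
obj, lhs, rhs = lp_dual(c=[2, 3], A=[[1, 1], [1, 2]], b=[4, 6])
# Dual read-off: min 4y1 + 6y2  s.t.  y1 + y2 >= 2,  y1 + 2y2 >= 3,  y >= 0.
```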

Illustrative Example

For the following linear, convex primal problem,
\begin{align} \text{max} & ~~ 2x_1 + 3x_2\\ \text{s.t.} & ~~ x_1 + x_2 \le 4\\ & ~~ x_1 + 2x_2 \le 6\\ & ~~ x_1 \ge 0\\ & ~~ x_2 \ge 0\\ \end{align}

the dual problem is:
\begin{align} \text{min} & ~~ 4y_1 + 6y_2\\ \text{s.t.} & ~~ y_1 + y_2 \ge 2\\ & ~~ y_1 + 2y_2 \ge 3\\ & ~~ y_1 \ge 0\\ & ~~ y_2 \ge 0.\\ \end{align}
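As a numerical check (a sketch assuming SciPy is installed), we can solve the primal and the dual, written in min form, and confirm that their optimal values coincide, as strong duality guarantees for this LP:

```python
from scipy.optimize import linprog

# Primal: max 2x1 + 3x2  s.t.  x1 + x2 <= 4,  x1 + 2x2 <= 6,  x >= 0.
# linprog minimizes, so negate the objective coefficients.
primal = linprog(c=[-2, -3], A_ub=[[1, 1], [1, 2]], b_ub=[4, 6],
                 bounds=[(0, None)] * 2)

# Dual: min 4y1 + 6y2  s.t.  y1 + y2 >= 2,  y1 + 2y2 >= 3,  y >= 0.
# linprog only accepts <= constraints, so negate both sides of each >=.
dual = linprog(c=[4, 6], A_ub=[[-1, -1], [-1, -2]], b_ub=[-2, -3],
               bounds=[(0, None)] * 2)

# Both optimal values equal 10 (up to solver tolerance): zero duality gap.
print(-primal.fun, dual.fun)
```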

Economic Interpretation of the Dual Problem

Consider a primal problem in which the objective is to maximize profit from the production of some chemical, subject to constraints on the availability of raw materials. The dual problem arises if an outside buyer wants to purchase all of the raw materials: the buyer wants to minimize the total cost of the purchase, which becomes the dual objective. The dual problem thus determines the price of each raw material that the owner is willing to accept; these prices must be high enough that selling the materials is at least as profitable as using them to produce the chemical. The dual variables, often referred to as shadow prices, therefore measure the value of the resources available to the decision maker [6].

Significance

Nonlinear and linear duality theory is useful in several aspects of optimization [6]. For example, a dual solution provides a bound on the solution to the primal problem. For a problem that satisfies the strong duality theorem (explained above), the dual solution can certify optimality. The dual solution also provides sensitivity information, indicating whether the primal constraint corresponding to a dual variable is active and how strongly it affects the optimal value. Furthermore, the optimal solution to a dual problem is a vector of Karush-Kuhn-Tucker (KKT) multipliers, which appear in the KKT conditions used to verify that a solution to a nonlinear programming problem is optimal.

Real world applications that utilize Lagrangian duality theory include resource allocation problems, models of electrical networks, and models of economic markets [6]. In the resource allocation problem, the primal problem is the problem as seen by the production manager, while the dual problem is the problem as seen by the supplier. Both problems are linear programming problems, so the strong duality theorem applies. Thus, using the solutions to the two problems, the production levels and prices can be set to establish equilibrium. For models of electrical networks, the current flows can be represented as the primal variables and the voltage differences can be seen as the dual variables. This representation allows for the optimization of electrical networks. For models of economic markets, the production and consumption levels are the primal variables and the prices of goods and services can be seen as the dual variables.

Conclusion

Lagrangian duality is useful for problems that are hard to solve directly. By transforming the original problem and viewing it from a different perspective, one obtains either a bound on the original problem's optimal value or, under strong duality, the optimal value itself.

References

1. Dantzig, G.B. Reminiscences About the Origins of Linear Programming, Stanford University, Stanford, CA, 1981.

2. Chiang, M. Optimization of Communication Systems: Convex Optimization and Lagrange Duality. Princeton University, Princeton, NJ, 2007.

3. Boyd, S.; Vandenberghe, L. Convex Optimization, 1st ed.; Cambridge University Press: New York, 2004.

4. Ghaoui, L. Convex Optimization and Applications: Weak Duality. University of California, Berkeley, Berkeley, CA, 2012.

5. Vanderbei, R. Linear Programming, 4th ed.; Springer: New York, 2014.

6. Freund, R. Applied Lagrange Duality for Constrained Optimization. Massachusetts Institute of Technology, Cambridge, MA, 2004.