# Interior-point method for LP


## Revision as of 17:06, 25 May 2014

Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014)

Steward: Dajun Yue, Fengqi You

Date Presented: May 25, 2014


# Introduction and Uses

Interior point methods are a class of algorithms used to solve linear and nonlinear convex optimization problems that contain inequality constraints. The LP interior-point method relies on having a linear programming model in which the objective function and all constraints are continuous and twice continuously differentiable. The problem is generally assumed to be strictly feasible and to have a dual optimum satisfying the Karush-Kuhn-Tucker (KKT) conditions described below. Assuming a solution exists, the problem is solved either by iteratively applying Newton's method to the KKT conditions, or by applying Newton's method to a version of the original problem in which the inequality constraints are replaced by equality constraints.
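For concreteness, the KKT conditions referenced above can be stated for a linear program in standard form (this statement follows the cited textbooks rather than the original article). For

<math>\text{minimize } c^Tx \quad \text{subject to } Ax=b,\; x\ge 0,</math>

the KKT conditions require primal feasibility, dual feasibility, and complementary slackness:

<math>Ax = b,\quad x \ge 0,\quad A^T\lambda + s = c,\quad s \ge 0,\quad x_i s_i = 0 \;\; (i=1,\dots,n).</math>

Interior-point methods relax complementarity to <math>x_i s_i = \tau > 0</math> and drive <math>\tau \to 0</math>, which keeps the iterates strictly inside the region <math>x>0,\ s>0</math>.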

Interior point methods came about from a desire for algorithms with better theoretical guarantees than the simplex method. While the two strategies are similar in a few ways, interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, whereas the simplex method usually requires many more, but far cheaper, iterations. From a geometric standpoint, interior point methods approach a solution from the interior (or, in infeasible variants, the exterior) of the feasible region, but are never on its boundary.[1]

There are two important interior point algorithms: the barrier method and the primal-dual interior-point method. The primal-dual method is usually preferred due to its efficiency and accuracy. The major differences between the two are as follows: the primal-dual method has only one iteration loop, with no distinction between the outer and inner iterations of the barrier method, and its primal and dual iterates do not have to be feasible.[3]

# Barrier Method Algorithm

Given strictly feasible <math>x</math>, <math>t:=t^0>0</math>, <math>\mu >1</math>, and tolerance <math>\epsilon>0</math>:

**repeat**

1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi</math> subject to <math>Ax = b</math>, starting at <math>x</math>.

2. Update <math>x:=x^*(t)</math>.

3. Quit if <math>m/t < \epsilon</math>, else

4. Increase <math>t := \mu t</math>.

Here <math>\phi</math> is the logarithmic barrier for the inequality constraints and <math>m</math> is their number, so <math>m/t</math> bounds the duality gap at <math>x^*(t)</math>.[3]
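The steps above can be sketched in code. The following is a minimal pure-Python illustration (not from the original article) on a tiny LP of my own choosing: minimize x + y subject to x ≥ 0, y ≥ 0, x + y ≤ 1, whose optimum is (0, 0). The inner centering step uses damped Newton iterations on t(x + y) + φ, where φ is the log barrier for the three inequalities; the starting point, t⁰ = 1, μ = 10, and ε = 1e-8 are illustrative choices.

```python
def barrier_lp():
    # Tiny LP: minimize x + y  s.t.  x >= 0, y >= 0, x + y <= 1 (optimum at (0, 0)).
    # Log barrier: phi(x, y) = -log(x) - log(y) - log(1 - x - y)
    x, y = 0.25, 0.25                      # strictly feasible starting point
    t, mu, eps, m = 1.0, 10.0, 1e-8, 3.0   # m = number of inequality constraints
    while m / t >= eps:                    # outer loop: stop when gap bound m/t < eps
        for _ in range(50):                # inner loop: centering by damped Newton
            s = 1.0 - x - y
            gx = t - 1.0 / x + 1.0 / s     # gradient of t*(x + y) + phi
            gy = t - 1.0 / y + 1.0 / s
            hxx = 1.0 / x**2 + 1.0 / s**2  # Hessian entries (2x2, symmetric)
            hyy = 1.0 / y**2 + 1.0 / s**2
            hxy = 1.0 / s**2
            det = hxx * hyy - hxy * hxy
            dx = -(hyy * gx - hxy * gy) / det   # Newton step: -H^{-1} g
            dy = -(hxx * gy - hxy * gx) / det
            a = 1.0                        # damp the step to stay strictly feasible
            while x + a * dx <= 0 or y + a * dy <= 0 or x + a * dx + y + a * dy >= 1:
                a *= 0.5
            x, y = x + a * dx, y + a * dy
            if gx * gx + gy * gy < 1e-12:  # centered closely enough
                break
        t *= mu                            # increase t and re-center
    return x, y

x_opt, y_opt = barrier_lp()  # approaches the optimum (0, 0) from the interior
```

Note that every iterate stays strictly inside the feasible region, as the method's name suggests: the step is halved until all three inequalities hold strictly.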

# Primal-Dual IP Algorithm

The primal-dual interior-point method can most easily be understood through the simplest NLP problem: one with only inequality constraints. Consider the following:

minimize <math>f(x)</math> s.t. <math>c_i(x) \ge 0,\; i=1,\dots,m</math>.
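To make the single-loop structure concrete, here is a minimal pure-Python sketch (not from the original article) of an infeasible-start primal-dual iteration, applied to a standard-form LP rather than the general NLP above: minimize x1 + x2 subject to x1 + x2 + x3 = 1, x ≥ 0, whose optimum is x = (0, 0, 1). Each iteration takes one Newton step on the perturbed KKT system; the instance, the centering parameter σ = 0.1, and the starting point (which is deliberately not dual-feasible) are all illustrative choices.

```python
def gauss_solve(M, rhs):
    """Solve M u = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for k in range(c, n + 1):
                T[r][k] -= f * T[c][k]
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        u[i] = (T[i][n] - sum(T[i][j] * u[j] for j in range(i + 1, n))) / T[i][i]
    return u

def primal_dual_lp():
    # Tiny LP: minimize x1 + x2  s.t.  x1 + x2 + x3 = 1,  x >= 0.
    A = [1.0, 1.0, 1.0]        # single equality constraint row
    b = 1.0
    c = [1.0, 1.0, 0.0]
    x = [1.0 / 3] * 3          # strictly positive, primal-feasible start
    lam = 0.0                  # dual variable for Ax = b
    s = [1.0] * 3              # dual slacks: NOT dual-feasible initially
    sigma = 0.1                # centering parameter
    for _ in range(60):        # single loop: no outer/inner distinction
        mu = sum(xi * si for xi, si in zip(x, s)) / 3
        if mu < 1e-12:
            break
        rb = sum(A[i] * x[i] for i in range(3)) - b        # primal residual
        rc = [A[i] * lam + s[i] - c[i] for i in range(3)]  # dual residual
        rxs = [x[i] * s[i] - sigma * mu for i in range(3)] # complementarity
        # Newton system in u = (dx1, dx2, dx3, dlam, ds1, ds2, ds3)
        M = [
            [A[0], A[1], A[2], 0, 0, 0, 0],                # A dx           = -rb
            [0, 0, 0, A[0], 1, 0, 0],                      # A^T dlam + ds  = -rc
            [0, 0, 0, A[1], 0, 1, 0],
            [0, 0, 0, A[2], 0, 0, 1],
            [s[0], 0, 0, 0, x[0], 0, 0],                   # S dx + X ds    = -rxs
            [0, s[1], 0, 0, 0, x[1], 0],
            [0, 0, s[2], 0, 0, 0, x[2]],
        ]
        rhs = [-rb, -rc[0], -rc[1], -rc[2], -rxs[0], -rxs[1], -rxs[2]]
        u = gauss_solve(M, rhs)
        dx, dlam, ds = u[0:3], u[3], u[4:7]
        alpha = 1.0            # step length keeping x > 0 and s > 0
        for i in range(3):
            if dx[i] < 0:
                alpha = min(alpha, -0.995 * x[i] / dx[i])
            if ds[i] < 0:
                alpha = min(alpha, -0.995 * s[i] / ds[i])
        x = [x[i] + alpha * dx[i] for i in range(3)]
        lam += alpha * dlam
        s = [s[i] + alpha * ds[i] for i in range(3)]
    return x, lam, s

x_opt, lam_opt, s_opt = primal_dual_lp()  # x_opt approaches (0, 0, 1)
```

Note how the dual residual rc starts nonzero: the dual iterates begin infeasible and are driven toward feasibility as the Newton steps proceed, illustrating the point above that primal-dual iterates need not be feasible.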

# Example

# Conclusion

## Sources

1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.

2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999.

3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009.