Interior-point method for LP<br />
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a class of algorithms used to solve both linear and nonlinear convex optimization problems that contain inequality constraints. The LP interior-point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, the problem is assumed to be strictly feasible and to have a dual optimal solution satisfying the Karush-Kuhn-Tucker (KKT) conditions described below. Assuming a solution exists, the problem is solved by applying Newton's method either to the KKT conditions directly or to a sequence of equality-constrained approximations of the original problem.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical properties than the simplex method. While the two strategies are similar in a few ways, interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method usually requires many more, but inexpensive, iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and the primal-dual IP method. The primal-dual method is usually preferred for its efficiency and accuracy. The major differences between the two are as follows: the primal-dual method has only a single loop, since there is no distinction between outer and inner iterations as there is in the barrier method, and in the primal-dual method the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there are a few approximations that must be made. Given a problem of the form<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to include the inequalities implicitly in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. The problem then changes to<br><br />
minimize <math> f_0(x) + \sum_{i=1}^m I_-(f_i(x))</math><br><br />
s.t. <math> Ax=b</math><br><br />
where <math> I_-(x)=\begin{cases}0 & x \leq 0\\ \infty & x > 0\end{cases}</math><br><br><br />
This objective, however, is not differentiable. A modification can be made by approximating <math>I_-(x)</math> with the logarithmic function <math>-(1/t)\log(-x)</math>, which approaches infinity as x approaches 0 from below, as we want, and makes all functions twice continuously differentiable. The variable t sets the level of accuracy of the approximation: the larger t is, the closer the approximation is to the true indicator. We define<br><br />
<math>\phi(x)=-\sum_{i=1}^m\log(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
s.t. <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called the central path: a sequence of points that all satisfy the equality constraints <math>Ax=b</math> from the original problem and give increasingly optimal values of the objective function, while keeping the inequality constraints <math>f_i(x)</math> strictly satisfied.<br />
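As a quick numerical illustration (the single constraint <math>f_1(x) = x \leq 0</math> below is an assumed toy case, not from the article), the barrier term <math>\tfrac{1}{t}\phi(x)</math> deviates from the true indicator, which is zero on feasible points, by less and less as t grows:

```python
import numpy as np

# Toy case: one constraint f_1(x) = x <= 0, so phi(x) = -log(-x).
# On strictly feasible points the exact indicator I_-(x) is 0, so the
# size of (1/t)*phi(x) is exactly the approximation error.
xs = np.array([-2.0, -1.0, -0.5, -0.1, -0.01])
for t in (1.0, 10.0, 100.0):
    err = np.max(np.abs(-np.log(-xs) / t))
    print(t, err)   # the worst-case error on this grid shrinks like 1/t
```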
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon>0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. <math> 1 + x_1 - x_2^2 \geq 0 </math><br />
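A sketch of the barrier algorithm applied to this example. The inequality direction is an assumption here (the original leaves it unstated); taking the constraint as <math>1 + x_1 - x_2^2 \geq 0</math> gives a bounded problem whose analytic optimum is <math>x^*=(0,1)</math> with objective value <math>-2</math>. The centering step minimizes <math>tf_0 + \phi</math> by damped Newton:

```python
import numpy as np

def centering_newton(x, t, iters=50):
    """Minimize t*f0(x) + phi(x), with f0 = x1 - 2*x2 and
    phi = -log(1 + x1 - x2^2), by Newton's method."""
    for _ in range(iters):
        x1, x2 = x
        slack = 1.0 + x1 - x2 ** 2
        # gradient and Hessian of t*f0 + phi
        g = np.array([t - 1.0 / slack, -2.0 * t + 2.0 * x2 / slack])
        H = np.array([[1.0 / slack**2, -2.0 * x2 / slack**2],
                      [-2.0 * x2 / slack**2,
                       2.0 / slack + 4.0 * x2**2 / slack**2]])
        if np.linalg.norm(g) < 1e-10:
            break
        step = np.linalg.solve(H, -g)
        alpha = 1.0
        # backtrack so the iterate stays strictly feasible
        while 1.0 + (x + alpha * step)[0] - (x + alpha * step)[1] ** 2 <= 0:
            alpha *= 0.5
        x = x + alpha * step
    return x

x = np.array([1.0, 0.0])            # strictly feasible start (slack = 2)
t, mu, eps, m = 1.0, 10.0, 1e-8, 1  # m = number of inequality constraints
while m / t > eps:
    x = centering_newton(x, t)      # steps 1-2: compute and update x*(t)
    t *= mu                         # step 4: increase t
print(x)                            # approaches the optimum (0, 1)
```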
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method is most easily understood through the simplest NLP problem: one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativity constraints:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logarithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>g(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set the derivatives of the Lagrangian with respect to <math>x</math>, <math>s</math>, and <math>y</math> equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu S^{-1} e + y = 0</math>, where <math>S=\operatorname{diag}(s)</math> and <math>e</math> is a vector of ones<br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging the second equation (multiplying through by <math>S</math>) gives the perturbed complementarity condition<br />
<br />
<math>SYe = \mu e</math>, where <math>Y=\operatorname{diag}(y)</math><br />
<br />
Newton's method is then applied to this system of equations, with <math>\mu</math> driven toward zero as the iterates converge.<br />
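The stationarity conditions above can be checked symbolically for the single-constraint case (m = 1). This sketch uses SymPy with undefined functions f and g standing in for the article's objective and constraint:

```python
import sympy as sp

# Single-constraint (m = 1) sanity check of the stationarity conditions.
x = sp.symbols('x')
s, y, mu = sp.symbols('s y mu', positive=True)
f = sp.Function('f')(x)   # objective
g = sp.Function('g')(x)   # inequality constraint, g(x) >= 0

# Barrier Lagrangian: f(x) - mu*log(s) - y*(g(x) - s)
L = f - mu * sp.log(s) - y * (g - s)

print(sp.diff(L, x))   # grad f - y * grad g
print(sp.diff(L, s))   # the -mu*S^{-1}e + y condition for m = 1
print(sp.diff(L, y))   # -(g(x) - s)
```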
=Example=<br />
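A minimal numerical sketch of the primal-dual iteration, on an assumed toy problem (minimize <math>f(x)=(x-2)^2</math> subject to <math>g(x)=x \geq 0</math>, so n = m = 1; the problem itself is illustrative, not from the article). Newton's method is applied to the system <math>\nabla f - y\nabla g = 0</math>, <math>sy = \mu</math>, <math>g(x) - s = 0</math>, with <math>\mu</math> reduced each iteration and a fraction-to-the-boundary rule keeping s and y positive:

```python
import numpy as np

# Perturbed KKT system for f(x) = (x-2)^2, g(x) = x:
#   2(x - 2) - y = 0,   s*y = mu,   x - s = 0
x, s, y = 1.0, 1.0, 1.0        # strictly positive start
mu = 1.0
for _ in range(30):
    F = np.array([2.0 * (x - 2.0) - y, s * y - mu, x - s])
    J = np.array([[2.0, 0.0, -1.0],
                  [0.0, y, s],
                  [1.0, -1.0, 0.0]])
    dx, ds, dy = np.linalg.solve(J, -F)
    # fraction-to-the-boundary: keep s and y strictly positive
    alpha = 1.0
    for v, dv in ((s, ds), (y, dy)):
        if dv < 0:
            alpha = min(alpha, -0.99 * v / dv)
    x, s, y = x + alpha * dx, s + alpha * ds, y + alpha * dy
    mu *= 0.2                  # drive the barrier parameter toward zero
print(x, s, y)                 # x approaches 2, the constrained optimum
```

Since the constraint is inactive at the optimum, the multiplier y is driven to zero along with <math>\mu</math>.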
<br />
=Conclusion=<br />
The interior point method replaces the inequality constraints of a linear programming model with a barrier function that confines iterates to the interior of the feasible region. These approximations are useful when the problem has constraints that are discontinuous or otherwise troublesome, but can be modified so that a solver can handle them. Once the problem is formulated in the correct way, Newton's method is used to iteratively approach more and more optimal solutions within the feasible region.<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical Optimization (Chp 14). Springer, 1999.<br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-26T00:37:28Z<p>Voitek: /* Conclusion */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. 1 + x_1 - (x_2)^2<br />
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logrithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>h(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> </math><br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-26T00:16:44Z<p>Voitek: /* Example */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. 1 + x_1 - (x_2)^2<br />
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logrithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>h(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> </math><br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:43:24Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:38:37Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:37:07Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
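Since the article's focus is LP, one way to make the idea concrete is a minimal path-following sketch for the standard-form problem <math>\min\, c^Tx</math> s.t. <math>Ax=b,\ x\geq 0</math>: each iteration takes one Newton step on the perturbed KKT conditions and damps it to keep <math>x</math> and the dual slacks <math>s</math> strictly positive. The function name, starting point, and fixed centering parameter <math>\sigma</math> are illustrative assumptions, not the article's prescription:

```python
import numpy as np

def primal_dual_lp(A, b, c, tol=1e-8, max_iter=200):
    """Primal-dual path-following sketch for:  min c^T x  s.t.  A x = b, x >= 0."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(max_iter):
        mu = x @ s / n                   # duality measure
        rb = A @ x - b                   # primal residual
        rc = A.T @ y + s - c             # dual residual
        if mu < tol and np.linalg.norm(rb) < tol and np.linalg.norm(rc) < tol:
            break
        sigma = 0.1                      # fixed centering parameter
        rxs = x * s - sigma * mu         # perturbed complementarity residual
        d = x / s
        # eliminate dx, ds -> normal equations in dy
        M = (A * d) @ A.T
        rhs = -rb - A @ (d * (rc - rxs / x))
        dy = np.linalg.solve(M, rhs)
        dx = d * (A.T @ dy + rc - rxs / x)
        ds = -(rxs + s * dx) / x
        # fraction-to-boundary rule keeps x and s strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x

# toy LP:  max x1 + 2*x2  s.t.  x1 + x2 <= 3, x1 <= 2, x >= 0,
# in standard form with slacks x3, x4; optimum x = (0, 3, 0, 2), value -6
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([3.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(primal_dual_lp(A, b, c))           # approximately [0, 3, 0, 2]
```

Note that, unlike the barrier method, neither iterate is required to satisfy its constraints along the way: the residuals <math>r_b</math> and <math>r_c</math> are simply driven to zero together with the duality measure.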
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009.</div>
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then change to<br><br />
<br />
<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:07:43Z<p>Voitek: /* Barrier Method Algorith */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:07:35Z<p>Voitek: /* Introduction and Uses */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorith=<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:06:03Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:05:29Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
Given strictly feasible x, t := t<br />
(0) > 0, µ > 1, tolerance ǫ > 0.<br />
<br />
'''repeat'''<br />
<br />
1. Centering step.<br />
Compute x⋆(t) by minimizing tf0 + φ, subject to Ax = b, starting at x.<br />
<br />
2. Update. x := x⋆(t).<br />
<br />
3. Stopping criterion. quit if m/t < ǫ.<br />
<br />
4. Increase t. t := µt.<br />
<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T22:02:28Z<p>Voitek: /* Barrier Method Algorithm */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method Algorithm=<br />
Given strictly feasible x, t := t<br />
(0) > 0, µ > 1, tolerance ǫ > 0.<br />
<br />
'''repeat'''<br />
<br />
1. Centering step.<br />
Compute x⋆(t) by minimizing tf0 + φ, subject to Ax = b, starting at x.<br />
<br />
2. Update. x := x⋆(t).<br />
<br />
3. Stopping criterion. quit if m/t < ǫ.<br />
<br />
4. Increase t. t := µt.<br />
<br />
Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math><br />
<br />
=Primal-Dual IP Algorithm=<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
s.t. <math>g(x) \ge 0~~ x \in \mathbb{R}^n, g(x) \in \mathbb{R}^m~~~~~~(1)</math>.<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T21:16:43Z<p>Voitek: </p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.<br />
<br />
Two of the most important interior point algorithms are the barrier method and the primal-dual interior point method.<br />
<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T21:12:57Z<p>Voitek: /* Introduction */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.<br />
<br />
Two of the most important interior point algorithms are the barrier method and the primal-dual interior point method.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:33:33Z<p>Voitek: /* Introduction */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. The problem is solved (assuming there IS a solution) either by iteratively solving for Karun-Kush-Tucker (KKT) conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:26:50Z<p>Voitek: /* Introduction */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:26:07Z<p>Voitek: /* Introduction */</p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems. More specifically, convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:25:20Z<p>Voitek: </p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems. More specifically, convex optimization problems that contain inequalities as constraints. The Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:24:00Z<p>Voitek: </p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br><br />
<br />
=Introduction=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems. More specifically, convex optimization problems that contain inequalities as constraints. The Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:23:09Z<p>Voitek: /* Sources */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br><br />
Sources 4 and 5 each have a chapter devoted to our topic. Source 3 has a long run of chapters on it. The other two sources mention it, and the rest of the books do not cover the topic.<br />
<br />
=Introduction=<br />
Interior point methods are a class of algorithms used to solve both linear and nonlinear convex optimization problems, specifically those that contain inequalities as constraints. The Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:22:08Z<p>Voitek: /* Introduction */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br><br />
Sources 4 and 5 each have a chapter devoted to our topic. Source 3 has a long run of chapters on it. The other two sources mention it, and the rest of the books do not cover the topic.<br />
<br />
=Introduction=<br />
The Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br><br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br><br />
3. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
4. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
5. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T20:21:33Z<p>Voitek: /* Introduction */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br><br />
Sources 4 and 5 each have a chapter devoted to our topic. Source 3 has a long run of chapters on it. The other two sources mention it, and the rest of the books do not cover the topic.<br />
<br />
=Introduction=<br />
The Interior-Point method relies on having a linear programming model with the objective function being continuous and twice continuously differentiable.<br />
<br />
=Uses=<br />
<br />
=Algorithm=<br />
<br />
=Example=<br />
<br />
=Conclusion=<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br><br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br><br />
3. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
4. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
5. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T18:30:39Z<p>Voitek: </p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br><br />
Sources 4 and 5 each have a chapter devoted to our topic. Source 3 has a long run of chapters on it. The other two sources mention it, and the rest of the books do not cover the topic.<br />
<br />
<br />
<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br><br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br><br />
3. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
4. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
5. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T18:29:19Z<p>Voitek: /* Sources */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br />
<br />
<br />
<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br><br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br><br />
3. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
4. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
5. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T18:20:23Z<p>Voitek: /* Sources */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br />
<br />
<br />
<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br><br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br />
3.</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T18:19:31Z<p>Voitek: /* Sources */</p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br />
<br />
<br />
<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001<br />
2. Bradley, Hax, and Magnanti, Applied Mathematical Programming (p 413).<br />
3.</div>Voitekhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-25T18:17:09Z<p>Voitek: </p>
<hr />
<div>Claimed by John Plaxco, Alex Valdes, Wojciech Stojko.<br />
<br />
<br />
<br />
<br />
<br />
<br />
== Sources ==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, Optimization of chemical processes (pp 242-291). McGraw-Hill, 2001</div>Voitek