Optimization with absolute values
Authors: Benjamin Granger, Marta Yu, Kathleen Zhou (ChE 345 Spring 2014)
Steward: Dajun Yue, Fengqi You
Date Presented: May 25, 2014
Introduction
Optimization with absolute values is a special case of linear programming in which a problem made nonlinear by the presence of absolute values is solved using linear programming methods.
Absolute value functions are difficult to subject to standard optimization procedures: they are nonlinear and are not differentiable everywhere. However, through simple manipulation of the absolute value expression, these difficulties can be avoided and the problem can be solved using linear programming.
Methods
Linear reformulation
If an absolute value function contains a linear function, then the absolute value can be reformulated into two linear expressions. In other words, $|f(x)|$ can be reformulated into two linear expressions if the function $f(x)$ is linear itself.
For the simplest case, take $f(x) = |x|$. This function is effectively the combination of two piecewise functions: $f(x) = x$ if $x \ge 0$, and $f(x) = -x$ if $x < 0$. This methodology is the basis of performing linear programming with absolute values.
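As a small worked illustration (the linear argument $2x - 4$ is chosen here purely as an example), the splitting looks as follows:

\[
f(x) = |2x - 4| =
\begin{cases}
2x - 4, & \text{if } x \ge 2,\\
-(2x - 4), & \text{if } x < 2.
\end{cases}
\]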
Absolute values in constraints
In the case $|x| \le a$ (with $a \ge 0$), the expression can be reformulated as $x \le a$ and $-x \le a$ (equivalently, $x \ge -a$). This relation is easiest to see using a number line, as follows:
Figure 1: Number line depicting the above absolute value problem.
This number line represents both the absolute value function and the two combined linear functions described above, demonstrating that the two formulations are equivalent.
The same logic can be used to reformulate $|x - c| \le a$ as $x - c \le a$ and $-(x - c) \le a$, or, more generally, $|f(x)| \le a$ into $f(x) \le a$ and $-f(x) \le a$ for any linear function $f(x)$. [1]
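The snippet below is a minimal sketch of how such a reformulated constraint enters a linear program; the objective and the data, including the bound $|x_1 - 5| \le 3$, are hypothetical, and scipy.optimize.linprog is used simply as a convenient LP solver.

# A minimal sketch with hypothetical data: minimize 2*x1 + 3*x2 subject to
# |x1 - 5| <= 3, x1 + x2 >= 4, and x2 >= 0. The absolute value constraint is
# replaced by the two linear inequalities x1 - 5 <= 3 and -(x1 - 5) <= 3.
from scipy.optimize import linprog

c = [2.0, 3.0]                       # objective coefficients
A_ub = [[ 1.0,  0.0],                #  x1      <=  8   (x1 - 5 <= 3)
        [-1.0,  0.0],                # -x1      <= -2   (-(x1 - 5) <= 3)
        [-1.0, -1.0]]                # -x1 - x2 <= -4   (x1 + x2 >= 4)
b_ub = [8.0, -2.0, -4.0]
bounds = [(None, None), (0, None)]   # x1 free, x2 >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)                # x1 is confined to the interval [2, 8]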
Absolute values in the objective function
Absolute values as part of the objective function of a model can also be reformulated to become linear, in certain cases. If the objective is a minimization problem of the form $\min |f(x)|$, or a maximization problem of the form $\max -|f(x)|$, then the model can easily be reformulated to be solved using linear programming. If the objective is a minimization problem of the form $\min -|f(x)|$, or a maximization problem of the form $\max |f(x)|$, then unfortunately the model cannot be reformulated as a standard LP model, and instead must be solved using mixed-integer linear programming.
In the former two cases, the model can be reformulated by introducing a new variable $t$, replacing $|f(x)|$ in the original objective function with $t$, and adding the two extra constraints $f(x) \le t$ and $-f(x) \le t$. [2]
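The following is a minimal sketch of this auxiliary-variable reformulation on a hypothetical problem: the new variable $t$ replaces $|x_1|$ in the objective, and the two added constraints force $t \ge |x_1|$, so minimization drives $t$ down to $|x_1|$ at the optimum.

# Hypothetical example: minimize |x1| + 2*x2 subject to x1 + x2 >= 4,
# with x1 free and x2 >= 0. Introduce t, replace |x1| by t, and add
# the constraints x1 <= t and -x1 <= t.
from scipy.optimize import linprog

# decision vector z = [x1, x2, t]
c = [0.0, 2.0, 1.0]                  # objective: t + 2*x2
A_ub = [[ 1.0,  0.0, -1.0],          #  x1 - t  <= 0   (x1 <= t)
        [-1.0,  0.0, -1.0],          # -x1 - t  <= 0   (-x1 <= t)
        [-1.0, -1.0,  0.0]]          # -x1 - x2 <= -4  (x1 + x2 >= 4)
b_ub = [0.0, 0.0, -4.0]
bounds = [(None, None), (0, None), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)                # at the optimum, t equals |x1|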
Successive linear programming
Successive linear programming methods involve generating and solving a sequence of linear programming problems to ultimately solve one absolute value problem. Olvi Mangasarian of the Computer Sciences Department of the University of Wisconsin-Madison worked on an algorithm for solving absolute value problems in this way, described in the paper Absolute Value Equation Solution via Linear Programming. [3] The algorithm is summarized as follows.
Solving the absolute value equation
The absolute value equation considered here has the form $Ax - |x| = b$. Choose a value $\varepsilon$, a tolerance, and a maximum number of iterations. Typical values are approximately $10^{-6}$, $10^{-8}$, and $10$, respectively. Note that the algorithm starts at iteration $i = 0$.
Step 1:
Begin by solving the linear program given in [3] to determine initial primal and dual optimal solutions.
Step 2:
Check whether either of two conditions is true: first, whether there are zero components satisfying the condition specified in [3]; second, whether the maximum number of iterations has been reached. If either condition holds, stop iterating.
Step 3:
Solve the primal linear program given in [3] to determine new dual optimal variables.
Step 4:
Increase the iteration number, $i \leftarrow i + 1$, and return to Step 2.
Upon completing this algorithm, the final primal solution obtained is the optimal solution to the original absolute value problem. [4]
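The exact linear programs used in [3] are not reproduced here. The sketch below only illustrates the general successive-linear-programming idea for the absolute value equation $Ax - |x| = b$: fix a sign pattern for $x$, solve a 1-norm linear program with that pattern held constant, update the signs, and repeat until the equation is satisfied to within the tolerance or the iteration limit is reached. The data, the sign-updating rule, and the stopping test are illustrative choices, not those of Mangasarian's algorithm.

# A generic successive-linear-programming sketch for A x - |x| = b.
# This illustrates the overall idea only, NOT the algorithm of [3]:
# the sign pattern s approximates sign(x), each iteration solves a 1-norm LP
# with s held fixed, and s is then updated from the new x.
import numpy as np
from scipy.optimize import linprog

def solve_ave_sketch(A, b, tol=1e-8, max_iter=10):
    n = len(b)
    s = np.ones(n)                               # initial sign guess
    x = np.zeros(n)
    for _ in range(max_iter):
        M = A - np.diag(s)                       # if sign(x) = s, then A x - |x| = M x
        # LP: minimize sum(t) subject to -t <= M x - b <= t, variables z = [x, t]
        c = np.concatenate([np.zeros(n), np.ones(n)])
        A_ub = np.block([[ M, -np.eye(n)],
                         [-M, -np.eye(n)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0, None)] * n,
                      method="highs")
        x = res.x[:n]
        if np.linalg.norm(A @ x - np.abs(x) - b, np.inf) <= tol:
            break                                # x satisfies the equation
        s = np.sign(x)
        s[s == 0] = 1.0                          # keep the sign pattern well defined
    return x

A = np.array([[4.0, 1.0], [1.0, 5.0]])           # hypothetical test data
b = np.array([3.0, 2.0])
print(solve_ave_sketch(A, b))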
Applications
Minimizing the sum of absolute deviations
Let deviations be represented by $\varepsilon_i = Y_i - \sum_j X_{ij} b_j$, where $i$ indexes the observations, $\varepsilon_i$ gives the deviation of the $i$-th observation, $Y_i$ is the observed value, $X_{ij}$ are the known data, and $b_j$ are the parameters to be estimated. To minimize the total deviation, the problem is formulated in a basic form with

$\min \sum_i |\varepsilon_i|$

as the objective function, and the linear constraints are

$\varepsilon_i = Y_i - \sum_j X_{ij} b_j$ for all $i$.
The nonlinearity in this form arises from the absolute value function. However, by substituting for $\varepsilon_i$, the problem can be transformed into a linear problem. In the linearization, $\varepsilon_i = \varepsilon_i^+ - \varepsilon_i^-$, where $\varepsilon_i^+$ and $\varepsilon_i^-$ are nonnegative variables. [5]
So the problem can be rewritten as

$\min \sum_i (\varepsilon_i^+ + \varepsilon_i^-)$
s.t. $\varepsilon_i^+ - \varepsilon_i^- = Y_i - \sum_j X_{ij} b_j$ for all $i$
$\varepsilon_i^+ \ge 0, \ \varepsilon_i^- \ge 0$ for all $i$

It can be proved that at the optimal solution we have $\varepsilon_i^+ \varepsilon_i^- = 0$, so at most one of the pair is nonzero. Thus, the problem is transformed into an LP problem, since $|\varepsilon_i| = \varepsilon_i^+ + \varepsilon_i^-$. The final form of the problem is:

$\min \sum_i (\varepsilon_i^+ + \varepsilon_i^-)$
s.t. $\sum_j X_{ij} b_j + \varepsilon_i^+ - \varepsilon_i^- = Y_i$ for all $i$
$\varepsilon_i^+ \ge 0, \ \varepsilon_i^- \ge 0$ for all $i$
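As a minimal sketch of this reformulation, the snippet below fits a line $Y \approx b_0 + b_1 X$ to a small hypothetical data set by minimizing the sum of absolute deviations with scipy.optimize.linprog; the deviation of each observation is split into the nonnegative parts $\varepsilon_i^+$ and $\varepsilon_i^-$.

# Least absolute deviations fit of y ~ b0 + b1*x on hypothetical data,
# using the eps_plus / eps_minus splitting of each deviation.
import numpy as np
from scipy.optimize import linprog

x_obs = np.array([0.0, 1.0, 2.0, 3.0])          # hypothetical observations
y_obs = np.array([1.0, 2.1, 2.9, 4.2])

m = len(y_obs)
X = np.column_stack([np.ones(m), x_obs])        # design matrix [1, x]
k = X.shape[1]                                  # number of parameters b_j

# decision vector z = [b_0, b_1, eps_plus (m entries), eps_minus (m entries)]
c = np.concatenate([np.zeros(k), np.ones(2 * m)])       # minimize sum(eps+ + eps-)
A_eq = np.hstack([X, np.eye(m), -np.eye(m)])            # X b + eps+ - eps- = y
b_eq = y_obs
bounds = [(None, None)] * k + [(0, None)] * (2 * m)     # b_j free, deviations >= 0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("parameters:", res.x[:k], "sum of |deviations|:", res.fun)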
Minimizing the maximum of absolute values
A related problem is to minimize the maximum of the absolute deviations:

$\min \max_i |\varepsilon_i|$
s.t. $\varepsilon_i = Y_i - \sum_j X_{ij} b_j$ for all $i$,

where the data $X_{ij}$ and $Y_i$ are given for all $i$ and $j$, the variable $\varepsilon_i$ is the deviation of the $i$-th observation, and $b_j$ is the $j$-th parameter in the equation.
To solve this problem, we define a variable $M$ that will satisfy the following two inequalities:

$M \ge \varepsilon_i$ and $M \ge -\varepsilon_i$ for all $i$.

The inequalities ensure that $M$ will be greater than or equal to the largest $|\varepsilon_i|$. Thus, the problem can be written in the form

$\min M$
s.t. $\sum_j X_{ij} b_j + M \ge Y_i$ for all $i$
$\sum_j X_{ij} b_j - M \le Y_i$ for all $i$
$M \ge 0$, with $b_j$ unrestricted in sign for all $j$. [6]
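For comparison, the following minimal sketch solves this minimax reformulation on the same kind of hypothetical data: the single variable $M$ bounds every absolute deviation from above, and the LP minimizes $M$.

# Minimax (Chebyshev) fit of y ~ b0 + b1*x on hypothetical data: minimize the
# largest absolute deviation M, using M >= Y_i - sum_j X_ij b_j and
# M >= -(Y_i - sum_j X_ij b_j) for every observation.
import numpy as np
from scipy.optimize import linprog

x_obs = np.array([0.0, 1.0, 2.0, 3.0])          # hypothetical observations
y_obs = np.array([1.0, 2.1, 2.9, 4.2])

m = len(y_obs)
X = np.column_stack([np.ones(m), x_obs])        # design matrix [1, x]
k = X.shape[1]

# decision vector z = [b_0, b_1, M]
c = np.concatenate([np.zeros(k), [1.0]])        # minimize M
A_ub = np.vstack([np.hstack([-X, -np.ones((m, 1))]),    #  Y - X b <= M
                  np.hstack([ X, -np.ones((m, 1))])])   # -(Y - X b) <= M
b_ub = np.concatenate([-y_obs, y_obs])
bounds = [(None, None)] * k + [(0, None)]       # b_j free, M >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("parameters:", res.x[:k], "max |deviation|:", res.fun)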
Numerical Example
We replace the absolute value quantities with a single variable:
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:
The problem has now been reformulated as a linear programming problem that can be solved normally:
The optimum value of the objective function, and the values of the variables at which it is attained, are then read directly from the solution of this linear program.
References
1. S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
2. Absolute Values. Retrieved from http://lpsolve.sourceforge.net/5.1/absolute.htm.
3. O.L. Mangasarian, Absolute Value Equation Solution via Linear Programming. Retrieved from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/13-01.pdf.
4. O.L. Mangasarian, Absolute Value Equation Solution via Concave Minimization. Retrieved from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/06-02.pdf.
5. T.S. Ferguson, Linear Programming: A Concise Introduction. Retrieved from http://www.usna.edu/Users/weapsys/avramov/Compressed%20sensing%20tutorial/LP.pdf.
6. B.A. McCarl, T.H. Spreen, Linear Programming Modeling: Nonlinearities and Approximation. Retrieved from http://agecon2.tamu.edu/people/faculty/mccarl-bruce/mccspr/new09.pdf.