Optimization with absolute values


Authors: Benjamin Granger, Marta Yu, Kathleen Zhou (ChE 345 Spring 2014)

Steward: Dajun Yue, Fengqi You

Date Presented: May 25, 2014



Optimization with absolute values is a special case of linear programming in which a problem made nonlinear due to the presence of absolute values is solved using linear programming methods.

Absolute value functions are awkward for standard optimization procedures: they are nonlinear and not differentiable at zero. However, through simple manipulation of the absolute value expressions, these difficulties can be avoided and the problem can be solved using linear programming.


Linear reformulation

If an absolute value function contains a linear function, then the absolute value can be reformulated into two linear expressions. In other words, z=|f(x)| can be reformulated into two linear expressions if the function f(x) is linear itself.

For the simplest case, take z=|x|. This function is effectively the combination of two linear pieces: z=x if x \ge 0 and z=-x if x < 0. This piecewise view is the basis of performing linear programming with absolute values.

Absolute values in constraints

In the case |f(x)| \le z, the expression can be reformulated as f(x) \le z and -f(x) \le z. This relation is easiest to see using a number line, as follows:

Figure 1: Number line depicting the above absolute value constraint.

This number line represents both the absolute value constraint and the two combined linear constraints described above, demonstrating that the two formulations are equivalent.

The same logic can be used to reformulate, for example, |f(x)|+g(y) \le z into f(x)+g(y) \le z and -f(x)+g(y) \le z. Note, however, that the reverse inequality |f(x)| \ge z does not split the same way: it requires f(x) \ge z or -f(x) \ge z, and since only one of the two need hold, the feasible region is nonconvex and cannot be described by two simultaneous linear constraints. [1]
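As a minimal sketch of this constraint splitting (using a toy problem and variable names of my own, and assuming scipy's linprog is available), the following enforces |x_1 - x_2| \le 4 inside an LP via two linear rows:

# Toy LP: maximize x1 + x2 subject to |x1 - x2| <= 4 and 0 <= x1, x2 <= 5.
# The absolute value constraint is split into two linear constraints.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])            # maximize x1 + x2  ->  minimize -(x1 + x2)
A_ub = np.array([[ 1.0, -1.0],        #   x1 - x2  <= 4
                 [-1.0,  1.0]])       # -(x1 - x2) <= 4
b_ub = np.array([4.0, 4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5), (0, 5)], method="highs")
print(res.x, res.fun)                 # x = [5, 5], objective -10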

Absolute values in the objective function

Absolute values as part of the objective function of a model can also be reformulated to become linear, in certain cases. If the objective is a minimization problem of the form |f(x)|+g(y) or is a maximization problem of the form -|f(x)|+g(y), then the model can easily be reformulated to be solved using linear programming. If the objective is a minimization problem of the form -|f(x)|+g(y) or is a maximization problem of the form |f(x)|+g(y), unfortunately, the model cannot be reformulated as a standard LP model, and instead must be solved using mixed-integer linear programming.

In the former two cases, the model can be reformulated by introducing a new variable U, replacing |f(x)| in the original objective function with U, and adding the two extra constraints f(x) \le U and -f(x) \le U. [2]
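A minimal sketch of this substitution (toy problem and names of my own, again assuming scipy): minimize |x| + y subject to x + y \ge 2 and y \ge 0, with |x| replaced by U:

# Decision vector v = [x, U, y]; the objective |x| + y becomes U + y.
import numpy as np
from scipy.optimize import linprog

c = np.array([0.0, 1.0, 1.0])

A_ub = np.array([
    [ 1.0, -1.0,  0.0],   #  x - U <= 0   (i.e.  x <= U)
    [-1.0, -1.0,  0.0],   # -x - U <= 0   (i.e. -x <= U)
    [-1.0,  0.0, -1.0],   # -x - y <= -2  (i.e. x + y >= 2)
])
b_ub = np.array([0.0, 0.0, -2.0])

bounds = [(None, None), (0, None), (0, None)]   # x free, U >= 0, y >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.fun)   # optimal value 2.0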

Successive linear programming

Successive linear programming methods involve generating and solving a sequence of linear programming problems to ultimately solve one absolute value problem. Olvi Mangasarian of the Computer Sciences Department of the University of Wisconsin-Madison worked on an algorithm for solving absolute value problems in this way, described in the paper Absolute Value Equation Solution via Linear Programming. [3] The algorithm is summarized as follows.

Solving the absolute value equation Ax-|x|=b

Choose a value \epsilon, a tolerance, and a maximum number of iterations; typical values are approximately 10^{-6}, 10^{-8}, and 10, respectively. Note that the algorithm starts at iteration i=0.

Step 1: Begin by solving the following linear program, where e denotes the vector of ones, to determine initial primal (x^0, y^0) and dual (u^0, v^0, w^0) optimal solutions.

Minimize e'y

s.t. Ax-y=b; x+y \ge 0; -x+y \ge 0

Step 2: Check whether either of two stopping conditions holds: first, that the residual \|Ax^i-|x^i|-b\|_\infty has fallen below the tolerance; or second, that the maximum number of iterations has been reached. If either condition occurs, stop iterating.

Step 3: Solve the following linear program to determine a new primal solution (x^{i+1}, y^{i+1}) and new dual optimal variables u^{i+1}, v^{i+1}, w^{i+1}.

Minimize \max\{-u^i+\epsilon e, \epsilon e\}'y

s.t. Ax-y=b; x+y \ge 0; -x+y \ge 0

Step 4: Increase iteration number: i=i+1. Then, return to Step 2.

When the algorithm terminates, the final iterate x^i solves the absolute value equation to within the chosen tolerance. [4]
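Below is a rough sketch of this iteration in Python, assuming scipy's linprog with the HiGHS backend (which exposes dual values via res.eqlin.marginals). The Step 3 reweighting follows the summary above literally, and solver sign conventions for duals may differ from the paper's, so treat this as illustrative rather than a faithful reproduction of Mangasarian's algorithm.

# Successive LP sketch for the absolute value equation Ax - |x| = b.
import numpy as np
from scipy.optimize import linprog

def solve_ave_slp(A, b, eps=1e-6, tol=1e-8, max_iter=10):
    n = len(b)
    # Decision vector v = [x, y]; constraints shared by every LP:
    A_eq = np.hstack([A, -np.eye(n)])          # Ax - y = b
    A_ub = np.vstack([
        np.hstack([-np.eye(n), -np.eye(n)]),   # -x - y <= 0  (i.e. x + y >= 0)
        np.hstack([ np.eye(n), -np.eye(n)]),   #  x - y <= 0  (i.e. -x + y >= 0)
    ])
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * (2 * n)

    c = np.concatenate([np.zeros(n), np.ones(n)])        # Step 1: min e'y
    for _ in range(max_iter):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                      bounds=bounds, method="highs")
        x = res.x[:n]
        if np.linalg.norm(A @ x - np.abs(x) - b, np.inf) <= tol:   # Step 2
            return x
        u = res.eqlin.marginals                          # duals of Ax - y = b
        c = np.concatenate([np.zeros(n), np.maximum(-u + eps, eps)])  # Step 3
    return x

Note that all the LPs in the sequence share the same feasible region; only the objective weights on y change between iterations.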


Minimizing the sum of absolute deviations

Let deviations be represented by \epsilon_i = Y_i - \sum_{j} X_{ji}b_j, where \epsilon_i is the deviation of the i^{th} observation, Y_i is the observed value, X_{ji} are the known coefficients, and b_j are the parameters to be fitted. To minimize the total deviation, the problem is formulated in a basic form with objective function

\min \sum_{i} |\epsilon_i|

and linear constraints

s.t. \epsilon_i + \sum_{j} X_{ji}b_j = Y_i for all i

\epsilon_i, b_j \in \mathbb{R}

The nonlinearity in this form arises from the absolute value function. However, by substituting for \epsilon_i, the problem can be transformed into a linear one. In the linearization, \epsilon_i = \epsilon_i^+ - \epsilon_i^-, where \epsilon_i^+ and \epsilon_i^- are nonnegative variables. [5]

So the problem can be rewritten as

\min \sum_{i} |\epsilon_i^+ - \epsilon_i^-|

s.t. \epsilon_i^+ - \epsilon_i^- + \sum_{j} X_{ji}b_j = Y_i for all i

b_j \in \mathbb{R}, \epsilon_i^+, \epsilon_i^- \ge 0 for all i

It can be proved that at an optimal solution of the linearized problem \epsilon_i^+ \epsilon_i^- = 0: if both were strictly positive, subtracting the smaller from each would preserve the equality constraint while decreasing the objective. Consequently |\epsilon_i| = \epsilon_i^+ + \epsilon_i^- at the optimum, and the problem is transformed into an LP. The final form of the problem is:

\min \sum_{i} (\epsilon_i^+ + \epsilon_i^-)

s.t. \epsilon_i^+ - \epsilon_i^- + \sum_{j} X_{ji}b_j = Y_i for all i

b_j \in \mathbb{R}, \epsilon_i^+, \epsilon_i^- \ge 0 for all i
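As a minimal sketch (with made-up data and names, assuming scipy), the final LP above can be assembled and solved directly; here X is stored as (observations x parameters), so X[i, j] plays the role of X_{ji} in the text:

# Least-absolute-deviations fit via LP on toy data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + slope
Y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.1, size=n)

# Decision vector v = [b (p entries), eps_plus (n), eps_minus (n)].
c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])   # X b + eps+ - eps- = Y
bounds = [(None, None)] * p + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=Y, bounds=bounds, method="highs")
print(res.x[:p])   # fitted parameters b, close to [1, 3]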

Minimizing the maximum of absolute values

To minimize the largest absolute deviation rather than their sum, the problem is

\min \max_i |\epsilon_i|

s.t. \epsilon_i = Y_i - \sum_{j} X_{ji}b_j for all i

\epsilon_i, b_j \in \mathbb{R} for all i and j

where \epsilon_i is the deviation of the i^{th} observation and b_j is the j^{th} parameter in the equation.

To solve this problem, we introduce a single variable \epsilon that satisfies the following two inequalities for every observation i:

\epsilon \geq Y_i - \sum_{j} X_{ji}b_j
\epsilon \geq -(Y_i - \sum_{j} X_{ji}b_j)

The inequalities ensure that \epsilon will be greater than or equal to the largest |\epsilon_i|. Thus, the problem can be written in the form

Min \epsilon

s.t.  -\epsilon - \sum_{j} X_{ji}b_j \leq -Y_i  for all i
-\epsilon + \sum_{j} X_{ji}b_j \leq Y_i for all i
\epsilon \geq 0, b_j \in \mathbb{R} for all j
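A corresponding sketch for the minimax fit, again with made-up data and assuming scipy; the decision vector is [b, \epsilon]:

# Minimax (Chebyshev) fit via LP on toy data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.1, size=n)

c = np.concatenate([np.zeros(p), [1.0]])       # minimize eps alone
A_ub = np.vstack([
    np.hstack([ X, -np.ones((n, 1))]),         #  X b - eps <= Y
    np.hstack([-X, -np.ones((n, 1))]),         # -X b - eps <= -Y
])
b_ub = np.concatenate([Y, -Y])
bounds = [(None, None)] * p + [(0, None)]      # b free, eps >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[:p], res.x[-1])                    # parameters and worst deviation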

Numerical example

Min |x_1| + 3|x_2|

s.t. x_1 + 2x_2 \ge 10

x_1 - x_2 = 5

We replace each absolute value quantity with a single new variable:

|x_1| = U_1

|x_2| = U_2

We must introduce additional constraints to ensure we do not lose any information by doing this substitution:

x_1 \le U_1 and -x_1 \le U_1 (equivalently, -U_1 \le x_1 \le U_1)

x_2 \le U_2 and -x_2 \le U_2 (equivalently, -U_2 \le x_2 \le U_2)

The problem has now been reformulated as a linear programming problem that can be solved normally:

Min U_1 + 3U_2

s.t. x_1 + 2x_2 \ge 10

x_1 - x_2 = 5

-U_1 \le x_1 \le U_1

-U_2 \le x_2 \le U_2

The optimal value of the objective function is 35/3, which occurs when x_1 = 20/3 and x_2 = 5/3.
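As a quick check of this example (assuming scipy), the reformulated LP can be solved directly with the decision vector [x_1, x_2, U_1, U_2]:

# Verify the numerical example: min U1 + 3*U2 under the reformulated constraints.
import numpy as np
from scipy.optimize import linprog

c = np.array([0.0, 0.0, 1.0, 3.0])
A_ub = np.array([
    [-1.0, -2.0,  0.0,  0.0],   # x1 + 2 x2 >= 10
    [ 1.0,  0.0, -1.0,  0.0],   #  x1 <= U1
    [-1.0,  0.0, -1.0,  0.0],   # -x1 <= U1
    [ 0.0,  1.0,  0.0, -1.0],   #  x2 <= U2
    [ 0.0, -1.0,  0.0, -1.0],   # -x2 <= U2
])
b_ub = np.array([-10.0, 0.0, 0.0, 0.0, 0.0])
A_eq = np.array([[1.0, -1.0, 0.0, 0.0]])       # x1 - x2 = 5
b_eq = np.array([5.0])
bounds = [(None, None)] * 4

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.fun, res.x[:2])   # 35/3 at x1 = 20/3, x2 = 5/3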


References

1. S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.

2. Absolute Values. Retrieved from http://lpsolve.sourceforge.net/5.1/absolute.htm.

3. O.L. Mangasarian, Absolute Value Equation Solution via Linear Programming. Retrieved from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/13-01.pdf.

4. O.L. Mangasarian, Absolute Value Equation Solution via Concave Minimization. Retrieved from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/06-02.pdf.

5. T.S. Ferguson, Linear Programming: A Concise Introduction. Retrieved from http://www.usna.edu/Users/weapsys/avramov/Compressed%20sensing%20tutorial/LP.pdf.

6. B.A. McCarl, T.H. Spreen, Linear Programming Modeling: Nonlinearities and Approximation. Retrieved from http://agecon2.tamu.edu/people/faculty/mccarl-bruce/mccspr/new09.pdf.