Quadratic programming


Introduction

Quadratic programming is perhaps the simplest form of non-linear programming. It is widely used in image and signal processing, in optimizing financial portfolios, in performing the least-squares method, and in sequential quadratic programming, a technique for solving more complex non-linear programming problems.

Problem formulation

A general quadratic programming model is structured as follows:

\text{min } f(x) = q^{T}x + \frac{1}{2}x^{T}Qx
\text{s.t. } Ax = a
Bx \le b
x \ge 0
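
As a quick numerical illustration, the general model above can be solved with an off-the-shelf solver. The following is a minimal sketch, not part of the original article: the data, the choice of SciPy's SLSQP method, and the starting point are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Illustrative data: min q^T x + 1/2 x^T Q x  s.t.  Ax = a, Bx <= b, x >= 0
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
q = np.array([-4.0, -6.0])
A = np.array([[1.0, 1.0]]); a = np.array([3.0])
B = np.array([[1.0, -1.0]]); b = np.array([1.0])

def f(x):
    return q @ x + 0.5 * x @ Q @ x

constraints = [
    {"type": "eq",   "fun": lambda x: A @ x - a},   # Ax = a
    {"type": "ineq", "fun": lambda x: b - B @ x},   # Bx <= b, written as b - Bx >= 0
]
res = minimize(f, x0=np.zeros(2), method="SLSQP",
               bounds=[(0, None), (0, None)],       # x >= 0
               constraints=constraints)
print(res.x, res.fun)   # approx. x = [1.667, 1.333], f = -8.333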

When there are only inequality constraints (Bx \le b), the Lagrangian is:

L(x, \lambda) = q^{T}x + \frac{1}{2}x^{T}Qx + \lambda^{T}(Bx - b)
The objective function is arranged such that the vector q contains all of the singly-differentiated linear terms and Q contains all of the twice-differentiated quadratic terms; Q is the Hessian matrix of the objective function. The one-half in front of the quadratic term cancels the coefficient of 2 that results from taking the derivative of a second-degree polynomial, so that the gradient is simply \nabla f(x) = q + Qx.
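
To see the one-half at work, here is a small symbolic check; this is a sketch using SymPy, and the particular symmetric matrix Q and vector q are made-up examples. It confirms that the gradient of f comes out as q + Qx with no leftover factor of 2.

import sympy as sp

# Made-up symmetric Q and vector q for the check
x1, x2 = sp.symbols("x1 x2")
x = sp.Matrix([x1, x2])
Q = sp.Matrix([[2, 1], [1, 3]])
q = sp.Matrix([5, -1])

# f(x) = q^T x + 1/2 x^T Q x  (extract the scalar from the 1x1 matrix)
f = (q.T * x + sp.Rational(1, 2) * x.T * Q * x)[0]

# Differentiate elementwise and compare against q + Qx
grad = sp.Matrix([sp.diff(f, v) for v in (x1, x2)])
assert (grad - (q + Q * x)).expand() == sp.zeros(2, 1)
print(grad)   # Matrix([[2*x1 + x2 + 5], [x1 + 3*x2 - 1]])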

Global solutions

If the objective function is convex over its entire range, then any local minimum found is also the sole global minimum. To analyze the function's convexity, one can compute its Hessian matrix and verify that all eigenvalues are positive or, equivalently, verify that the matrix Q is positive definite (Texas). This is a sufficient condition: it does not need to be true in order for a local minimum to be the unique global minimum, but it guarantees this property whenever it holds (Wisc).
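
In practice this eigenvalue check is a one-liner; the following is a minimal sketch assuming NumPy, with an arbitrary example matrix.

import numpy as np

# Q is positive definite iff all of its eigenvalues are positive (Q symmetric)
Q = np.array([[2.0, -1.0], [-1.0, 2.0]])   # example matrix
eigvals = np.linalg.eigvalsh(Q)            # eigvalsh exploits symmetry
print(eigvals, bool((eigvals > 0).all()))  # [1. 3.] True -> f is strictly convex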

KKT conditions

Solutions can be tested for optimality using the Karush-Kuhn-Tucker conditions, just as for other nonlinear problems:
Condition 1: the sum of the gradients is zero (the gradient of the Lagrangian with respect to x vanishes):

Qx + q + B^{T}\lambda = 0
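
When only equality constraints are present, condition 1 together with Ax = a forms a square linear system in (x, \lambda) that can be solved directly. The following minimal sketch assumes NumPy and reuses the illustrative data from the earlier example, where the inequality constraint happens to be inactive, so it reproduces the same solution.

import numpy as np

# KKT system for  min q^T x + 1/2 x^T Q x  s.t. Ax = a:
#   [Q  A^T] [x  ]   [-q]
#   [A   0 ] [lam] = [ a]
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
q = np.array([-4.0, -6.0])
A = np.array([[1.0, 1.0]]); a = np.array([3.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, a])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)   # approx. x = [1.667, 1.333], matching the SLSQP result above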