Trust-region methods


Authors: Wenhe (Wayne) Ye (ChE 345 Spring 2014). Steward: Dajun Yue, Fengqi You. Date presented: Apr. 10, 2014.

Introduction

The trust-region method (TRM) is one of the most important numerical optimization methods for solving nonlinear programming (NLP) problems. It works by first defining a region around the current best solution in which a certain model (usually a quadratic model) can, to some extent, approximate the original objective function. TRM then takes a step according to what the model predicts within that region. Unlike line search methods, TRM usually determines the step size and the direction at the same time. If a notable decrease is gained after the step (the following discussion is based on minimization problems), the model is believed to be a good representation of the original objective function. If the improvement is too subtle, or the objective even worsens, the model is not believed to be a good representation of the original objective function within that region. Convergence is ensured by making the size of the “trust region” (usually the radius in the Euclidean norm) at each iteration depend on the improvement made in previous iterations.

[Figure: Trust-region method overview]

Important Concepts

Trust-region

In most cases, the trust region is defined as a spherical area of radius \Delta_k within which the trust-region subproblem is solved.

Trust-region subproblem

If we use a quadratic model to approximate the original objective function, then our optimization problem is essentially reduced to solving a sequence of trust-region subproblems:


min_p~m_k(p)=f_k+{g_k}^Tp+\frac{1}{2}p^TB_kp

s.t.~||p||\leq\Delta_k


where \Delta_k is the trust-region radius, g_k is the gradient at the current point, and B_k is the Hessian (or a Hessian approximation). It is easy to find the solution to the trust-region subproblem if B_k is positive definite: if the unconstrained minimizer -{B_k}^{-1}g_k lies within the trust region, it solves the subproblem; otherwise the solution lies on the boundary.
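As a minimal illustration of this special case, consider the Python sketch below (numpy assumed; the function name model_step and the numeric values are ours, chosen only for demonstration). It returns the unconstrained minimizer whenever that step lies inside the trust region:

import numpy as np

def model_step(g, B, delta):
    # Unconstrained minimizer of m_k(p) = f_k + g^T p + 0.5 p^T B p,
    # assuming B is positive definite.
    p = -np.linalg.solve(B, g)
    if np.linalg.norm(p) <= delta:
        return p    # the full (Newton) step already solves the subproblem
    # otherwise the solution lies on the boundary ||p|| = delta and a
    # dedicated solver (Cauchy point, dogleg, CG-Steihaug) is needed
    return None

g = np.array([1.0, 2.0])                  # hypothetical gradient g_k
B = np.array([[2.0, 0.0], [0.0, 4.0]])    # hypothetical positive-definite B_k
print(model_step(g, B, delta=2.0))        # -> [-0.5 -0.5]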

Actual reduction and predicted reduction

The most critical issue underlying the trust-region method is updating the size of the trust region at each iteration. If the current iteration makes a satisfactory reduction, we may exploit our model more in the next iteration by setting a larger \Delta_k. If we achieved only a limited improvement after the current iteration, the radius of the trust region should not increase; in the worst case, we may decrease the size of the trust region by adjusting the radius to a smaller value to check the model’s validity.

\rho_k=\frac{f(x_k)-f(x_k+p_k)}{m_k(0)-m_k(p_k)}

Whether to take a more ambitious step or a more conservative one depends on the ratio between the actual reduction gained in the original objective function and the predicted reduction expected from the model function. Empirical threshold values of the ratio \rho_k guide us in determining the size of the trust region.

[Figure: TRM step size]

The figure shows that both the step size and the improving direction are consequences of the pre-determined trust-region size.

Trust Region Algorithm

Before implementing the trust-region algorithm, we should first determine several parameters. \Delta_M is the upper bound on the size of the trust region. \eta_1, \eta_2, \eta_3 and T_1, T_2 are threshold values for evaluating the goodness of the quadratic model and thus for determining the trust region’s size in the next iteration.


Pseudo-code

Choose the starting point x_1 and the initial radius \Delta_1; set the iteration number k=1

For k=1,2...

Get the improving step p_k by solving the trust-region subproblem above

Evaluate \rho_k from the reduction ratio defined above

If \rho_k<\eta_2

\Delta_{k+1}=T_1\Delta_k

Else

If \rho_k>\eta_3 and ||p_k||=\Delta_k (full step and the model is a good approximation)

\Delta_{k+1}=min(T_2\Delta_k,\Delta_M)

Else

\Delta_{k+1}=\Delta_k

If \rho_k>\eta_1

x_{k+1}=x_k+p_k

Else

x_{k+1}=x_k (the model is not a good approximation; another trust-region subproblem must be solved within a smaller trust region)

end (for)
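The pseudo-code translates almost line for line into Python. The following is a minimal sketch, not a production implementation: numpy is assumed, the parameter defaults (\eta_1=0.1, \eta_2=0.25, \eta_3=0.75, T_1=0.25, T_2=2) are common textbook choices rather than values fixed by this article, and subproblem stands for any of the solvers described in the next section:

import numpy as np

def trust_region(f, grad, hess, subproblem, x0, delta0=1.0, delta_max=10.0,
                 eta1=0.1, eta2=0.25, eta3=0.75, t1=0.25, t2=2.0,
                 tol=1e-8, max_iter=200):
    # Basic trust-region loop following the pseudo-code above.
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:                # converged
            break
        B = hess(x)
        p = subproblem(g, B, delta)                # improving step p_k
        predicted = -(g @ p + 0.5 * p @ B @ p)     # m_k(0) - m_k(p_k)
        rho = (f(x) - f(x + p)) / predicted        # reduction ratio rho_k
        if rho < eta2:                             # poor model: shrink region
            delta = t1 * delta
        elif rho > eta3 and np.isclose(np.linalg.norm(p), delta):
            delta = min(t2 * delta, delta_max)     # good full step: expand
        if rho > eta1:                             # acceptable improvement
            x = x + p
    return x

Any of the cauchy_point, dogleg, or cg_steihaug functions sketched below can be passed as subproblem.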


Methods of Solving the Trust-region Subproblem

Cauchy point calculation

In line search methods, we find an improving direction from the gradient information, that is, by taking the steepest-descent direction, and then move as far as the constraint allows. The same idea yields an inexpensive, approximate solution to the trust-region subproblem, known as the Cauchy point calculation. The improving step can be expressed explicitly by the following closed-form equations:

{p_k}^C=-\tau_k\dfrac{\Delta_k}{||g_k||}g_k

if {g_k}^TB_kg_k\leq 0,~\tau_k=1

otherwise~\tau_k=min({||g_k||}^3/(\Delta_k{g_k}^TB_kg_k),~1)

[Figure: Cauchy point calculation]
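These closed-form equations transcribe directly into code. A minimal sketch, assuming numpy and a nonzero gradient (the name cauchy_point is ours):

import numpy as np

def cauchy_point(g, B, delta):
    # Closed-form Cauchy point for the trust-region subproblem.
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                         # non-positive curvature along g
    else:
        tau = min(np.linalg.norm(g) ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / np.linalg.norm(g)) * g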

Limitations and Further Improvements

Though the Cauchy point is cheap to compute, it performs poorly in some cases, just like the steepest-descent method. Various kinds of improvements are based on including the curvature information contained in B_k.

Dogleg Method

If B_k is positive definite (a quasi-Newton Hessian approximation can guarantee this), then a V-shaped trajectory can be determined by

if 0\leq\tau\leq 1,~~p(\tau)=\tau p^U

if 1\leq\tau\leq 2,~~p(\tau)=p^U+(\tau-1)(p^B-p^U)

where p^U=-\frac{g^Tg}{g^TBg}g is the unconstrained minimizer along the steepest-descent direction and p^B=-B^{-1}g is the full (Newton) step

Note that the Hessian (or an approximate Hessian) must be evaluated and inverted to compute p^B

[Figure: dogleg path]
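A minimal sketch of the dogleg step, assuming numpy and a positive-definite B (the name dogleg is ours). It returns p^B when that step already lies inside the region, and otherwise intersects the V-shaped path with the boundary:

import numpy as np

def dogleg(g, B, delta):
    pB = -np.linalg.solve(B, g)              # full Newton step p^B
    if np.linalg.norm(pB) <= delta:
        return pB                             # Newton step inside the region
    pU = -(g @ g) / (g @ B @ g) * g           # steepest-descent minimizer p^U
    if np.linalg.norm(pU) >= delta:
        return delta * pU / np.linalg.norm(pU)   # truncate the first leg
    # otherwise find tau in [1,2] with ||pU + (tau-1)(pB - pU)|| = delta;
    # substituting s = tau - 1 gives a scalar quadratic in s
    d = pB - pU
    a, b, c = d @ d, 2 * (pU @ d), pU @ pU - delta ** 2
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + s * d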

Conjugate Gradient Steihaug’s Method

The most widely used method for solving the trust-region subproblem builds on the conjugate gradient (CG) method for minimizing a quadratic function, since CG guarantees convergence within finitely many steps for a quadratic program. The CG-Steihaug method also combines the merits of the Cauchy point and dogleg methods, offering both a superlinear convergence rate and inexpensive computation.


Pseudo-code for the CG-Steihaug method for solving the trust-region subproblem

Given tolerance \epsilon_k > 0;

Set z_0=0, r_0=\nabla f_k, d_0=-r_0=-\nabla f_k;

if ||r_0|| <\epsilon_k

return p_k = z_0 = 0;

for j = 0, 1, 2, . . .

if {d_j}^TB_k d_j <= 0

Find \tau such that p_k = z_j + \tau d_j minimizes m_k(p_k)

and satisfies ||p_k|| =  \Delta_k ;

return p_k ;

Set \alpha_j = {r_j}^Tr_j /{d_j}^TB_kd_j;

Set z_{j+1} = z_j + \alpha_jd_j ;

if ||z_{j+1}|| >= \Delta_k

Find \tau >= 0 such that p_k = z_j + \tau d_j satisfies ||p_k|| =  \Delta_k ;

return p_k ;

Set r_{j+1} = r_j + \alpha_j B_kd_j ;

if ||r_{j+1}|| <\epsilon_k

return p_k = z_{j+1};

Set \beta_{j+1} = \frac{{r_{j+1}}^T{r_{j+1}}}{{r_j}^Tr_j} ;

Set d_{j+1}=-r_{j+1}+\beta_{j+1}d_j ;


end (for).
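A minimal Python sketch of the pseudo-code above, assuming numpy (the names cg_steihaug and _to_boundary are ours). For simplicity, both boundary exits take the root \tau \geq 0; the negative-curvature branch of the pseudo-code strictly asks for the \tau that minimizes m_k, which this sketch does not compare against the negative root:

import numpy as np

def cg_steihaug(g, B, delta, eps=1e-8):
    z = np.zeros_like(g)
    r = g.copy()                      # r_0 = grad f_k
    d = -r                            # d_0 = -r_0
    if np.linalg.norm(r) < eps:
        return z
    for _ in range(g.size):           # CG ends in <= n steps exactly
        dBd = d @ B @ d
        if dBd <= 0:                  # negative curvature: go to boundary
            return _to_boundary(z, d, delta)
        alpha = (r @ r) / dBd
        z_next = z + alpha * d
        if np.linalg.norm(z_next) >= delta:   # step leaves the region
            return _to_boundary(z, d, delta)
        r_next = r + alpha * (B @ d)
        if np.linalg.norm(r_next) < eps:
            return z_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        z, r = z_next, r_next
    return z

def _to_boundary(z, d, delta):
    # smallest tau >= 0 with ||z + tau*d|| = delta
    a, b, c = d @ d, 2 * (z @ d), z @ z - delta ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return z + tau * d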


Conclusion

Trust-Region vs. Line Search

Line search methods:

- Pick an improving direction

- Pick the step size that minimizes the objective along that direction

- Update the incumbent solution


Trust-region methods:

- Pick the maximum step size (the trust-region subproblem is constrained)

- Solve the subproblem using the approximated model

- If the improvement is acceptable, update the incumbent solution and the size of the trust region
