Spatial branch and bound method

Revision as of 14:36, 24 May 2015 by Ezhuang

Authors: Ellen Zhuang (ChE 345 Spring 2015) Steward: Dajun Yue, Fengqi You


Spatial branch-and-bound is a divide-and-conquer technique used to find deterministic solutions of global optimization problems. It is a variant of the branch-and-bound method, which searches for the values of the decision variables x, drawn from the feasible region of the problem, that optimize the objective function; minimizing f(x) is equivalent to maximizing -f(x). Branch-and-bound systematically enumerates candidate solutions by iteratively searching the problem's solution space, which is organized as a rooted tree. In the branching step, the algorithm splits a node, i.e. a subset of the solution set, into smaller subsets. In the bounding step, it estimates a lower and an upper bound on the optimal solution at each node. Each node's bounds are checked against the best solution found so far, and if the node cannot produce a better solution, it is discarded in a step called pruning. The algorithm repeats these steps until the optimal solution is found. The method was first proposed by A. H. Land and A. G. Doig in 1960 for discrete programming, and its first applications were to discrete problems such as the traveling salesman problem.
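The branching, bounding, and pruning steps can be illustrated on a discrete problem like those in the method's earliest applications. The sketch below solves a small 0/1 knapsack instance (the item values, weights, and capacity are illustrative, not from the source), using a greedy fractional relaxation as the upper bound at each node:

```python
# Minimal branch-and-bound sketch for the 0/1 knapsack problem.
# Branching: take or leave each item. Bounding: greedy fractional
# relaxation. Pruning: discard nodes whose bound cannot beat the
# incumbent (best solution found so far).

def knapsack_bb(values, weights, capacity):
    n = len(values)
    best = [0]  # incumbent objective value

    def upper_bound(i, value, room):
        # Fractional relaxation: pack remaining items greedily by
        # value density, allowing one fractional item at the end.
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def branch(i, value, room):
        if value > best[0]:
            best[0] = value              # new incumbent
        if i == n:
            return
        if upper_bound(i, value, room) <= best[0]:
            return                       # prune: bound cannot beat incumbent
        if weights[i] <= room:           # branch: take item i ...
            branch(i + 1, value + values[i], room - weights[i])
        branch(i + 1, value, room)       # ... or leave it

    # Sort items by value density so the greedy bound is valid
    order = sorted(range(n), key=lambda j: values[j] / weights[j], reverse=True)
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    branch(0, 0, capacity)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # prints 220
```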

Spatial branch-and-bound techniques solve mixed integer nonlinear programs with non-convex relaxations. Unlike ordinary branch-and-bound methods for mixed integer linear programs, the spatial branch-and-bound method symbolically and recursively decomposes the nonlinear functions of the original problem into simple functions, each built from a single simple operator.
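As an illustration of this symbolic step, the sketch below (using a hypothetical nested-tuple expression representation) recursively flattens a nonlinear expression into a list of simple one-operator terms, each assigned to a fresh auxiliary variable, the form that sBB solvers then relax term by term:

```python
from itertools import count

def decompose(expr, aux, ids):
    """Flatten a nested tuple expression such as
    ('+', ('*', 'x', 'y'), ('exp', 'x')) into simple terms.
    Each term applies one operator and is recorded in aux as
    (auxiliary_variable, operator, operand_names)."""
    if isinstance(expr, str):
        return expr                        # a leaf variable: nothing to do
    op, *args = expr
    parts = [decompose(a, aux, ids) for a in args]
    w = f"w{next(ids)}"                    # fresh auxiliary variable
    aux.append((w, op, parts))
    return w

aux = []
decompose(('+', ('*', 'x', 'y'), ('exp', 'x')), aux, count(1))
for name, op, operands in aux:
    print(f"{name} = {op}({', '.join(operands)})")
# w1 = *(x, y)
# w2 = exp(x)
# w3 = +(w1, w2)
```

Each auxiliary definition involves only one operator, so a known convex envelope (e.g. the McCormick relaxation for a bilinear term) can be applied to it directly.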


The main algorithmic idea is to divide the problem into smaller subproblems by partitioning the domains of the variables, constructing upper and lower bounds on the objective function over each subdomain, and generating sequences of approximate solutions that converge to the true solution. The smaller subproblems are easier to solve than the original problem.

Convex relaxation is performed on the original nonconvex problem, producing an underestimate of the optimal value of the original function. The original problem is placed at the root node of the tree. At each iteration, the original problem is restricted to sub-regions, and each sub-region is relaxed to a convex problem. For each sub-region, lower and upper bounds on the optimal value of the objective function are computed. If the bounds converge, the global optimum over that sub-region has been found. If the gap between the bounds does not meet the desired tolerance, the subproblem is partitioned into smaller subproblems (descendants) by dividing the feasible interval of a continuous variable (the branching variable). The two subproblems are solved by recursive partitioning. If a subproblem cannot contain the global optimal solution, it is pruned. The process continues until the global optimum value of the original problem is found.
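This loop can be sketched on a one-dimensional nonconvex problem. The example below is illustrative (all names and the test function are assumptions, not from the source): it uses a simple interval-arithmetic underestimate in place of a full convex relaxation as the lower bound, the function value at the box midpoint as the upper bound, and bisection of the box as the branching step.

```python
from heapq import heappush, heappop

def f(x):
    return x**4 - 3 * x**2            # nonconvex: two global minima at x = ±√1.5

def lower_bound(a, b):
    # Interval-arithmetic underestimate of f over [a, b]: for any x in the
    # box, x**4 >= x4_min and x**2 <= x2_max, so f(x) >= x4_min - 3*x2_max.
    x4_min = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    x2_max = max(a**2, b**2)
    return x4_min - 3 * x2_max

def spatial_bb(a, b, tol=1e-6):
    best_x, best_f = a, f(a)          # incumbent: best feasible point so far
    boxes = [(lower_bound(a, b), a, b)]
    while boxes:
        lb, lo, hi = heappop(boxes)   # explore the most promising box first
        if lb > best_f - tol:
            continue                  # prune: box cannot beat the incumbent
        mid = 0.5 * (lo + hi)
        if f(mid) < best_f:
            best_x, best_f = mid, f(mid)   # tighten the upper bound
        # branch: bisect the box and bound each half
        for child in ((lo, mid), (mid, hi)):
            child_lb = lower_bound(*child)
            if child_lb < best_f - tol:
                heappush(boxes, (child_lb, *child))
    return best_x, best_f

x_star, f_star = spatial_bb(-2.0, 2.0)
print(round(f_star, 4))               # global minimum value, ≈ -2.25
```

The interval bound converges to the true function value as the boxes shrink, so the gap between the incumbent and the smallest remaining lower bound eventually falls below the tolerance and the worklist empties.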