# Column generation algorithms

*Latest revision as of 14:24, 4 June 2015*

Authors: Kedric Daly (Spring 2015)

Stewards: Dajun Yue, Fengqi You


# Introduction

Column generation algorithms are used for mixed-integer linear programming (MILP) problems. The formulation was initially proposed by Ford and Fulkerson in 1958[1]. The main advantage of column generation is that not all possibilities need to be enumerated. Instead, the problem is first formulated as a restricted master problem (RMP). This RMP has as few variables as possible, and new variables are brought into the basis as needed, similar to the simplex method[2]. "Similar to the simplex method" means that whenever a column with a negative reduced cost can be found, it is added to the RMP; this process is repeated until no such column remains.

# Formulation

*(Figure: a simple column generation flowchart.)*

The formulation of the column generation problem depends on the type of problem; one common example is the cutting stock problem. In all cases, the original problem is split into the RMP and a subproblem. The solution of the RMP determines some of the parameters of the subproblem, while the subproblem determines whether any column can enter the basis. The subproblem does this by solving for the minimum reduced cost. If the reduced cost is negative, the corresponding solution can enter the basis as a new column. If the reduced cost is greater than or equal to zero, a lower bound for the optimal solution has been found, although it may not be an integer solution.

# Examples

## Cutting Stock Problem (CSP)

In the cutting stock problem, the goal is to minimize the waste obtained from cutting rolls of fixed size (called "raws") while fulfilling customer orders.

For example, we may have steel rods of length $L = 16$ m, with customer orders for twenty-five 3 m rods, twenty 6 m rods, and eighteen 7 m rods.

Let $l_i$ be the length the customer demands. Thus,

$$\mathbf{l} = \begin{bmatrix} l_1 = 3\text{m} & l_2 = 6\text{m} & l_3 = 7\text{m} \end{bmatrix}^T$$

Let $b_i$ be the demand for pieces of length $l_i$. Thus,

$$\mathbf{b} = \begin{bmatrix} b_1 = 25 & b_2 = 20 & b_3 = 18 \end{bmatrix}^T$$

### Traditional IP formulation

The traditional integer programming formulation for the cutting stock problem involves minimizing the number of rolls that are cut, subject to demand constraints as well as an overall size constraint on each roll.

Let $N$ be the set of available rolls.

Let $y_n$ be 1 if roll $n$ is cut, and 0 otherwise.

Let $x_i^n$ be the number of times item $i$ is cut on roll $n$.

The IP formulation is then:

$$\begin{aligned}
\min \quad & \sum_{n \in N} y_n \\
\text{s.t.} \quad & \sum_{n \in N} x_i^n \geq b_i, & i = 1, \dots, m \\
& \sum_{i=1}^{m} l_i x_i^n \leq L y_n, & n \in N \\
& x_i^n \in \mathbb{Z}_+, \quad y_n \in \{0, 1\}
\end{aligned}$$

where $m$ is the number of distinct piece lengths ($m = 3$ here).

However, this formulation is inefficient and is difficult to solve to optimality for large numbers of variables[3]. Column generation algorithms can help solve this problem quickly by limiting the number of enumerations necessary.

### Column Generation Formulation

For the column generation formulation, the different patterns the rods can be cut into are the main focus[4].

Let $P$ be the set of all patterns that can be cut.

Let $a_{ip}$ be the number of pieces of length $l_i$ cut in pattern $p$.

Let $x_p$ be the number of times pattern $p$ is cut. Then the column generation RMP and its dual are:

$$\begin{aligned}
\min \quad & Z = \sum_{p \in P} x_p \\
\text{s.t.} \quad & A \mathbf{x} \geq \mathbf{b} \\
& x_p \geq 0
\end{aligned}$$

$$\begin{aligned}
\max \quad & Z^{dual} = \mathbf{b}^T \boldsymbol{\pi} \\
\text{s.t.} \quad & A^T \boldsymbol{\pi} \leq \mathbf{1} \\
& \pi_i \geq 0
\end{aligned}$$

An initial set of columns must now be selected. This can be done by selecting "fake" columns that we know will not end up in the solution, or by covering the basis. In this example, an identity matrix could be selected. A better basis-covering initial matrix uses $\left\lfloor L/l_i \right\rfloor$ on the diagonal, because we can always cut at least that many pieces of length $l_i$ from one raw. Thus, our initial $A$ matrix is:

$$A = \begin{bmatrix}
5 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 2
\end{bmatrix}$$
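The diagonal entries of this basis-covering matrix can be computed directly. A minimal sketch (variable names are ours, not from the article):

```python
# Basis-covering initial columns: pattern j cuts floor(L / l_j) pieces of length l_j.
L_raw = 16           # raw length
lengths = [3, 6, 7]  # piece lengths l_1, l_2, l_3

A0 = [[L_raw // lengths[j] if i == j else 0 for j in range(len(lengths))]
      for i in range(len(lengths))]
print(A0)  # [[5, 0, 0], [0, 2, 0], [0, 0, 2]]
```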

Solving the dual of the RMP then yields the dual multipliers $\boldsymbol{\pi} = \begin{bmatrix} \frac{1}{5} & \frac{1}{2} & \frac{1}{2} \end{bmatrix}^T$. These values are then passed to the sub-problem to see if any columns should be added to $A$. The sub-problem is as follows:

$$\begin{aligned}
c_r = z^{sub} = \min \left( 1 - \sum_{i=1}^m \pi_i a_{ip} \right) &= 1 - \max \left( \sum_{i=1}^m \pi_i a_{ip} \right) \\
\text{s.t.} \quad & \mathbf{l}^T \mathbf{a}_p \leq L \\
& a_{ip} \in \mathbb{Z}_+ \qquad \forall i
\end{aligned}$$

This sub-problem is a knapsack problem, which has been studied extensively. Dynamic programming or branch-and-bound can be used to solve it[5]. At the end of this sub-problem, we compute the reduced cost, $c_r$, to determine whether or not to add the solution column to $A$. As in the simplex algorithm, if the reduced cost is negative, the column is added to the RMP; otherwise we are done adding columns, and the most recent primal solution gives the lower-bound solution of the RMP.

Substituting the dual variables and other known quantities into the sub-problem gives:

$$\begin{aligned}
c_r = z^{sub} = \; & 1 - \max \left( \tfrac{1}{5} a_{1p} + \tfrac{1}{2} a_{2p} + \tfrac{1}{2} a_{3p} \right) \\
\text{s.t.} \quad & 3 a_{1p} + 6 a_{2p} + 7 a_{3p} \leq 16 \\
& a_{ip} \in \mathbb{Z}_+ \qquad \forall i
\end{aligned}$$
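This knapsack instance is small enough to solve with a short dynamic program over the raw length. A sketch using exact rational arithmetic (the optimum value is tied between two patterns, so only the value and reduced cost are checked; names are ours):

```python
from fractions import Fraction

pi = [Fraction(1, 5), Fraction(1, 2), Fraction(1, 2)]  # dual multipliers
lengths, L = [3, 6, 7], 16

# Unbounded knapsack: best[c] = max pattern value with total length <= c.
best = [Fraction(0)] * (L + 1)
for c in range(1, L + 1):
    for v, l in zip(pi, lengths):
        if l <= c:
            best[c] = max(best[c], best[c - l] + v)

reduced_cost = 1 - best[L]
print(best[L], reduced_cost)  # 6/5 -1/5
```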

Solving this sub-problem gives $\mathbf{a}_p = \begin{bmatrix} 1 & 2 & 0 \end{bmatrix}^T$, with a reduced cost of

$$c_r = 1 - \left( \tfrac{1}{5}(1) + \tfrac{1}{2}(2) + \tfrac{1}{2}(0) \right) = -\tfrac{1}{5} < 0.$$

Since this reduced cost is negative, the column $\mathbf{a}_p = \begin{bmatrix} 1 & 2 & 0 \end{bmatrix}^T$ is added to $A$ in the RMP, where it will replace one of the columns in the basis. After adding the column,

$$A = \begin{bmatrix}
5 & 0 & 0 & 1 \\
0 & 2 & 0 & 2 \\
0 & 0 & 2 & 0
\end{bmatrix}$$

Solving the dual of the new RMP yields the dual multipliers $\boldsymbol{\pi} = \begin{bmatrix} \frac{1}{5} & \frac{2}{5} & \frac{1}{2} \end{bmatrix}^T$. Again, these values are passed to the sub-problem, where they become the coefficients of the objective function. Solving the second iteration of the sub-problem yields $\mathbf{a}_p = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}^T$ with a reduced cost of

$$c_r = 1 - \left( \tfrac{1}{5}(1) + \tfrac{2}{5}(1) + \tfrac{1}{2}(1) \right) = -\tfrac{1}{10} < 0.$$

Since this reduced cost is negative, the column $\mathbf{a}_p = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}^T$ is added to $A$ and the algorithm continues.
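The reduced-cost arithmetic for this iteration can be checked directly with exact fractions (names are ours):

```python
from fractions import Fraction

pi = [Fraction(1, 5), Fraction(2, 5), Fraction(1, 2)]  # second-iteration duals
a_p = [1, 1, 1]                                        # candidate pattern

reduced_cost = 1 - sum(p * a for p, a in zip(pi, a_p))
print(reduced_cost)  # -1/10
```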

The new dual multipliers become $\boldsymbol{\pi} = \begin{bmatrix} \frac{1}{5} & \frac{2}{5} & \frac{2}{5} \end{bmatrix}^T$, and after substitution into the sub-problem, we find that the best solution column, $\mathbf{a}_p = \begin{bmatrix} 5 & 0 & 0 \end{bmatrix}^T$, has a reduced cost of 0. Since this reduced cost is not negative, the column is not added to $A$, and column generation stops. The optimal solution can then be found by simply optimizing the RMP with the most recent version of $A$. The resulting solution of the RMP is $\mathbf{x} = \begin{bmatrix} \frac{6}{5} & 0 & 0 & 1 & 18 \end{bmatrix}^T$, which gives an objective value of $Z = \sum_{p \in P} x_p = \frac{6}{5} + 1 + 18 = 20\tfrac{1}{5}$.
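This optimum can be verified by substituting it back into the final constraint matrix, whose five columns are the three initial patterns plus the two generated ones (exact arithmetic; names are ours):

```python
from fractions import Fraction

# Columns of the final A: three initial patterns plus (1,2,0) and (1,1,1).
A_cols = [[5, 0, 0], [0, 2, 0], [0, 0, 2], [1, 2, 0], [1, 1, 1]]
b = [25, 20, 18]
x = [Fraction(6, 5), 0, 0, 1, 18]

# A x meets each demand exactly, and Z = 101/5 = 20.2.
Ax = [sum(col[i] * xj for col, xj in zip(A_cols, x)) for i in range(3)]
Z = sum(x)
```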

This result is a lower bound on the integer solution of the CSP and, as here, is often fractional. For the CSP, simply rounding up is usually enough to obtain a feasible integer solution; in this case, 21 raws are needed to fill the orders.
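The whole procedure can be sketched end to end. This is an illustrative sketch, not the article's implementation: the RMP dual is solved here by brute-force vertex enumeration (viable only at this toy size), the pricing sub-problem by the knapsack dynamic program, and all names are ours. Tie-breaking may generate patterns in a different order than the walkthrough above, but the final LP bound is the same:

```python
from fractions import Fraction
from itertools import combinations
import math

def solve_linear(rows, rhs):
    """Exact Gaussian elimination on an n x n system; returns None if singular."""
    n = len(rhs)
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(rows, rhs)]
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        pv = M[c][c]
        M[c] = [v / pv for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [a - f * bb for a, bb in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def solve_rmp_dual(columns, b):
    """max b.pi  s.t.  a.pi <= 1 for each column a,  pi >= 0 (vertex enumeration)."""
    n = len(b)
    cons = [(list(col), Fraction(1)) for col in columns]
    cons += [([Fraction(-1) if j == i else Fraction(0) for j in range(n)], Fraction(0))
             for i in range(n)]
    best_pi, best_obj = None, None
    for subset in combinations(cons, n):  # each vertex has n tight constraints
        pi = solve_linear([a for a, _ in subset], [r for _, r in subset])
        if pi is None:
            continue
        if all(sum(ai * pj for ai, pj in zip(a, pi)) <= r for a, r in cons):
            obj = sum(bi * pj for bi, pj in zip(b, pi))
            if best_obj is None or obj > best_obj:
                best_pi, best_obj = pi, obj
    return best_pi, best_obj

def price(pi, lengths, L):
    """Unbounded knapsack: max sum(pi_i * a_i) s.t. sum(l_i * a_i) <= L, a_i in Z+."""
    best = [Fraction(0)] * (L + 1)
    choice = [None] * (L + 1)
    for c in range(1, L + 1):
        for i, l in enumerate(lengths):
            if l <= c and best[c - l] + pi[i] > best[c]:
                best[c], choice[c] = best[c - l] + pi[i], i
    pattern, c = [0] * len(lengths), L
    while choice[c] is not None:  # recover one optimal pattern
        pattern[choice[c]] += 1
        c -= lengths[choice[c]]
    return pattern, best[L]

lengths, L, b = [3, 6, 7], 16, [25, 20, 18]
# Start from the basis-covering diagonal columns floor(L / l_i).
columns = [[L // lengths[j] if i == j else 0 for i in range(3)] for j in range(3)]
while True:
    pi, lp_bound = solve_rmp_dual(columns, b)
    pattern, value = price(pi, lengths, L)
    if 1 - value >= 0:   # no column with negative reduced cost: stop
        break
    columns.append(pattern)

print(lp_bound, math.ceil(lp_bound))  # 101/5 21  (LP bound 20.2 -> 21 raws)
```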

## Other Examples

Other applications of column generation include[6]:

- Human resource planning

- Vehicle routing

- Air crew scheduling

All of these applications still follow the basic format of column generation. An RMP is formulated and solved, with parameters being sent to a subproblem. The subproblem is then solved and if the reduced cost of the solution is negative, the column is added to the RMP and the cycle continues until the reduced cost is nonnegative. The formulation of each problem varies due to the different parameters, but the overall approach is the same.

# Advantages and Disadvantages

Column generation algorithms are best used when there are a large number of variables, but not a large number of constraints by comparison. Enumerating all possibilities when there are a large number of variables, often due to many indices, takes a long time even with efficient solution methods. Column generation algorithms solve this by limiting what is enumerated, bringing columns into the basis only when needed. When columns are brought into the basis, it is also possible to remove whatever column was replaced by the entering column, which can help save memory while enumerating solutions. Saving time and memory is where column generation algorithms shine, although they are not without their drawbacks.

One of the main disadvantages of column generation is that it may be difficult to determine whether or not a problem can be formulated so that column generation will be beneficial. It is typically easier to come up with a standard MILP model than the column generation equivalent, since the column generation formulations are not always obvious. However, once this initial hurdle is overcome, column generation is a useful tool for solving MILP problems.

# Conclusion

Column generation algorithms are most useful when dealing with large numbers of variables. They are effective because they avoid enumerating all possible elements of a traditional MILP formulation and instead evaluate variables only as needed. This is accomplished by bringing columns into the RMP while their reduced cost is negative. The process repeats until the minimum reduced cost is nonnegative, at which point the most recent primal can be solved to obtain a bound for the MILP problem. While the column generation formulation of a MILP may be difficult to see at first, once a formulation is found, a column generation algorithm can offer substantial time savings.

# References

[1] L. R. Ford, Jr., D. R. Fulkerson, (1958) A Suggested Computation for Maximal Multi-Commodity Network Flows. Management Science 5(1):97-101. http://dx.doi.org/10.1287/mnsc.5.1.97

[2] Desrosiers, J., & Lübbecke, M. (2005). A Primer in Column Generation. In G. Desaulniers, J. Desrosiers & M. Solomon (Eds.), Column Generation (pp. 1-32): Springer US.

[3] (Nov 2012). Lecture 8: Column Generation [PDF document]. Retrieved from http://ocw.nctu.edu.tw/upload/classbfs121109080773803.pdf

[4] Stein, C. (2007) Column Generation: Cutting Stock - A Very Applied Method [PDF Document]. Retrieved from http://www.columbia.edu/~cs2035/courses/ieor4600.S07/columngeneration.pdf

[5] Column Generation [PDF document]. Retrieved from http://systemsbiology.ucsd.edu/sites/default/files/Attachments/Images/classes/convex_presentations/ColGen.pdf

[6] Gan, H. (2008) Column Generation [PDF document]. Retrieved from http://www.more.ms.unimelb.edu.au/students/operationsresearch/lecturenotes/620362_ColGen.pdf

[7] Giovanni Righini. (April 2013) Column Generation [PDF document]. Retrieved from http://homes.di.unimi.it/righini/Didattica/ComplementiRicercaOperativa/MaterialeCRO/CG.pdf