# Nondifferentiable Optimization

Author Name: Nathanael Robinson
Steward: Dajun Yue and Fengqi You

# Background

## Introduction

Non-differentiable optimization is a category of optimization that deals with objective functions that, for a variety of reasons, are not differentiable. The functions in this class are generally non-smooth: although continuous, they often contain sharp points or corners at which no tangent exists, and they are therefore not differentiable at those points. In practice, non-differentiable optimization encompasses a large variety of problems, and no single one-size-fits-all solution applies; however, a solution is often reached through the subgradient method. Non-differentiable functions frequently arise in real-world applications, commonly in economics, where cost functions often include sharp points. Early work on the optimization of non-differentiable functions was begun by the Soviet scientists Dubovitskii and Milyutin in the 1960s and led to continued research by Soviet scientists. The subject has remained an active field of study since, with different theories and methods applied to different cases.

## Cost Functions

In many cases, particularly in economics, the cost function, which serves as the objective function of the optimization problem, is non-differentiable. These non-smooth cost functions may include discontinuities and discontinuous gradients and often arise from discontinuous physical processes. Optimal solution of these cost functions is a matter of importance to economists, but it presents a variety of issues for numerical methods, leading to the need for specialized solution techniques.

*Figure: an example of a non-differentiable cost function, such as one that may be seen in economics.*

# Solution Methods

Differentiable problems with differentiable cost functions can generally be solved with gradient-based analytical methods, such as the Kuhn-Tucker conditions, and with numerical methods, such as steepest descent and conjugate gradient. However, the introduction of non-differentiable points invalidates these methods: the gradient, and hence a steepest-descent direction, is undefined at a kink. A common approach to a non-differentiable cost function is to transform the problem into a non-linear programming model in which all of the new functions involved are differentiable, so that solution is possible through ordinary means.

## Simple Kink Case

*Figure: an example of a two-parameter kink approximation.*

A common case of a non-differentiable function is the simple kink. The problem is of the form:
$\min \quad f(x)$
$\text{s.t.} \quad x \in Q \subset \mathbb{R}^n$

The function $f(x)$ is non-differentiable because of several simple kinks which can be modeled by:
$\gamma [f_i(x)] = \max\{0, f_i(x)\}, \qquad i \in I$

If these simple kinks were removed the function would be differentiable across the entire domain. Some other types of non-differentiable objective functions can be modeled as simple kinks to allow the same type of solution.
The approach to the simple kink case is to approximate each non-differentiable kink with a smooth function, which then allows conventional solution of the entire problem. This requires that the kinks be the only features rendering the function non-differentiable. A simple kink can be modeled by a two-parameter approximation, $\tilde{\gamma}[f(x), y, c]$, of the simple kink $\gamma [f(x)]$:

$\tilde{\gamma}[f(x), y, c] = \begin{cases} f(x) - (1-y)^2/(2c), & \text{if } (1-y)/c \le f(x), \\ y f(x) + \tfrac{1}{2}c[f(x)]^2, & \text{if } -y/c \le f(x) \le (1-y)/c, \\ -y^2/(2c), & \text{if } f(x) \le -y/c, \end{cases}$
where $y$ and $c$ are parameters with $0 \le y \le 1$ and $c > 0$.

Each kink $\gamma_i$ is replaced in the function by its two-parameter approximation, so that the new function $\tilde{f}(x)$ is differentiable for parameters $c > 0$ and $0 \le y \le 1$. The problem can now be solved iteratively by adjusting the parameters $c$ and $y$ and solving the optimization problem
$\min \quad \tilde{f}(x)$
$\text{s.t.} \quad x \in Q \subset \mathbb{R}^n$

A solution $x_k$ to the approximated problem will be obtained. The problem is then re-solved with an updated parameter $c_{k+1} = \beta c_k$, where $\beta > 1$ so that the approximation tightens; $y_{k+1}$ can also be updated if necessary. A new minimization is then carried out for the $k+1$ case. The procedure is repeated until a value of $f(x)$ consistent with the $c$ and $y$ parameters is reached.
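As a minimal sketch (not part of the original text), the two-parameter approximation can be written directly in Python; the function name `gamma_tilde` and the sample values are illustrative assumptions:

```python
def gamma_tilde(f_val, y, c):
    """Two-parameter smooth approximation of the simple kink max{0, f(x)}.

    Requires 0 <= y <= 1 and c > 0; larger c gives a tighter approximation.
    """
    if f_val >= (1 - y) / c:
        return f_val - (1 - y) ** 2 / (2 * c)
    elif f_val >= -y / c:
        return y * f_val + 0.5 * c * f_val ** 2
    else:
        return -(y ** 2) / (2 * c)

# As c grows (e.g. c_{k+1} = beta * c_k with beta > 1), the approximation
# approaches the kink max{0, f} itself.
for c in (1.0, 10.0, 1000.0):
    print(c, gamma_tilde(1.0, 0.5, c))   # approaches max{0, 1} = 1
```

Note that the three pieces agree at the breakpoints $f = (1-y)/c$ and $f = -y/c$, which is what makes the replacement smooth.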

## $\varepsilon$-Subgradient Method

If the non-differentiable function is convex and subject to convex constraints, the $\varepsilon$-subgradient method can be applied. This method is a descent algorithm for convex minimization problems.
With this method the constraints are not handled explicitly; instead the objective is assigned the value $+\infty$ outside the feasible set. Minimizing $g(\cdot)$ over the set $X$ is then equivalent to minimizing the extended-real-valued function $f(x) = g(x) + \delta(x|X)$, where $\delta(\cdot|X)$ is the indicator function of $X$. The method converges through a four-step procedure; the propositions underlying these steps are further detailed in [1].
Step 1: Select a vector $x_0$ such that $f(x_0) < \infty$, a scalar $\varepsilon_0 > 0$, and a scalar $a$ with $0 < a < 1$.
Step 2: Given $x_n$ and $\varepsilon_n > 0$, set $\varepsilon_{n+1} = a^k \varepsilon_n$, where $k$ is the smallest non-negative integer such that $0 \not\in \partial_{\varepsilon_{n+1}} f(x_n)$.
Step 3: Find a vector $y_n$ such that
$\sup_{x^* \in \partial_{\varepsilon_{n+1}} f(x_n)} \langle y_n , x^* \rangle < 0$
Step 4: Set $x_{n+1} = x_n + \lambda_n y_n,$ where $\lambda_n > 0$ is such that
$f(x_n) - f(x_{n+1}) > \varepsilon_{n+1}$
Return to Step 2 and iterate until convergence. The method is not only guaranteed to converge; each iteration makes measurable progress, decreasing the objective by at least $\varepsilon_{n+1}$.
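The full $\varepsilon$-subgradient step requires the $\varepsilon$-subdifferential, which is hard to compute in general. As an illustration only, the following sketch runs plain subgradient descent, a simpler relative of the method above, on the convex non-differentiable function $f(x) = |x| + |x - 2|$; the function, starting point, and step sizes are all illustrative assumptions:

```python
def f(x):
    # Convex, non-differentiable at x = 0 and x = 2; minimum value 2 on [0, 2].
    return abs(x) + abs(x - 2)

def subgrad(x):
    # One valid subgradient of f at x (0 is chosen at the kinks).
    def s(t):
        return (t > 0) - (t < 0)
    return s(x) + s(x - 2)

x = 5.0
for n in range(1, 200):
    g = subgrad(x)
    if g == 0:              # 0 is a subgradient => x is optimal
        break
    x = x - (2.0 / n) * g   # diminishing step sizes

print(x, f(x))
```

Unlike gradient descent, a subgradient step need not decrease the objective at every iteration; convergence relies on the diminishing step-size rule.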

## Cutting Plane Methods

Cutting planes were first utilized for the solution of convex non-differentiable problems. The method uses the subgradient inequality to replace the function $f$ with the approximation
$f(x) \cong \max_{i \in I} \left\{ f(x_i) + \xi_i^T(x-x_i) \right\}$

where $\xi_i, \; i \in I,$ are subgradients of $f$ at the points $x_i, \; i \in I$. The original problem can thus be reformulated as

$\min_{x} \; \max_{i \in I} \left\{ f(x_i) + \xi_i^T(x-x_i) \right\}$
which is equivalent to the new problem

$\min \quad v$
$\text{s.t.} \quad f(x_i) + \xi_i^T(x-x_i) \le v \qquad \forall i \in I$

This new formulation is differentiable and easier to handle; however, it is only an approximation of the original problem, and it becomes a better approximation as more cutting planes (constraints) are added to the model.
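As an illustrative sketch (not from the original text), the cutting-plane iteration can be run for a one-dimensional convex function, where the subproblem of minimizing the maximum of the accumulated cuts can be solved exactly by checking the interval endpoints and the pairwise intersections of the cuts; the function, bounds, and names are assumptions:

```python
def cutting_plane_1d(f, subgrad, lb, ub, iters=20):
    """Minimize a 1-D convex f on [lb, ub] by accumulating cutting planes.

    Each cut is the line f(x_i) + g_i * (x - x_i), stored as (slope, intercept).
    """
    cuts = []
    x = (lb + ub) / 2.0
    for _ in range(iters):
        g = subgrad(x)
        cuts.append((g, f(x) - g * x))          # add a cut at the current iterate
        model = lambda t: max(a * t + b for a, b in cuts)
        # The max of affine cuts is minimized at an endpoint or a cut intersection.
        candidates = [lb, ub]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                (ai, bi), (aj, bj) = cuts[i], cuts[j]
                if ai != aj:
                    t = (bj - bi) / (ai - aj)
                    if lb <= t <= ub:
                        candidates.append(t)
        x = min(candidates, key=model)
    return x

# Example: f(x) = |x - 1|, with subgradient sign(x - 1).
x_star = cutting_plane_1d(lambda x: abs(x - 1),
                          lambda x: 1.0 if x >= 1 else -1.0,
                          lb=-5.0, ub=5.0)
print(x_star)
```

In higher dimensions the inner subproblem is the linear program $\min v$ subject to the cut constraints; the 1-D enumeration above is just a stand-in for that LP.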

## Absolute Value Example

A simple example of non-differentiable optimization is the approximation of a kink originating from an absolute value function. The function $f(x) = |x|$ is continuous over its entire domain but non-differentiable at $x = 0$ due to the presence of a "kink", a point at which no tangent exists. Since the non-differentiable point is known, the function can be relaxed and smoothed by an approximation with parameter $t > 0$:
$\tilde{f}(x) = \begin{cases} -x, & x \le -t, \\ \tfrac{x^2 + t^2}{2t}, & -t \le x \le t, \\ x, & x \ge t. \end{cases}$
The quadratic middle piece agrees with $|x|$ in both value and slope at $x = \pm t$, so $\tilde{f}$ is differentiable everywhere, and the approximation improves as $t \to 0$.
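A quick numerical check of one standard smoothing of $|x|$ (a sketch; the function name `smooth_abs` is an assumption) shows that the approximation matches $|x|$ outside $[-t, t]$, is continuous at the breakpoints, and has maximum error $t/2$ at the origin:

```python
def smooth_abs(x, t):
    # Smooth approximation of |x| with parameter t > 0:
    # quadratic on [-t, t], equal to |x| outside.
    if x >= t:
        return x
    elif x <= -t:
        return -x
    else:
        return (x * x + t * t) / (2 * t)

t = 0.1
print(smooth_abs(t, t), abs(t))   # the pieces agree at x = t
print(smooth_abs(0.0, t))         # t/2, the maximum smoothing error
```

Shrinking $t$ over successive solves, much like growing $c$ in the simple kink case, trades conditioning for accuracy of the approximation.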