Overview: Optimization

This section introduces some of the mathematical concepts used with IDL Advanced Math and Stats.

Unconstrained Minimization

The unconstrained minimization problem can be stated as follows:

    min f(x),   x ∈ Rⁿ

where f : Rⁿ → R is continuous and has derivatives of all orders required by the algorithms. The functions for unconstrained minimization are grouped into three categories: univariate functions, multivariate functions, and nonlinear least-squares functions.

For the univariate functions, it is assumed that the function is unimodal within the specified interval. For a discussion of unimodality, see Brent (1973).
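As a language-neutral sketch of this technique (not the IMSL API), the following uses SciPy's bounded scalar minimizer, which applies Brent-style golden-section search with parabolic interpolation to a function assumed unimodal on the given interval. The function and interval are illustrative:

```python
from scipy.optimize import minimize_scalar

# f is unimodal on [0, 4], with its single minimum at x = 2.
f = lambda x: (x - 2.0) ** 2 + 1.0

# 'bounded' minimizes over a fixed interval using Brent-style
# golden-section search combined with parabolic interpolation.
res = minimize_scalar(f, bounds=(0.0, 4.0), method='bounded')
print(res.x)  # close to 2.0
```

If the function is not unimodal on the interval, the solver still returns a point, but it may be only a local minimum, which mirrors the caveat above.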

A quasi-Newton method is used for the multivariate IMSL_FMINV function. The default is to use a finite-difference approximation of the gradient of f(x). Here, the gradient is defined to be the following vector:

    ∇f(x) = (∂f(x)/∂x₁, ∂f(x)/∂x₂, ..., ∂f(x)/∂xₙ)ᵀ

When the exact gradient can be easily provided, the grad argument should be used.
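The trade-off between a finite-difference gradient and a user-supplied exact gradient can be sketched with SciPy's quasi-Newton (BFGS) minimizer; this stands in for the IMSL routine and its grad argument, which it does not reproduce. The Rosenbrock test function is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock function: minimum at (1, 1).
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    # Exact gradient vector (df/dx1, df/dx2).
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

x0 = np.array([-1.2, 1.0])

# Without jac=, BFGS falls back to a finite-difference gradient.
fd = minimize(f, x0, method='BFGS')

# Supplying the exact gradient typically costs fewer function
# evaluations and gives a more accurate solution.
ex = minimize(f, x0, method='BFGS', jac=grad)
print(fd.x, ex.x)  # both near (1, 1)
```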

The nonlinear least-squares function uses a modified Levenberg-Marquardt algorithm. The most common application of the function is the nonlinear data-fitting problem where the user is trying to fit the data with a nonlinear model.
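A minimal sketch of such a data-fitting problem, using SciPy's Levenberg-Marquardt solver rather than the IMSL function (the exponential model and data are made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data generated from the model y = a * exp(b * t)
# with a = 2 and b = -1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)

def residuals(p):
    # Residual vector: model minus data, one entry per point.
    a, b = p
    return a * np.exp(b * t) - y

# method='lm' selects the (unbounded) Levenberg-Marquardt algorithm.
fit = least_squares(residuals, x0=[1.0, 0.0], method='lm')
print(fit.x)  # close to [2, -1]
```

Minimizing the sum of squared residuals in this form, rather than as a generic function of (a, b), lets the solver exploit the Jacobian structure of the residual vector, which is what makes Levenberg-Marquardt effective for data fitting.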

These functions are designed to find only a local minimum point. However, a function may have many local minima. Try different initial points and intervals to obtain a better local solution.
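The advice to try several initial points can be sketched as a simple multi-start loop; the multimodal test function and SciPy solver are illustrative, not part of the IMSL interface:

```python
import numpy as np
from scipy.optimize import minimize

# A function with several local minima.
f = lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2

rng = np.random.default_rng(0)
best = None
# Each local solve finds only the minimum nearest its start;
# keeping the best of several starts improves the odds of a
# good local (possibly global) solution.
for x0 in rng.uniform(-4.0, 4.0, size=10):
    r = minimize(f, [x0], method='BFGS')
    if best is None or r.fun < best.fun:
        best = r
print(best.x, best.fun)
```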

Double-precision arithmetic is recommended for the functions when the user provides only the function values.

Linearly Constrained Minimization

The linearly constrained minimization problem can be stated as follows:

    min f(x),   x ∈ Rⁿ

subject to:

    A₁x = b₁
    A₂x ≤ b₂

where f : Rⁿ → R, A₁ and A₂ are coefficient matrices, and b₁ and b₂ are vectors. If f(x) is linear, then the problem is a linear programming problem; if f(x) is quadratic, the problem is a quadratic programming problem.

The IMSL_LINPROG function uses a revised simplex method to solve small- to medium-sized linear programming problems. No sparsity is assumed since the coefficients are stored in full matrix form.
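A small dense linear program in this form can be sketched with SciPy's linprog (its default HiGHS solver, not the IMSL revised simplex implementation); the problem data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize x1 + 2*x2 subject to
#   x1 +   x2 <= 4
#   x1 + 3*x2 <= 6
#   x1, x2 >= 0
# linprog minimizes, so negate the objective.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)  # optimum at (3, 1) with value 5
```

As in IMSL_LINPROG, the constraint coefficients here are stored as full (dense) matrices, so this approach suits small- to medium-sized problems.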

The IMSL_QUADPROG function is designed to solve convex quadratic programming problems using a dual quadratic programming algorithm. If the given Hessian is not positive definite, then IMSL_QUADPROG modifies it to be positive definite. In this case, output should be interpreted with care because the problem has been changed slightly. Here, the Hessian of f(x) is defined to be the n × n matrix as follows:

    ∇²f(x) = [∂²f(x)/∂xᵢ∂xⱼ],   i, j = 1, 2, ..., n
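For the equality-constrained case, a convex quadratic program can be solved exactly from its KKT conditions, which reduce to one linear system; this sketch uses NumPy directly (illustrative data, not the IMSL dual algorithm):

```python
import numpy as np

# Convex QP: minimize (1/2) x^T H x + c^T x subject to A x = b,
# with H symmetric positive definite.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# KKT conditions: H x + A^T lam = -c and A x = b, a linear
# system in the primal variables x and multipliers lam.
n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x)  # minimizer on the constraint plane
```

If H were indefinite, the KKT matrix could still be factored but the solution need not be a minimizer; this is why a solver that requires convexity must modify a non-positive-definite Hessian, with the caveat noted above.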

Nonlinearly Constrained Minimization

The nonlinearly constrained minimization problem can be stated as follows:

    min f(x),   x ∈ Rⁿ

subject to:

    gᵢ(x) = 0   for   i = 1, 2, ..., m₁

    gᵢ(x) ≥ 0   for   i = m₁ + 1, ..., m

where f : Rⁿ → R and gᵢ : Rⁿ → R for i = 1, 2, ..., m.

The IMSL_CONSTRAINED_NLP routine uses a sequential equality-constrained quadratic programming method. A more complete discussion of this algorithm can be found in the description of IMSL_CONSTRAINED_NLP.
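A problem in the form above can be sketched with SciPy's SLSQP solver, which is likewise a sequential quadratic programming method (it is not the IMSL algorithm, and the objective and constraints are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2
# subject to g1(x) = x1 + x2 - 2 = 0   (equality)
# and        g2(x) = x1 >= 0           (inequality).
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
cons = [
    {'type': 'eq', 'fun': lambda x: x[0] + x[1] - 2},
    {'type': 'ineq', 'fun': lambda x: x[0]},
]

# Each SLSQP iteration solves a quadratic programming subproblem
# built from linearizations of the constraints.
res = minimize(f, x0=[0.5, 0.5], method='SLSQP', constraints=cons)
print(res.x)  # constrained minimizer, near (0.5, 1.5)
```

As with the unconstrained functions, an SQP method finds only a local solution, so trying several starting points remains good practice.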