IMSL_CONSTRAINED_NLP


The IMSL_CONSTRAINED_NLP function solves a general nonlinear programming problem using a sequential equality constrained quadratic programming method.

Note
This routine requires an IDL Advanced Math and Stats license. For more information, contact your ITT Visual Information Solutions sales or technical support representative.

Syntax

Result = IMSL_CONSTRAINED_NLP (f, m, n [, /DOUBLE] [, DEL0=value] [, DELMIN=string] [, DIFFTYPE=value] [, EPSDIF=value] [, EPSFCN=value] [, GRAD=value] [, IBTYPE=string] [, ITMAX=value] [, MEQ=value] [, OBJ=value] [, SCFMAX=string] [, SMALLW=string] [, TAU0=value] [, TAUBND=value] [, XGUESS=array] [, XLB=variable] [, XSCALE=vector] [, XUB=variable])

Return Value

The solution of the nonlinear programming problem.

Arguments

f

Scalar string specifying a user-supplied procedure to evaluate the objective function and constraints at a given point. As illustrated in the Example below, the procedure specified by f has the following parameters:

x: One-dimensional array of length n containing the point at which the evaluation takes place. (Input)

iact: Integer indicating which function is to be evaluated: 0 for the objective function, or i (1 ≤ i ≤ m) for the i-th constraint. (Input)

result: Named variable into which the requested function value is stored. (Output)

ierr: Named variable; set it to a nonzero value to signal that the function cannot be evaluated at x. (Output)

m

Total number of constraints.

n

Number of variables.

Keywords

DOUBLE

If present and nonzero, double precision is used.

DEL0

In the initial phase of minimization, a constraint is considered binding if:

g_i(x) / max(1, ||∇g_i(x)||) ≤ DEL0, for i = MEQ + 1, ..., m

Good values are between 0.01 and 1.0. If DEL0 is too small, then identification of the correct set of binding constraints may be delayed. Conversely, if DEL0 is too large, then the method will often escape to the full regularized SQP method. This method uses individual slack variables for any active constraint, which is quite costly. For well-scaled problems, DEL0 = 1.0 is reasonable. Default: DEL0 = 0.5*TAU0

DELMIN

Scalar which defines allowable constraint violations of the final accepted result. Equality constraints are satisfied if |g_i(x)| ≤ DELMIN, and inequality constraints are satisfied if g_i(x) ≥ −DELMIN. Default: DELMIN = min(DEL0/10, max(EPSDIF, max(1.e-6*DEL0, SMALLW)))

DIFFTYPE

Scalar specifying the type of numerical differentiation to be used when gradients are computed internally:

1: Forward difference quotient (one additional function evaluation per gradient component).

2: Symmetric (central) difference quotient.

3: Sixth-order approximation obtained by Richardson extrapolation of symmetric difference quotients.

Default: DIFFTYPE = 1

This keyword is not valid if the keyword GRAD is supplied.
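The trade-off between forward and symmetric difference quotients, the two most common choices for numerical differentiation, can be sketched in Python. This is an illustrative analogy, not the IMSL implementation:

```python
import math

def forward_diff(f, x, h=1e-6):
    # One extra evaluation per component; truncation error is O(h).
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # Two extra evaluations per component; truncation error is O(h**2).
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)                 # d/dx sin(x) at x = 1
fwd = forward_diff(math.sin, 1.0)
ctr = central_diff(math.sin, 1.0)
# The symmetric quotient is markedly closer to the true derivative.
```

Higher-order types cost more function evaluations per gradient component in exchange for smaller truncation error, which is why the cheapest type is a sensible default.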

EPSDIF

Relative precision in gradients. Default: EPSDIF = eps where eps is the machine precision. This keyword is not valid if the keyword GRAD is supplied.

EPSFCN

Relative precision of the function evaluation routine. Default: EPSFCN = eps where eps is the machine precision. This keyword is not valid if the keyword GRAD is supplied.

GRAD

Scalar string specifying a user-supplied procedure to evaluate the gradients at a given point. As illustrated in the Example below, the procedure specified by GRAD has the following parameters:

x: One-dimensional array of length n containing the point at which the gradient is evaluated. (Input)

iact: Integer indicating for which function the gradient is to be evaluated: 0 for the objective function, or i (1 ≤ i ≤ m) for the i-th constraint. (Input)

result: Named variable into which the n-element gradient vector is stored. (Output)

IBTYPE

Scalar indicating the types of bounds on variables:

0: User supplies all the bounds (XLB and XUB).

1: All variables are nonnegative.

2: All variables are nonpositive.

3: User supplies only the bounds of the first variable; all other variables receive the same bounds.

Default: IBTYPE = 0

ITMAX

Maximum number of iterations allowed. Default: ITMAX = 200

MEQ

Number of equality constraints. Default: MEQ = m

OBJ

Name of a variable into which a scalar containing the value of the objective function at the computed solution is stored.

SCFMAX

Scalar containing the bound for the internal automatic scaling of the objective function. Default: SCFMAX = 1.0e4

SMALLW

Scalar containing the error allowed in the multipliers. For example, a negative multiplier of an inequality constraint is accepted (as zero) if its absolute value is less than SMALLW. Default: SMALLW = exp(2*log(eps/3)) where eps is the machine precision.
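The SMALLW default is simply (eps/3) squared, written in exponential form. A quick Python check, using double-precision machine epsilon as an illustration:

```python
import math
import sys

eps = sys.float_info.epsilon               # double-precision machine epsilon
smallw = math.exp(2 * math.log(eps / 3))   # the documented default
# Since exp(2*log(t)) == t**2, this is just (eps/3) squared.
```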

TAU0

A universal bound describing how much the unscaled penalty-term may deviate from zero.

IMSL_CONSTRAINED_NLP assumes that within the region described by:

sum(i=1,...,MEQ) |g_i(x)| − sum(i=MEQ+1,...,m) min(0, g_i(x)) ≤ TAU0

all functions may be evaluated safely. The initial guess, however, may violate these requirements. In that case, an initial feasibility improvement phase is run by IMSL_CONSTRAINED_NLP until such a point is found. A small TAU0 diminishes the efficiency of IMSL_CONSTRAINED_NLP, because the iterates then will follow the boundary of the feasible set closely. Conversely, a large TAU0 may degrade the reliability of the code. Default: TAU0 = 1.0

TAUBND

Amount by which bounds may be violated during numerical differentiation. Bounds are violated by TAUBND (at most) only if a variable is on a bound and finite differences are taken for gradient evaluations. This keyword is not valid if the keyword GRAD is supplied. Default: TAUBND = 1.0.

XGUESS

Array with n components containing an initial guess of the computed solution. Default: XGUESS = x, the point with the smallest value of ||x||_2 that satisfies the bounds.

XLB

Named variable, containing a one-dimensional array with n components, containing the lower bounds on the variables. (Input, if IBTYPE = 0; Output, if IBTYPE = 1 or 2; Input/Output, if IBTYPE = 3). If there is no lower bound on a variable, the corresponding XLB value should be set to negative machine infinity. Default: no lower bounds are enforced on the variables

XSCALE

Vector of length n setting the internal scaling of the variables. The initial guess, as well as the objective function and gradient evaluations, is always given in the original unscaled variables. The internal variables are obtained by dividing the values x(i) by XSCALE(i). This keyword is not valid if the keyword GRAD is supplied.

In the absence of other information, set all entries to 1.0. Default: XSCALE(*) = 1.0.

XUB

Named variable, containing a one-dimensional array with n components, containing the upper bounds on the variables. (Input, if IBTYPE = 0; Output, if IBTYPE = 1 or 2; Input/Output, if IBTYPE = 3). If there is no upper bound on a variable, the corresponding XUB value should be set to positive machine infinity. Default: no upper bounds are enforced on variables.

Description

The routine IMSL_CONSTRAINED_NLP provides an interface to a licensed version of subroutine DONLP2, a code developed by Peter Spellucci (1998). It uses a sequential equality constrained quadratic programming method with an active set technique, and an alternative usage of a fully regularized mixed constrained subproblem in case of nonregular constraints (for example, linearly dependent gradients in the "working sets"). It uses a slightly modified version of the Pantoja-Mayne update for the Hessian of the Lagrangian, variable dual scaling, and an improved Armijo-type stepsize algorithm. Bounds on the variables are treated in a gradient-projection-like fashion. Details may be found in Spellucci (1998).

The problem is stated as follows:

minimize f(x), x ∈ R^n

subject to:

g_j(x) = 0, for j = 1, ..., MEQ

g_j(x) ≥ 0, for j = MEQ + 1, ..., m

x_l ≤ x ≤ x_u

Although default values are provided for input keywords, it may be necessary to adjust these values for some problems. Through the use of keywords, IMSL_CONSTRAINED_NLP allows several parameters of the algorithm to be adjusted to account for specific characteristics of a problem. The DONLP2 Users Guide provides detailed descriptions of these parameters as well as strategies for maximizing the performance of the algorithm. The DONLP2 Users Guide is available in the "manuals" subdirectory of the main product installation directory.
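The single-procedure convention used by the f argument, in which one routine evaluates either the objective or one constraint depending on an iact index, can be sketched in Python for the problem solved in the Example below. The names here are illustrative, not part of the IMSL API:

```python
def nlp_fcn(x, iact):
    """Evaluate the objective (iact = 0) or constraint number iact at x.

    Mirrors the f/iact dispatch convention: a single routine serves the
    objective function and all m constraints.
    """
    if iact == 0:
        # Objective: (x1 - 2)^2 + (x2 - 1)^2
        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    elif iact == 1:
        # Equality constraint g1(x) = x1 - 2*x2 + 1 = 0
        return x[0] - 2.0 * x[1] + 1.0
    elif iact == 2:
        # Inequality constraint g2(x) = -x1^2/4 - x2^2 + 1 >= 0
        return -x[0] ** 2 / 4.0 - x[1] ** 2 + 1.0
    raise ValueError("iact out of range")
```

A solver calls such a routine once per function per iterate, which is why the IDL interface also accepts an ierr flag so the evaluation can report failure at infeasible points.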

Example

The problem:

min F(x) = (x1 − 2)^2 + (x2 − 1)^2

subject to:

g1(x) = x1 – 2x2 + 1 = 0

g2(x) = −x1^2/4 − x2^2 + 1 ≥ 0

is solved first with finite difference gradients, then with analytic gradients.

PRO Nlp_grad, x, iact, result 
   CASE iact OF 
      0:result = [ 2 * (x(0) - 2.), 2 * (x(1)-1.)] 
      1:result = [1., -2. ] 
      2:result = [-0.5*x(0), -2.0*x(1)] 
   ENDCASE 
   RETURN 
END 
 
PRO Nlp_fcn, x, iact, result, ierr 
   tmp1 = x(0)-2. 
   tmp2 = x(1) - 1. 
   CASE iact OF 
      0:result = tmp1^2 + tmp2^2 
      1:result = x(0) -2.*x(1) + 1. 
      2:result = -(x(0)^2)/4. - x(1)^2 + 1. 
   ENDCASE 
   ierr = 0 
END 
 
; Ex #1, Finite difference gradients 
ans1 = IMSL_CONSTRAINED_NLP('nlp_fcn', 2, 2, MEQ = 1) 
PM, ans1, title='X with finite difference gradient' 
    
; Ex #2, Analytic gradients 
ans2 = IMSL_CONSTRAINED_NLP('nlp_fcn', 2, 2, MEQ = 1, $ 
   GRAD = 'nlp_grad') 
PM, ans2, title='X with Analytic gradient' 

Output

X with finite difference gradient 
     0.822877 
     0.911439 
X with Analytic gradient 
     0.822877 
     0.911438 
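The reported solution can be verified by hand: substituting the equality constraint x1 = 2*x2 − 1 into the active inequality x1^2/4 + x2^2 = 1 gives the quadratic 2*x2^2 − x2 − 3/4 = 0, whose positive root is x2 = (1 + sqrt(7))/4. A short Python check of this derivation (illustrative, not part of the IMSL output):

```python
import math

# Equality constraint: x1 = 2*x2 - 1.  The inequality is active at the
# solution, so x1**2/4 + x2**2 = 1.  Substituting yields the quadratic
# 2*x2**2 - x2 - 0.75 = 0, whose positive root is:
x2 = (1.0 + math.sqrt(7.0)) / 4.0
x1 = 2.0 * x2 - 1.0
# (x1, x2) is approximately (0.822876, 0.911438), matching the output above.
```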

Version History

6.4

Introduced