Reference

NLPModelsTest.BNDROSENBROCKType
nls = BNDROSENBROCK()

Rosenbrock function in nonlinear least squares format with bound constraints.

\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & -1 \leq x_1 \leq 0.8 \\ & -2 \leq x_2 \leq 2 \end{aligned}\]

where

\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]

Starting point: [-1.2; 1].

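A minimal usage sketch (assuming NLPModels.jl is loaded for the evaluation API):

using LinearAlgebra, NLPModels, NLPModelsTest

nls = BNDROSENBROCK()
x = nls.meta.x0                  # starting point [-1.2; 1.0]
r = residual(nls, x)             # F(x)
obj(nls, x) ≈ dot(r, r) / 2      # objective is ½‖F(x)‖²
nls.meta.lvar, nls.meta.uvar     # bound vectors [-1, -2] and [0.8, 2]
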
NLPModelsTest.BROWNDENType
nlp = BROWNDEN()

Brown and Dennis function.

Source: Problem 16 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981

classification SUR2-AN-4-0

\[\min_x \ \sum_{i=1}^{20} \left(\left(x_1 + \tfrac{i}{5} x_2 - e^{i / 5}\right)^2 + \left(x_3 + \sin(\tfrac{i}{5}) x_4 - \cos(\tfrac{i}{5})\right)^2\right)^2\]

Starting point: [25.0; 5.0; -5.0; -1.0]

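A minimal usage sketch (assuming NLPModels.jl is loaded for the evaluation API):

using NLPModels, NLPModelsTest

nlp = BROWNDEN()
x = nlp.meta.x0       # [25.0; 5.0; -5.0; -1.0]
fx = obj(nlp, x)      # objective value at the starting point
gx = grad(nlp, x)     # gradient, length 4
Hx = hess(nlp, x)     # symmetric 4×4 Hessian
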
NLPModelsTest.HS10Type
nlp = HS10()

Problem 10 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & x_1 - x_2 \\ \text{s. to} \quad & -3x_1^2 + 2x_1 x_2 - x_2^2 + 1 \geq 0 \end{aligned}\]

Starting point: [-10; 10].

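A sketch of evaluating the constraint (assuming NLPModels.jl is loaded); the inequality is encoded through the constraint bounds lcon and ucon:

using NLPModels, NLPModelsTest

nlp = HS10()
x = nlp.meta.x0                 # [-10.0; 10.0]
cx = cons(nlp, x)               # value of -3x₁² + 2x₁x₂ - x₂² + 1
nlp.meta.lcon, nlp.meta.ucon    # [0.0] and [Inf]: the constraint must be nonnegative
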
NLPModelsTest.HS11Type
nlp = HS11()

Problem 11 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & (x_1 - 5)^2 + x_2^2 - 25 \\ \text{s. to} \quad & 0 \leq -x_1^2 + x_2 \end{aligned}\]

Starting point: [-4.9; 0.1].

NLPModelsTest.HS13Type
nlp = HS13()

Problem 13 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & (x_1 - 2)^2 + x_2^2 \\ \text{s. to} \quad & (1 - x_1)^3 - x_2 \geq 0 \\ & 0 \leq x_1 \\ & 0 \leq x_2 \end{aligned}\]

Starting point: [-2; -2].

NLPModelsTest.HS14Type
nlp = HS14()

Problem 14 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & (x_1 - 2)^2 + (x_2 - 1)^2 \\ \text{s. to} \quad & x_1 - 2x_2 = -1 \\ & -\tfrac{1}{4} x_1^2 - x_2^2 + 1 \geq 0 \end{aligned}\]

Starting point: [2; 2].

NLPModelsTest.HS5Type
nlp = HS5()

Problem 5 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & \sin(x_1 + x_2) + (x_1 - x_2)^2 - \tfrac{3}{2}x_1 + \tfrac{5}{2}x_2 + 1 \\ \text{s. to} \quad & -1.5 \leq x_1 \leq 4 \\ & -3 \leq x_2 \leq 3 \end{aligned}\]

Starting point: [0.0; 0.0].

NLPModelsTest.HS6Type
nlp = HS6()

Problem 6 in the Hock-Schittkowski suite

\[\begin{aligned} \min \quad & (1 - x_1)^2 \\ \text{s. to} \quad & 10 (x_2 - x_1^2) = 0 \end{aligned}\]

Starting point: [-1.2; 1.0].

NLPModelsTest.LINCONType
nlp = LINCON()

Linearly constrained problem

\[\begin{aligned} \min \quad & \sum_{i=1}^{15} \left( i + x_i^4 \right) \\ \text{s. to} \quad & x_{15} = 0 \\ & x_{10} + 2x_{11} + 3x_{12} \geq 1 \\ & x_{13} - x_{14} \leq 16 \\ & -11 \leq 5x_8 - 6x_9 \leq 9 \\ & -2x_7 = -1 \\ & 4x_6 = 1 \\ & x_1 + 2x_2 \geq -5 \\ & 3x_1 + 4x_2 \geq -6 \\ & 9x_3 \leq 1 \\ & 12x_4 \leq 2 \\ & 15x_5 \leq 3 \end{aligned}\]

Starting point: zeros(15).

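Since all constraints are linear, this problem is convenient for exercising the linear-constraint parts of the API; a sketch (assuming NLPModels.jl is loaded):

using NLPModels, NLPModelsTest

nlp = LINCON()
x = nlp.meta.x0     # zeros(15)
cons(nlp, x)        # all nlp.meta.ncon constraint values
nlp.meta.lin        # indices of the linear constraints (here, all of them)
jac(nlp, x)         # the constant Jacobian of the constraints
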
NLPModelsTest.LINSVType
nlp = LINSV()

Linear problem

\[\begin{aligned} \min \quad & x_1 \\ \text{s. to} \quad & x_1 + x_2 \geq 3 \\ & x_2 \geq 1 \end{aligned}\]

Starting point: [0; 0].

NLPModelsTest.LLSType
nls = LLS()

Linear least squares

\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_1 + x_2 \geq 0 \end{aligned}\]

where

\[F(x) = \begin{bmatrix} x_1 - x_2 \\ x_1 + x_2 - 2 \\ x_2 - 2 \end{bmatrix}.\]

Starting point: [0; 0].

NLPModelsTest.MGH01Type
nls = MGH01()

Rosenbrock function in nonlinear least squares format

Source: Problem 1 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981

\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \end{aligned}\]

where

\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]

Starting point: [-1.2; 1].

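A sketch of the NLS-specific API on this model (assuming NLPModels.jl is loaded):

using NLPModels, NLPModelsTest

nls = MGH01()
x = nls.meta.x0          # [-1.2; 1.0]
residual(nls, x)         # F(x), length 2
jac_residual(nls, x)     # 2×2 Jacobian of F
obj(nls, x)              # ½‖F(x)‖², assembled from the residual
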
NLPModelsTest.MGH01FeasType
nlp = MGH01Feas()

Rosenbrock function in feasibility format

Source: Problem 1 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981

\[\begin{aligned} \min \quad & 0 \\ \text{s. to} \quad & x_1 = 1 \\ & 10 (x_2 - x_1^2) = 0. \end{aligned}\]

Starting point: [-1.2; 1].

NLPModelsTest.NLSHS20Type
nls = NLSHS20()

Problem 20 in the Hock-Schittkowski suite in nonlinear least squares format

\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_1 + x_2^2 \geq 0 \\ & x_1^2 + x_2 \geq 0 \\ & x_1^2 + x_2^2 -1 \geq 0 \\ & -0.5 \leq x_1 \leq 0.5 \end{aligned}\]

where

\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]

Starting point: [-2; 1].

NLPModelsTest.NLSLCType
nls = NLSLC()

Linearly constrained nonlinear least squares problem

\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_{15} = 0 \\ & x_{10} + 2x_{11} + 3x_{12} \geq 1 \\ & x_{13} - x_{14} \leq 16 \\ & -11 \leq 5x_8 - 6x_9 \leq 9 \\ & -2x_7 = -1 \\ & 4x_6 = 1 \\ & x_1 + 2x_2 \geq -5 \\ & 3x_1 + 4x_2 \geq -6 \\ & 9x_3 \leq 1 \\ & 12x_4 \leq 2 \\ & 15x_5 \leq 3 \end{aligned}\]

where

\[F(x) = \begin{bmatrix} x_1^2 - 1 \\ x_2^2 - 2^2 \\ \vdots \\ x_{15}^2 - 15^2 \end{bmatrix}\]

Starting point: zeros(15).

NLPModelsTest.check_nlp_dimensionsMethod
check_nlp_dimensions(nlp; exclude = [ghjvprod])

Make sure the NLP API functions throw a DimensionError when their inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]
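
For example, a model whose API methods are guarded with @lencheck (from NLPModels.jl) passes this test; a sketch in which MyModel and its grad! method are hypothetical placeholders for your own implementation:

using NLPModels, NLPModelsTest

nlp = HS6()
check_nlp_dimensions(nlp)  # errors at the first API call that accepts a mis-sized input

# Inside your own model, the guard looks like:
# function NLPModels.grad!(nlp::MyModel, x::AbstractVector, g::AbstractVector)
#   @lencheck nlp.meta.nvar x g   # x and g must both have length nvar
#   # ... fill g in place ...
# end
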
NLPModelsTest.check_nls_dimensionsMethod
check_nls_dimensions(nlp; exclude = [])

Make sure the NLS API functions throw a DimensionError when their inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]
NLPModelsTest.consistent_nlpsMethod
consistent_nlps(nlps; exclude=[], rtol=1e-8)

Check that all the nlps in the vector nlps are consistent, in the sense that

  • Their counters are the same.
  • Their meta information is the same.
  • The API functions return the same output given the same input.

In other words, if you create two models of the same problem, they should be consistent.

The keyword exclude can be used to pass functions to be ignored when some of the models do not implement them.

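For example (a sketch; MyHS6 is a hypothetical second implementation of the same problem as HS6):

using NLPModelsTest

nlps = [HS6(), MyHS6()]
consistent_nlps(nlps)   # compares counters, meta, and API outputs across the two models
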
NLPModelsTest.consistent_nlssMethod
consistent_nlss(nlps; exclude=[hess, hprod, hess_coord])

Check that all the nlss in the vector nlss are consistent, in the sense that

  • Their counters are the same.
  • Their meta information is the same.
  • The API functions return the same output given the same input.

In other words, if you create two models of the same problem, they should be consistent.

By default, the functions hess, hprod and hess_coord (and their associated functions) are excluded from this check, since some models don't implement them.

NLPModelsTest.coord_memory_nlpMethod
coord_memory_nlp(nlp; exclude = [])

Check that the memory allocated by the in-place coord methods is sufficiently smaller than that of their allocating counterparts.

NLPModelsTest.gradient_checkMethod
gradient_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)

Check the first derivatives of the objective at x against centered finite differences.

This function returns a dictionary indexed by components of the gradient for which the relative error exceeds rtol.

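On a model with correct first derivatives the returned dictionary is empty; a sketch:

using NLPModelsTest

nlp = HS6()
bad = gradient_check(nlp)   # Dict: component index => relative error above rtol
isempty(bad) || @warn "suspect gradient components" bad
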
NLPModelsTest.hessian_checkMethod
hessian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)

Check the second derivatives of the objective and each constraint at x against centered finite differences. This check does not rely on exactness of the first derivatives, only on objective and constraint values.

The sgn argument refers to the formulation of the Lagrangian in the problem. It should have a positive value if the Lagrangian is formulated as

\[L(x,y) = f(x) + \sum_j y_j c_j(x),\]

and a negative value if the Lagrangian is formulated as

\[L(x,y) = f(x) - \sum_j y_j c_j(x).\]

Only the sign of sgn is important.

This function returns a dictionary indexed by functions. The 0-th function is the objective while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.

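A sketch of inspecting the returned nested dictionary:

using NLPModelsTest

nlp = HS11()
errs = hessian_check(nlp)   # Dict: function index k => Dict((i, j) => relative error)
all(isempty, values(errs)) || @warn "suspect second derivatives" errs
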
NLPModelsTest.hessian_check_from_gradMethod
hessian_check_from_grad(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)

Check the second derivatives of the objective and each constraint at x against centered finite differences. This check assumes exactness of the first derivatives.

The sgn argument refers to the formulation of the Lagrangian in the problem. It should have a positive value if the Lagrangian is formulated as

\[L(x,y) = f(x) + \sum_j y_j c_j(x),\]

and a negative value if the Lagrangian is formulated as

\[L(x,y) = f(x) - \sum_j y_j c_j(x).\]

Only the sign of sgn is important.

This function returns a dictionary indexed by functions. The 0-th function is the objective while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.

NLPModelsTest.jacobian_checkMethod
jacobian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)

Check the first derivatives of the constraints at x against centered finite differences.

This function returns a dictionary indexed by (j, i) tuples such that the relative error in the i-th partial derivative of the j-th constraint exceeds rtol.

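For example, a sketch:

using NLPModelsTest

nlp = HS10()
bad = jacobian_check(nlp)   # Dict: (j, i) => relative error above rtol
isempty(bad) || @warn "suspect Jacobian entries" bad
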
NLPModelsTest.multiple_precision_nlpMethod
multiple_precision_nlp(nlp_from_T; precisions=[...], exclude = [ghjvprod])

Check that the NLP API functions return outputs of the same type as their inputs. In other words, make sure that the model handles multiple precisions.

The input nlp_from_T is a function that returns an nlp from a type T. The array precisions lists the floating-point types to test; it defaults to [Float16, Float32, Float64, BigFloat].

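A sketch, assuming (as for the test problems in this package) that the problem constructor accepts an element type, e.g. HS6(Float32):

using NLPModelsTest

multiple_precision_nlp(T -> HS6(T))   # tests Float16, Float32, Float64, BigFloat
multiple_precision_nlp(T -> HS6(T), precisions = [Float32, Float64])
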
NLPModelsTest.multiple_precision_nlp_arrayMethod
multiple_precision_nlp_array(nlp_from_T, ::Type{S}; precisions=[Float16, Float32, Float64])

Check that the NLP API functions return outputs of the same type as their inputs. It calls multiple_precision_nlp on the problem T -> nlp_from_T(S(T)).

The array precisions lists the floating-point types to test. Note that BigFloat is not tested by default, because it is not supported by CuArray.

NLPModelsTest.multiple_precision_nlsMethod
multiple_precision_nls(nls_from_T; precisions=[...], exclude = [])

Check that the NLS API functions return outputs of the same type as their inputs. In other words, make sure that the model handles multiple precisions.

The input nls_from_T is a function that returns an nls from a type T. The array precisions lists the floating-point types to test; it defaults to [Float16, Float32, Float64, BigFloat].

NLPModelsTest.multiple_precision_nls_arrayMethod
multiple_precision_nls_array(nlp_from_T, ::Type{S}; precisions=[Float16, Float32, Float64])

Check that the NLS API functions return outputs of the same type as their inputs. It calls multiple_precision_nls on the problem T -> nlp_from_T(S(T)).

The array precisions lists the floating-point types to test. Note that BigFloat is not tested by default, because it is not supported by CuArray.

NLPModelsTest.print_nlp_allocationsMethod
print_nlp_allocations([io::IO = stdout], nlp::AbstractNLPModel, table::Dict; only_nonzeros::Bool = false)
print_nlp_allocations([io::IO = stdout], nlp::AbstractNLPModel; kwargs...)

Print the result of test_allocs_nlpmodels(nlp) in a convenient way.

The keyword arguments may contain:

  • only_nonzeros::Bool: show only non-zero allocations if true.
  • linear_api::Bool: also check the functions specific to linear and nonlinear constraints; see test_allocs_nlpmodels.
  • exclude: an Array of Function to be excluded from the tests; see test_allocs_nlpmodels.
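
A sketch of typical use:

using NLPModelsTest

nlp = HS6()
print_nlp_allocations(nlp; only_nonzeros = true)   # runs test_allocs_nlpmodels and prints a summary
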
NLPModelsTest.test_allocs_nlpmodelsMethod
test_allocs_nlpmodels(nlp::AbstractNLPModel; linear_api = false, exclude = [])

Returns a Dict containing the allocations of the in-place functions of the NLPModel API.

The keyword exclude takes an Array of Function to be excluded from the tests. Use hess (resp. jac) to exclude hess_coord and hess_structure (resp. jac_coord and jac_structure). Setting linear_api to true also checks the functions specific to linear and nonlinear constraints.

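For example, a sketch:

using NLPModelsTest

nlp = HS14()
table = test_allocs_nlpmodels(nlp; linear_api = true)   # Dict: function => allocated bytes
filter(p -> p.second > 0, table)                        # keep only the allocating functions
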
NLPModelsTest.test_allocs_nlsmodelsMethod
test_allocs_nlsmodels(nlp::AbstractNLSModel; exclude = [])

Returns a Dict containing the allocations of the in-place functions of the NLPModel API specialized to nonlinear least squares.

The keyword exclude takes an Array of Function to be excluded from the tests. Use hess_residual (resp. jac_residual) to exclude hess_coord_residual and hess_structure_residual (resp. jac_coord_residual and jac_structure_residual). The Hessian-vector product is tested for every component of the residual function, so exclude hprod_residual and hess_op_residual if you want to avoid this.

NLPModelsTest.test_obj_grad!Method
test_obj_grad!(nlp_allocations, nlp::AbstractNLPModel, exclude)

Update nlp_allocations with allocations of the in-place obj and grad functions.

For an AbstractNLSModel, this uses obj and grad with a pre-allocated residual.

NLPModelsTest.test_zero_allocationsMethod
test_zero_allocations(table::Dict, name::String = "Generic")
test_zero_allocations(nlp::AbstractNLPModel; kwargs...)

Test whether the results of test_allocs_nlpmodels(nlp) and test_allocs_nlsmodels(nlp) are all zero.

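For example, in a test suite, a sketch:

using NLPModelsTest

nlp = HS6()
test_zero_allocations(nlp)   # @test-based check that the in-place API does not allocate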