Reference
Index
NLPModelsTest.BNDROSENBROCK
NLPModelsTest.BROWNDEN
NLPModelsTest.HS10
NLPModelsTest.HS11
NLPModelsTest.HS13
NLPModelsTest.HS14
NLPModelsTest.HS5
NLPModelsTest.HS6
NLPModelsTest.LINCON
NLPModelsTest.LINSV
NLPModelsTest.LLS
NLPModelsTest.MGH01
NLPModelsTest.MGH01Feas
NLPModelsTest.NLSHS20
NLPModelsTest.NLSLC
NLPModelsTest.check_nlp_dimensions
NLPModelsTest.check_nls_dimensions
NLPModelsTest.consistent_nlps
NLPModelsTest.consistent_nlss
NLPModelsTest.coord_memory_nlp
NLPModelsTest.gradient_check
NLPModelsTest.hessian_check
NLPModelsTest.hessian_check_from_grad
NLPModelsTest.jacobian_check
NLPModelsTest.multiple_precision_nlp
NLPModelsTest.multiple_precision_nlp_array
NLPModelsTest.multiple_precision_nls
NLPModelsTest.multiple_precision_nls_array
NLPModelsTest.print_nlp_allocations
NLPModelsTest.test_allocs_nlpmodels
NLPModelsTest.test_allocs_nlsmodels
NLPModelsTest.test_obj_grad!
NLPModelsTest.test_zero_allocations
NLPModelsTest.view_subarray_nlp
NLPModelsTest.view_subarray_nls
NLPModelsTest.BNDROSENBROCK — Type

nls = BNDROSENBROCK()
Rosenbrock function in nonlinear least squares format with bound constraints.
\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & -1 \leq x_1 \leq 0.8 \\ & -2 \leq x_2 \leq 2 \end{aligned}\]
where
\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]
Starting point: [-1.2; 1].
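For illustration, a minimal sketch of evaluating this model through the standard NLPModels API (residual, obj, and the meta fields belong to NLPModels, not to this package):

```julia
using NLPModels, NLPModelsTest

nls = BNDROSENBROCK()
x0 = nls.meta.x0                   # starting point [-1.2; 1.0]
Fx = residual(nls, x0)             # F(x0)
fx = obj(nls, x0)                  # ½‖F(x0)‖²
@show nls.meta.lvar nls.meta.uvar  # the bounds -1 ≤ x₁ ≤ 0.8 and -2 ≤ x₂ ≤ 2
```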
NLPModelsTest.BROWNDEN — Type

nlp = BROWNDEN()
Brown and Dennis function.
Source: Problem 16 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981
classification SUR2-AN-4-0
\[\min_x \ \sum_{i=1}^{20} \left(\left(x_1 + \tfrac{i}{5} x_2 - e^{i / 5}\right)^2 + \left(x_3 + \sin(\tfrac{i}{5}) x_4 - \cos(\tfrac{i}{5})\right)^2\right)^2\]
Starting point: [25.0; 5.0; -5.0; -1.0]
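A minimal usage sketch through the NLPModels API (obj, grad, and hess are NLPModels functions; nothing here is specific to the test set):

```julia
using NLPModels, NLPModelsTest

nlp = BROWNDEN()
x0 = nlp.meta.x0    # [25.0; 5.0; -5.0; -1.0]
fx = obj(nlp, x0)   # objective value at the starting point
gx = grad(nlp, x0)  # gradient, a vector of length 4
Hx = hess(nlp, x0)  # Hessian at x0
```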
NLPModelsTest.HS10 — Type

nlp = HS10()
Problem 10 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & x_1 - x_2 \\ \text{s. to} \quad & -3x_1^2 + 2x_1 x_2 - x_2^2 + 1 \geq 0 \end{aligned}\]
Starting point: [-10; 10].
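Constrained problems expose their constraints through cons and jac from NLPModels. A sketch (the comment on lcon/ucon reflects the usual NLPModels encoding of inequalities):

```julia
using NLPModels, NLPModelsTest

nlp = HS10()
x0 = nlp.meta.x0                   # [-10.0; 10.0]
cx = cons(nlp, x0)                 # value of the single constraint
Jx = jac(nlp, x0)                  # 1×2 constraint Jacobian
@show nlp.meta.lcon nlp.meta.ucon  # c(x) ≥ 0 becomes lcon = [0], ucon = [Inf]
```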
NLPModelsTest.HS11 — Type

nlp = HS11()
Problem 11 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & (x_1 - 5)^2 + x_2^2 - 25 \\ \text{s. to} \quad & 0 \leq -x_1^2 + x_2 \end{aligned}\]
Starting point: [-4.9; 0.1].
NLPModelsTest.HS13 — Type

nlp = HS13()
Problem 13 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & (x_1 - 2)^2 + x_2^2 \\ \text{s. to} \quad & (1 - x_1)^3 - x_2 \geq 0 \\ & 0 \leq x_1 \\ & 0 \leq x_2 \end{aligned}\]
Starting point: [-2; -2].
NLPModelsTest.HS14 — Type

nlp = HS14()
Problem 14 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & (x_1 - 2)^2 + (x_2 - 1)^2 \\ \text{s. to} \quad & x_1 - 2x_2 = -1 \\ & -\tfrac{1}{4} x_1^2 - x_2^2 + 1 \geq 0 \end{aligned}\]
Starting point: [2; 2].
NLPModelsTest.HS5 — Type

nlp = HS5()
Problem 5 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & \sin(x_1 + x_2) + (x_1 - x_2)^2 - \tfrac{3}{2}x_1 + \tfrac{5}{2}x_2 + 1 \\ \text{s. to} \quad & -1.5 \leq x_1 \leq 4 \\ & -3 \leq x_2 \leq 3 \end{aligned}\]
Starting point: [0.0; 0.0].
NLPModelsTest.HS6 — Type

nlp = HS6()
Problem 6 in the Hock-Schittkowski suite
\[\begin{aligned} \min \quad & (1 - x_1)^2 \\ \text{s. to} \quad & 10 (x_2 - x_1^2) = 0 \end{aligned}\]
Starting point: [-1.2; 1.0].
NLPModelsTest.LINCON — Type

nlp = LINCON()
Linearly constrained problem
\[\begin{aligned} \min \quad & \sum_{i=1}^{15} \left( i + x_i^4 \right) \\ \text{s. to} \quad & x_{15} = 0 \\ & x_{10} + 2x_{11} + 3x_{12} \geq 1 \\ & x_{13} - x_{14} \leq 16 \\ & -11 \leq 5x_8 - 6x_9 \leq 9 \\ & -2x_7 = -1 \\ & 4x_6 = 1 \\ & x_1 + 2x_2 \geq -5 \\ & 3x_1 + 4x_2 \geq -6 \\ & 9x_3 \leq 1 \\ & 12x_4 \leq 2 \\ & 15x_5 \leq 3 \end{aligned}\]
Starting point: zeros(15).
NLPModelsTest.LINSV — Type

nlp = LINSV()
Linear problem
\[\begin{aligned} \min \quad & x_1 \\ \text{s. to} \quad & x_1 + x_2 \geq 3 \\ & x_2 \geq 1 \end{aligned}\]
Starting point: [0; 0].
NLPModelsTest.LLS — Type

nls = LLS()
Linear least squares
\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_1 + x_2 \geq 0 \end{aligned}\]
where
\[F(x) = \begin{bmatrix} x_1 - x_2 \\ x_1 + x_2 - 2 \\ x_2 - 2 \end{bmatrix}.\]
Starting point: [0; 0].
NLPModelsTest.MGH01 — Type

nls = MGH01()
Rosenbrock function in nonlinear least squares format
Source: Problem 1 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981
\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \end{aligned}\]
where
\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]
Starting point: [-1.2; 1].
NLPModelsTest.MGH01Feas — Type

nlp = MGH01Feas()
Rosenbrock function in feasibility format
Source: Problem 1 in
J.J. Moré, B.S. Garbow and K.E. Hillstrom,
"Testing Unconstrained Optimization Software",
ACM Transactions on Mathematical Software, vol. 7(1), pp. 17-41, 1981
\[\begin{aligned} \min \quad & 0 \\ \text{s. to} \quad & x_1 = 1 \\ & 10 (x_2 - x_1^2) = 0. \end{aligned}\]
Starting point: [-1.2; 1].
NLPModelsTest.NLSHS20 — Type

nls = NLSHS20()
Problem 20 in the Hock-Schittkowski suite in nonlinear least squares format
\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_1 + x_2^2 \geq 0 \\ & x_1^2 + x_2 \geq 0 \\ & x_1^2 + x_2^2 -1 \geq 0 \\ & -0.5 \leq x_1 \leq 0.5 \end{aligned}\]
where
\[F(x) = \begin{bmatrix} 1 - x_1 \\ 10 (x_2 - x_1^2) \end{bmatrix}.\]
Starting point: [-2; 1].
NLPModelsTest.NLSLC — Type

nls = NLSLC()
Linearly constrained nonlinear least squares problem
\[\begin{aligned} \min \quad & \tfrac{1}{2}\| F(x) \|^2 \\ \text{s. to} \quad & x_{15} = 0 \\ & x_{10} + 2x_{11} + 3x_{12} \geq 1 \\ & x_{13} - x_{14} \leq 16 \\ & -11 \leq 5x_8 - 6x_9 \leq 9 \\ & -2x_7 = -1 \\ & 4x_6 = 1 \\ & x_1 + 2x_2 \geq -5 \\ & 3x_1 + 4x_2 \geq -6 \\ & 9x_3 \leq 1 \\ & 12x_4 \leq 2 \\ & 15x_5 \leq 3 \end{aligned}\]
where
\[F(x) = \begin{bmatrix} x_1^2 - 1 \\ x_2^2 - 2^2 \\ \vdots \\ x_{15}^2 - 15^2 \end{bmatrix}\]
Starting point: zeros(15).
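A sketch evaluating the residual and the linear constraints via the NLPModels API:

```julia
using NLPModels, NLPModelsTest

nls = NLSLC()
x  = ones(15)
Fx = residual(nls, x)  # component i is xᵢ² - i²
cx = cons(nls, x)      # values of the linear constraints above
```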
NLPModelsTest.check_nlp_dimensions — Method

check_nlp_dimensions(nlp; exclude = [ghjvprod])
Make sure that the NLP API functions throw a DimensionError when their inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]
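For example (my_grad! below is a hypothetical method, shown only to illustrate @lencheck, which is provided by NLPModels):

```julia
using NLPModels, NLPModelsTest

# Run the dimension checks on a test problem; ghjvprod is excluded by default.
nlp = HS10()
check_nlp_dimensions(nlp)

# Hypothetical in-place method guarded by @lencheck: a DimensionError is
# thrown if x or g does not have length nlp.meta.nvar.
function my_grad!(nlp, x::AbstractVector, g::AbstractVector)
  @lencheck nlp.meta.nvar x g
  return grad!(nlp, x, g)
end
```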
NLPModelsTest.check_nls_dimensions — Method

check_nls_dimensions(nlp; exclude = [])
Make sure that the NLS API functions throw a DimensionError when their inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]
NLPModelsTest.consistent_nlps — Method

consistent_nlps(nlps; exclude=[], rtol=1e-8)
Check that all the nlps in the vector nlps are consistent, in the sense that
- Their counters are the same.
- Their meta information is the same.
- The API functions return the same output given the same input.
In other words, if you create two models of the same problem, they should be consistent.
The keyword exclude can be used to pass functions to be ignored, if some of the models don't implement that function.
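A minimal sketch: two instances of the same test problem are trivially consistent, whereas in practice one would compare independent implementations of the same problem (say, a hand-coded model against an AD-based one).

```julia
using NLPModelsTest

nlps = [HS10(), HS10()]
consistent_nlps(nlps)  # runs @test comparisons across the whole API
```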
NLPModelsTest.consistent_nlss — Method

consistent_nlss(nlps; exclude=[hess, hprod, hess_coord])
Check that all the nlss in the vector nlss are consistent, in the sense that
- Their counters are the same.
- Their meta information is the same.
- The API functions return the same output given the same input.
In other words, if you create two models of the same problem, they should be consistent.
By default, the functions hess, hprod and hess_coord (and therefore the associated functions) are excluded from this check, since some models don't implement them.
NLPModelsTest.coord_memory_nlp — Method

coord_memory_nlp(nlp; exclude = [])
Check that the memory allocated by the in-place coord methods is sufficiently smaller than that of their allocating counterparts.
NLPModelsTest.gradient_check — Method

gradient_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)
Check the first derivatives of the objective at x against centered finite differences.
This function returns a dictionary indexed by the components of the gradient for which the relative error exceeds rtol.
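A sketch of a typical call; an empty dictionary means every gradient component agrees with the finite differences within rtol:

```julia
using NLPModelsTest

nlp = BROWNDEN()
bad = gradient_check(nlp)
isempty(bad) || @warn "gradient components with large relative error" bad
```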
NLPModelsTest.hessian_check — Method

hessian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)
Check the second derivatives of the objective and each constraint at x against centered finite differences. This check does not rely on exactness of the first derivatives, only on objective and constraint values.
The sgn argument refers to the formulation of the Lagrangian in the problem. It should have a positive value if the Lagrangian is formulated as
\[L(x,y) = f(x) + \sum_j yⱼ cⱼ(x),\]
and a negative value if the Lagrangian is formulated as
\[L(x,y) = f(x) - \sum_j yⱼ cⱼ(x).\]
Only the sign of sgn is important.
This function returns a dictionary indexed by functions. The 0-th function is the objective, while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.
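A sketch of a typical call, using the plus-sign Lagrangian convention:

```julia
using NLPModelsTest

nlp  = HS14()                       # one equality and one inequality constraint
errs = hessian_check(nlp; sgn = 1)  # L(x,y) = f(x) + Σⱼ yⱼ cⱼ(x)
all(isempty, values(errs)) && println("second derivatives agree with finite differences")
```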
NLPModelsTest.hessian_check_from_grad — Method

hessian_check_from_grad(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)
Check the second derivatives of the objective and each constraint at x against centered finite differences. This check assumes exactness of the first derivatives.
The sgn argument refers to the formulation of the Lagrangian in the problem. It should have a positive value if the Lagrangian is formulated as
\[L(x,y) = f(x) + \sum_j yⱼ cⱼ(x),\]
and a negative value if the Lagrangian is formulated as
\[L(x,y) = f(x) - \sum_j yⱼ cⱼ(x).\]
Only the sign of sgn is important.
This function returns a dictionary indexed by functions. The 0-th function is the objective, while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.
NLPModelsTest.jacobian_check — Method

jacobian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)
Check the first derivatives of the constraints at x against centered finite differences.
This function returns a dictionary indexed by tuples (j, i) such that the relative error in the i-th partial derivative of the j-th constraint exceeds rtol.
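A sketch of a typical call on a constrained test problem:

```julia
using NLPModelsTest

nlp = HS10()
bad = jacobian_check(nlp)  # (j, i) => relative error for suspect entries
isempty(bad) || @warn "suspicious Jacobian entries" bad
```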
NLPModelsTest.multiple_precision_nlp — Method

multiple_precision_nlp(nlp_from_T; precisions=[...], exclude = [ghjvprod])
Check that the output type of the NLP API functions matches the input type. In other words, make sure that the model handles multiple precisions.
The input nlp_from_T is a function that returns an nlp from a type T. The array precisions contains the floating-point types to be tested; it defaults to [Float16, Float32, Float64, BigFloat].
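A sketch, assuming (as for the test problems in this package) that the problem constructor accepts an element type, e.g. HS5(Float32):

```julia
using NLPModelsTest

multiple_precision_nlp(T -> HS5(T); precisions = [Float32, Float64])
```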
NLPModelsTest.multiple_precision_nlp_array — Method

multiple_precision_nlp_array(nlp_from_T, ::Type{S}; precisions=[Float16, Float32, Float64])
Check that the output type of the NLP API functions matches the input type. It calls multiple_precision_nlp on the problem generator T -> nlp_from_T(S(T)).
The array precisions contains the floating-point types to be tested. Note that BigFloat is not tested by default, because it is not supported by CuArray.
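A sketch with plain Vector standing in for a GPU array type such as CuArray; it assumes the problem constructor accepts a storage type, e.g. HS5(Vector{Float32}):

```julia
using NLPModelsTest

multiple_precision_nlp_array(S -> HS5(S), Vector; precisions = [Float32, Float64])
```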
NLPModelsTest.multiple_precision_nls — Method

multiple_precision_nls(nls_from_T; precisions=[...], exclude = [])
Check that the output type of the NLS API functions matches the input type. In other words, make sure that the model handles multiple precisions.
The input nls_from_T is a function that returns an nls from a type T. The array precisions contains the floating-point types to be tested; it defaults to [Float16, Float32, Float64, BigFloat].
NLPModelsTest.multiple_precision_nls_array — Method

multiple_precision_nls_array(nlp_from_T, ::Type{S}; precisions=[Float16, Float32, Float64])
Check that the output type of the NLS API functions matches the input type. It calls multiple_precision_nls on the problem generator T -> nlp_from_T(S(T)).
The array precisions contains the floating-point types to be tested. Note that BigFloat is not tested by default, because it is not supported by CuArray.
NLPModelsTest.print_nlp_allocations — Method

print_nlp_allocations([io::IO = stdout], nlp::AbstractNLPModel, table::Dict; only_nonzeros::Bool = false)
print_nlp_allocations([io::IO = stdout], nlp::AbstractNLPModel; kwargs...)
Print in a convenient way the result of test_allocs_nlpmodels(nlp).
The keyword arguments may contain:
- only_nonzeros::Bool: show only the nonzero allocations if true.
- linear_api::Bool: also check the functions specific to linear and nonlinear constraints, see test_allocs_nlpmodels.
- exclude: an Array of Function to be excluded from the tests, see test_allocs_nlpmodels.
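A sketch of a typical call:

```julia
using NLPModelsTest

nlp = HS6()
print_nlp_allocations(nlp; only_nonzeros = true)  # pretty-prints test_allocs_nlpmodels(nlp)
```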
NLPModelsTest.test_allocs_nlpmodels — Method

test_allocs_nlpmodels(nlp::AbstractNLPModel; linear_api = false, exclude = [])
Returns a Dict containing the allocations of the in-place functions of the NLPModel API.
The keyword exclude takes an Array of Function to be excluded from the tests. Use hess (resp. jac) to exclude hess_coord and hess_structure (resp. jac_coord and jac_structure). Setting linear_api to true will also check the functions specific to linear and nonlinear constraints.
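A sketch; hess here is the NLPModels function, and excluding it also skips the related coordinate functions as described above:

```julia
using NLPModels, NLPModelsTest

nlp = HS6()
table = test_allocs_nlpmodels(nlp; exclude = [hess])
for (fun, bytes) in table
  println(fun, " => ", bytes)  # allocations per in-place API call
end
```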
NLPModelsTest.test_allocs_nlsmodels — Method

test_allocs_nlsmodels(nlp::AbstractNLSModel; exclude = [])
Returns a Dict containing the allocations of the in-place functions of the NLPModel API that are specialized to nonlinear least squares.
The keyword exclude takes an Array of Function to be excluded from the tests. Use hess_residual (resp. jac_residual) to exclude hess_residual_coord and hess_residual_structure (resp. jac_residual_coord and jac_residual_structure). The Hessian-vector product is tested for all the components of the residual function, so exclude hprod_residual and hess_op_residual if you want to avoid this.
NLPModelsTest.test_obj_grad! — Method

test_obj_grad!(nlp_allocations, nlp::AbstractNLPModel, exclude)
Update nlp_allocations with the allocations of the in-place obj and grad functions.
For an AbstractNLSModel, this uses obj and grad with a pre-allocated residual.
NLPModelsTest.test_zero_allocations — Method

test_zero_allocations(table::Dict, name::String = "Generic")
test_zero_allocations(nlp::AbstractNLPModel; kwargs...)
Test whether the result of test_allocs_nlpmodels(nlp) and test_allocs_nlsmodels(nlp) is 0.
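A sketch of a typical call:

```julia
using NLPModelsTest

nlp = HS6()
test_zero_allocations(nlp)  # @test that every tracked in-place call allocates nothing
```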
NLPModelsTest.view_subarray_nlp — Method

view_subarray_nlp(nlp; exclude = [])
Check that the API works with views, and that the results are correct.
NLPModelsTest.view_subarray_nls — Method

view_subarray_nls(nls; exclude = [])
Check that the API works with views, and that the results are correct.
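Typical calls, sketched with test problems from this package:

```julia
using NLPModelsTest

view_subarray_nlp(HS10())  # exercises the NLP API with views into larger arrays
view_subarray_nls(LLS())   # same for the NLS API
```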