NLPModelsTest.jl documentation

This package provides testing functions for packages implementing optimization models using the NLPModels API.

Usage

This package exports commonly used problems and functions to test optimization models implementing the NLPModels API. The package currently provides the following tests:

  • Consistency: Given two or more models of the same problem, do they behave in the same way?
  • Multiple precision: Given a model in some floating-point type, do the API functions return outputs of that same type?
  • Input dimension check: Do the functions of this model correctly check the input dimensions, and throw the appropriate error otherwise?
  • View subarray support: Check that your model accepts @view subarrays.
  • Coord memory: (incomplete) Check that the in-place versions of coord functions don't allocate too much memory.

The TL;DR section shows an example using these functions.

Consistency

Two functions are given, one for NLP problems and another for NLS problems:

NLPModelsTest.consistent_nlps — Function
consistent_nlps(nlps; exclude=[], rtol=1e-8)

Check that all the nlps in the vector nlps are consistent, in the sense that

  • Their counters are the same.
  • Their meta information is the same.
  • The API functions return the same output given the same input.

In other words, if you create two models of the same problem, they should be consistent.

The keyword exclude can be used to pass functions to be ignored, if some of the models don't implement that function.

NLPModelsTest.consistent_nlss — Function
consistent_nlss(nlss; exclude=[hess, hprod, hess_coord])

Check that all the nlss in the vector nlss are consistent, in the sense that

  • Their counters are the same.
  • Their meta information is the same.
  • The API functions return the same output given the same input.

In other words, if you create two models of the same problem, they should be consistent.

By default, the functions hess, hprod and hess_coord (and therefore associated functions) are excluded from this check, since some models don't implement them.


To use them, implement one or more of these Problems, and call these functions on an array containing both the model you created and the reference model provided here.
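A minimal sketch of such a consistency check, assuming you have written a model `MyHS5` of problem HS5 (the name is hypothetical); `HS5` is the reference implementation shipped with this package:

```julia
using NLPModelsTest

# Compare a user-defined model against the reference implementation.
# `MyHS5` is a placeholder for your own AbstractNLPModel subtype.
nlps = [MyHS5(), HS5()]

# Checks that counters, meta information, and API outputs all agree.
consistent_nlps(nlps)
```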

Multiple precision

Two functions are given, one for NLP problems and another for NLS problems:

NLPModelsTest.multiple_precision_nlp — Function
multiple_precision_nlp(nlp_from_T; precisions=[...], exclude = [ghjvprod])

Check that the NLP API functions return outputs of the same type as the input. In other words, make sure that the model handles multiple precisions.

The input nlp_from_T is a function that returns an nlp from a type T. The array precisions contains the floating-point types to be tested; it defaults to [Float16, Float32, Float64, BigFloat].

NLPModelsTest.multiple_precision_nls — Function
multiple_precision_nls(nls_from_T; precisions=[...], exclude = [])

Check that the NLS API functions return outputs of the same type as the input. In other words, make sure that the model handles multiple precisions.

The input nls_from_T is a function that returns an nls from a type T. The array precisions contains the floating-point types to be tested; it defaults to [Float16, Float32, Float64, BigFloat].


To use these functions, simply pass them a function that builds your model from a floating-point type T.
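For example, the problems bundled with this package accept a floating-point type in their constructor (e.g. `HS5(Float32)`), so a sketch of the call could be:

```julia
using NLPModelsTest

# Pass a function that builds the model from a type T.
multiple_precision_nlp(T -> HS5(T))

# For a nonlinear least-squares problem, e.g. the bundled LLS:
multiple_precision_nls(T -> LLS(T))
```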

Check dimensions

Two functions are given, one for NLP problems and another for NLS problems:

NLPModelsTest.check_nlp_dimensions — Function
check_nlp_dimensions(nlp; exclude = [ghjvprod])

Make sure the NLP API functions throw a DimensionError if the inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]
NLPModelsTest.check_nls_dimensions — Function
check_nls_dimensions(nlp; exclude = [])

Make sure the NLS API functions throw a DimensionError if the inputs do not have the correct dimensions. To make this assertion in your own code, use

@lencheck size input [more inputs separated by spaces]

To use these functions, simply call them on your model.
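For instance, running the check on the bundled HS5 problem, and illustrating the @lencheck assertion from NLPModels (the vector `x` below is only an illustration):

```julia
using NLPModels, NLPModelsTest

# Run the dimension checks on a model (here the bundled HS5 problem).
nlp = HS5()
check_nlp_dimensions(nlp)

# @lencheck is the assertion used inside API methods: it throws a
# DimensionError when a vector does not have the expected length.
x = zeros(nlp.meta.nvar)
@lencheck nlp.meta.nvar x   # passes silently; a wrong length would throw
```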

View subarray support

Two functions are given, one for NLP problems and another for NLS problems:

To use these functions, simply call them on your model.
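At the time of writing, these functions are named `view_subarray_nlp` and `view_subarray_nls`; a sketch using the bundled problems:

```julia
using NLPModelsTest

view_subarray_nlp(HS5())   # NLP version
view_subarray_nls(LLS())   # NLS version
```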

Coordinate functions memory usage

Disclaimer: This function is incomplete.

NLPModelsTest.coord_memory_nlp — Function
coord_memory_nlp(nlp; exclude = [])

Check that the memory allocated by the in-place coord methods is sufficiently smaller than that of their allocating counterparts.


Derivative Checker

Inside the consistency check, the following functions are used to check whether the derivatives are correct. You can also use them manually.

NLPModelsTest.gradient_check — Function
gradient_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)

Check the first derivatives of the objective at x against centered finite differences.

This function returns a dictionary indexed by components of the gradient for which the relative error exceeds rtol.

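A sketch of a manual call on the bundled HS5 problem; an empty dictionary means every gradient component agrees with finite differences within rtol:

```julia
using NLPModelsTest

nlp = HS5()
errs = gradient_check(nlp)   # Dict: component => relative error, when too large
isempty(errs) || @warn "suspicious gradient components" errs
```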
NLPModelsTest.jacobian_check — Function
jacobian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4)

Check the first derivatives of the constraints at x against centered finite differences.

This function returns a dictionary indexed by (j, i) tuples such that the relative error in the i-th partial derivative of the j-th constraint exceeds rtol.

NLPModelsTest.hessian_check_from_grad — Function
hessian_check_from_grad(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)

Check the second derivatives of the objective and of each constraint at x against centered finite differences. This check assumes exactness of the first derivatives.

The sgn argument refers to the formulation of the Lagrangian of the problem. It should have a positive value if the Lagrangian is formulated as

\[L(x,y) = f(x) + \sum_j yⱼ cⱼ(x),\]

and a negative value if the Lagrangian is formulated as

\[L(x,y) = f(x) - \sum_j yⱼ cⱼ(x).\]

Only the sign of sgn is important.

This function returns a dictionary indexed by functions. The 0-th function is the objective while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.

NLPModelsTest.hessian_check — Function
hessian_check(nlp; x=nlp.meta.x0, atol=1e-6, rtol=1e-4, sgn=1)

Check the second derivatives of the objective and of each constraint at x against centered finite differences. This check does not rely on exactness of the first derivatives; it uses only objective and constraint values.

The sgn argument refers to the formulation of the Lagrangian of the problem. It should have a positive value if the Lagrangian is formulated as

\[L(x,y) = f(x) + \sum_j yⱼ cⱼ(x),\]

and a negative value if the Lagrangian is formulated as

\[L(x,y) = f(x) - \sum_j yⱼ cⱼ(x).\]

Only the sign of sgn is important.

This function returns a dictionary indexed by functions. The 0-th function is the objective while the k-th function (for k > 0) is the k-th constraint. The values of the dictionary are dictionaries indexed by tuples (i, j) such that the relative error in the second derivative ∂²fₖ/∂xᵢ∂xⱼ exceeds rtol.

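A sketch of a manual second-derivative check on the bundled HS5 problem, using the default sgn = 1 (the plus-sign Lagrangian formulation above):

```julia
using NLPModelsTest

nlp = HS5()
errs = hessian_check(nlp; sgn = 1)
# errs[0] covers the objective; errs[k] the k-th constraint (k > 0).
all(isempty, values(errs)) || @warn "suspicious second derivatives" errs
```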

Allocation tracking

This package has functions to track allocations of an NLPModel. You can check the tutorial Test allocations of NLPModels on our site, juliasmoothoptimizers.github.io.

TL;DR

TODO after CUTEst.jl and NLPModelsJuMP are updated.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you have a question that is not suited for a bug report, feel free to start a discussion here. This forum covers this repository and the whole JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents