SolverTest

SolverTest is a package to test JSO-compliant solvers, both for general optimization problems and for nonlinear least-squares problems. To use it in a package's test suite, add it to the [extras] section and to the test target in [targets] of that package's Project.toml.
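Such a Project.toml layout could look like the following sketch (the UUID is elided here; pkg> add SolverTest records the actual one for you):

```toml
[extras]
SolverTest = "…"  # UUID elided; `pkg> add SolverTest` fills this in

[targets]
test = ["SolverTest"]
```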

The following functions are available:

SolverTest.unconstrained_nlp — Function
unconstrained_nlp(solver; problem_set = unconstrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on unconstrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
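A typical use passes the solver as a closure; for instance, with trunk from JSOSolvers (a sketch: JSOSolvers and its trunk solver are assumptions not stated above, and any JSO-compliant solver returning a GenericExecutionStats works):

```julia
using SolverTest, JSOSolvers, Test

@testset "trunk on unconstrained problems" begin
  # The first argument is any callable mapping an NLPModel
  # to a GenericExecutionStats.
  unconstrained_nlp(nlp -> trunk(nlp))
end
```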

SolverTest.bound_constrained_nlp — Function
bound_constrained_nlp(solver; problem_set = bound_constrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on bound-constrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.

SolverTest.equality_constrained_nlp — Function
equality_constrained_nlp(solver; problem_set = equality_constrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on equality-constrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.

SolverTest.unconstrained_nls — Function
unconstrained_nls(solver; problem_set = unconstrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on unconstrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.

SolverTest.bound_constrained_nls — Function
bound_constrained_nls(solver; problem_set = bound_constrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on bound-constrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.

SolverTest.equality_constrained_nls — Function
equality_constrained_nls(solver; problem_set = equality_constrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on equality-constrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.

SolverTest.multiprecision_nlp — Function
multiprecision_nlp(solver, problem_type; precisions=[Float16, …, BigFloat])

Test that solver solves a problem of type problem_type in various floating-point precisions. The problem_type can be one of:

  • :unc (unconstrained)
  • :bnd (bound-constrained)
  • :equ (equality-constrained)
  • :ineq (inequality-constrained)
  • :eqnbnd (equality- and bound-constrained)
  • :gen (generally constrained)
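For example, the multiprecision check on the unconstrained problem type could look like this sketch (again assuming JSOSolvers' trunk; restricting precisions sidesteps Float16 or BigFloat limitations a given solver may have):

```julia
using SolverTest, JSOSolvers

# Run trunk on an unconstrained test problem in each requested precision.
multiprecision_nlp(nlp -> trunk(nlp), :unc; precisions = [Float32, Float64])
```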
SolverTest.multiprecision_nls — Function
multiprecision_nls(solver, problem_type; precisions=[Float16, …, BigFloat])

Test that solver solves a nonlinear least-squares problem of type problem_type in various floating-point precisions. The problem_type can be one of:

  • :unc (unconstrained)
  • :bnd (bound-constrained)
  • :equ (equality-constrained)
  • :ineq (inequality-constrained)
  • :eqnbnd (equality- and bound-constrained)
  • :gen (generally constrained)

Auxiliary functions

SolverTest.kkt_checker — Function
kkt_checker(nlp, sol; kwargs...)

Given an NLPModels model nlp and a vector sol, return the KKT residual of the optimization problem as a tuple (primal, dual). Internally, RipQP is used to solve the following quadratic optimization problem with linear constraints:

min_{d}  ∇f(sol)ᵀd + ½‖d‖²
s.t.     lvar ≤ sol + d ≤ uvar
         lcon ≤ c(sol) + ∇c(sol)d ≤ ucon

The solution of this problem is the gradient of the Lagrangian of the nlp at sol thanks to the ½ ‖d‖² term in the objective.

Keyword arguments are passed to RipQP.
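A minimal sketch of calling kkt_checker, using ADNLPModels to build the model (ADNLPModels is an assumption not stated above; any NLPModel works):

```julia
using SolverTest, ADNLPModels

# Rosenbrock problem; its unconstrained minimizer is [1.0, 1.0].
nlp = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

primal, dual = kkt_checker(nlp, [1.0; 1.0])
# At the true solution, both residuals should be close to zero.
```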
