SolverTest
SolverTest is a package to test JSO-compliant solvers, both for general optimization problems and for nonlinear least-squares problems. It should be added (via pkg> add) to the [extras] section of your package's Project.toml and to the test target in [targets].
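For instance, a solver package's Project.toml might declare SolverTest as a test-only dependency as in the sketch below (the SolverTest UUID shown is a placeholder, not the real one; look it up in the registry):

```toml
[extras]
SolverTest = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["SolverTest", "Test"]
```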
The following functions are available:
SolverTest.unconstrained_nlp — Function

    unconstrained_nlp(solver; problem_set = unconstrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on unconstrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
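A typical use in a solver package's test suite might look like the following sketch; it assumes the trunk solver from JSOSolvers (any function taking an NLPModel and returning the usual execution stats should work):

```julia
using SolverTest
using JSOSolvers  # assumption: provides the trunk solver

# Run trunk over the default unconstrained problem set, checking
# the final gradient against absolute and relative tolerances.
unconstrained_nlp(nlp -> trunk(nlp, atol = 1e-8, rtol = 0.0); atol = 1e-6, rtol = 1e-6)
```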
SolverTest.bound_constrained_nlp — Function

    bound_constrained_nlp(solver; problem_set = bound_constrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on bound-constrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
SolverTest.equality_constrained_nlp — Function

    equality_constrained_nlp(solver; problem_set = equality_constrained_nlp_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on equality-constrained problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
SolverTest.unconstrained_nls — Function

    unconstrained_nls(solver; problem_set = unconstrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on unconstrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
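The least-squares variants are called the same way, with a solver that accepts nonlinear least-squares models. A sketch, again assuming trunk from JSOSolvers (which also handles NLS models):

```julia
using SolverTest
using JSOSolvers  # assumption: trunk accepts nonlinear least-squares models

# Run trunk over the default unconstrained least-squares test set.
unconstrained_nls(nls -> trunk(nls, atol = 1e-8, rtol = 0.0); atol = 1e-6, rtol = 1e-6)
```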
SolverTest.bound_constrained_nls — Function

    bound_constrained_nls(solver; problem_set = bound_constrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on bound-constrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
SolverTest.equality_constrained_nls — Function

    equality_constrained_nls(solver; problem_set = equality_constrained_nls_set(), atol = 1e-6, rtol = 1e-6)

Test the solver on equality-constrained nonlinear least-squares problems. If rtol is non-zero, the relative error uses the gradient at the initial guess.
SolverTest.multiprecision_nlp — Function

    multiprecision_nlp(solver, problem_type; precisions = [Float16, …, BigFloat])

Test that solver solves a problem of type problem_type in various precisions. The problem_type can be one of:
- :unc
- :bnd
- :equ
- :ineq
- :eqnbnd
- :gen
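A minimal sketch of a multiprecision check, assuming the trunk solver from JSOSolvers:

```julia
using SolverTest
using JSOSolvers  # assumption: provides the trunk solver

# Verify that trunk solves an unconstrained problem in Float32 and
# Float64; the test adapts its tolerances to each working precision.
multiprecision_nlp(trunk, :unc; precisions = [Float32, Float64])
```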
SolverTest.multiprecision_nls — Function

    multiprecision_nls(solver, problem_type; precisions = [Float16, …, BigFloat])

Test that solver solves a problem of type problem_type in various precisions. The problem_type can be one of:
- :unc
- :bnd
- :equ
- :ineq
- :eqnbnd
- :gen
Auxiliary functions
SolverTest.kkt_checker — Function

    kkt_checker(nlp, sol; kwargs...)

Given an NLPModel nlp and a vector sol, returns the KKT residual of the optimization problem as a tuple (primal, dual). In particular, it uses ripqp to solve the following quadratic optimization problem with linear constraints:

    min_{d}  ∇f(sol)ᵀ d + ½ ‖d‖²
    s.t.     lvar ≤ sol + d ≤ uvar
             lcon ≤ c(sol) + ∇c(sol) d ≤ ucon

The solution of this problem is the gradient of the Lagrangian of nlp at sol, thanks to the ½ ‖d‖² term in the objective. Keyword arguments are passed to RipQP.
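As a sketch of how the residuals might be inspected, assuming ADNLPModels is available to build a small test model:

```julia
using SolverTest
using ADNLPModels  # assumption: used only to build a toy model

# Rosenbrock problem, whose unconstrained minimizer is (1, 1).
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

# At the minimizer, both KKT residuals should be (close to) zero.
primal, dual = kkt_checker(nlp, [1.0; 1.0])
```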