Run a benchmark with OptimizationProblems.jl

In this more advanced tutorial, we use the problems from OptimizationProblems to run a benchmark for unconstrained problems. The tutorial will use:

  • JSOSolvers: This package provides optimization solvers in pure Julia for unconstrained and bound-constrained optimization.
  • NLPModelsJuMP: This package converts JuMP models to the NLPModel format.
  • SolverBenchmark: This package provides general tools for benchmarking solvers.
using JSOSolvers, NLPModels, NLPModelsJuMP, OptimizationProblems, SolverBenchmark
using OptimizationProblems.PureJuMP

We select the problems from the PureJuMP submodule of OptimizationProblems and convert them to NLPModels using NLPModelsJuMP.

problems = (MathOptNLPModel(eval(Meta.parse(problem))(), name=problem) for problem ∈ OptimizationProblems.meta[!, :name])
Base.Generator{Vector{String}, Main.var"#1#2"}(Main.var"#1#2"(), ["AMPGO02", "AMPGO03", "AMPGO04", "AMPGO05", "AMPGO06", "AMPGO07", "AMPGO08", "AMPGO09", "AMPGO10", "AMPGO11"  …  "triangle", "triangle_deer", "triangle_pacman", "triangle_turtle", "tridia", "vardim", "vibrbeam", "watson", "woods", "zangwil3"])

The same can be achieved using OptimizationProblems.ADNLPProblems instead of OptimizationProblems.PureJuMP as follows:

using ADNLPModels
using OptimizationProblems.ADNLPProblems
ad_problems = (eval(Meta.parse(problem))() for problem ∈ OptimizationProblems.meta[!, :name])
Base.Generator{Vector{String}, Main.var"#3#4"}(Main.var"#3#4"(), ["AMPGO02", "AMPGO03", "AMPGO04", "AMPGO05", "AMPGO06", "AMPGO07", "AMPGO08", "AMPGO09", "AMPGO10", "AMPGO11"  …  "triangle", "triangle_deer", "triangle_pacman", "triangle_turtle", "tridia", "vardim", "vibrbeam", "watson", "woods", "zangwil3"])

We also define a dictionary of solvers that will be used for our benchmark. We consider here JSOSolvers.lbfgs and JSOSolvers.trunk.

solvers = Dict(
  :lbfgs => model -> lbfgs(model, mem=5, atol=1e-5, rtol=0.0),
  :trunk => model -> trunk(model, atol=1e-5, rtol=0.0),
)
Dict{Symbol, Function} with 2 entries:
  :trunk => #6
  :lbfgs => #5

The function SolverBenchmark.bmark_solvers will run all the problems with the specified solvers and store the results in a DataFrame. At this stage, we discard the problems that have constraints or bounds using !unconstrained(prob), and those that are too large or too small with get_nvar(prob) > 100 || get_nvar(prob) < 5.

stats = bmark_solvers(
  solvers, problems,
  skipif=prob -> (!unconstrained(prob) || get_nvar(prob) > 100 || get_nvar(prob) < 5),
)

We can explore the results solver by solver in stats[:lbfgs] and stats[:trunk], or get an overview with performance profiles using SolverBenchmark.profile_solvers.

cols = [:id, :name, :nvar, :objective, :dual_feas, :neval_obj, :neval_grad, :neval_hess, :iter, :elapsed_time, :status]
header = Dict(
  :nvar => "n",
  :objective => "f(x)",
  :dual_feas => "‖∇f(x)‖",
  :neval_obj => "# f",
  :neval_grad => "# ∇f",
  :neval_hess => "# ∇²f",
  :elapsed_time => "t",
)

for solver ∈ keys(solvers)
  pretty_stats(stats[solver][!, cols], hdr_override=header)
end
first_order(df) = df.status .== :first_order
unbounded(df) = df.status .== :unbounded
solved(df) = first_order(df) .| unbounded(df)
costnames = ["time", "obj + grad + hess"]
costs = [
  df -> .!solved(df) .* Inf .+ df.elapsed_time,
  df -> .!solved(df) .* Inf .+ df.neval_obj .+ df.neval_grad .+ df.neval_hess,
]
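The Inf penalty above works because Julia's Bool is a strong zero under multiplication: false * Inf == 0.0 while true * Inf == Inf, so unsolved problems receive an infinite cost and solved ones keep their true cost. A minimal sketch with made-up statuses and times illustrating the idea:

```julia
# Hypothetical data mimicking a stats DataFrame's columns.
statuses = [:first_order, :max_iter, :first_order]
times = [0.5, 1.2, 0.8]
failed = statuses .!= :first_order   # [false, true, false]
# false * Inf == 0.0, so successful runs keep their time;
# true * Inf == Inf, so failed runs are penalized with infinite cost.
penalized = failed .* Inf .+ times   # [0.5, Inf, 0.8]
```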

using Plots
gr()

profile_solvers(stats, costs, costnames)
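If you want to keep the figure, the plot produced by profile_solvers can be saved with Plots' savefig. A short sketch, assuming the stats, costs, and costnames defined above and an example filename:

```julia
# Capture the plot object and write it to disk (filename is illustrative).
p = profile_solvers(stats, costs, costnames)
savefig(p, "benchmark_profiles.png")
```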

It is also possible to select the problems upfront by filtering OptimizationProblems.meta when building the problem list:

meta = OptimizationProblems.meta
problem_list = meta[(meta.ncon .== 0) .& .!meta.has_bounds .& (5 .<= meta.nvar .<= 100), :name]
problems = (MathOptNLPModel(eval(Meta.parse(problem))(), name=problem) for problem ∈ problem_list)