Reference

Base.join (Method)
df = join(stats, cols; kwargs...)

Join a dictionary of DataFrames given by stats. Column :id is required in all DataFrames. The resulting DataFrame will have column :id and, for each solver, the columns listed in cols.

Inputs:

  • stats::Dict{Symbol,DataFrame}: Dictionary of DataFrames per solver. Each key is a different solver;
  • cols::Array{Symbol}: which columns of the DataFrames to include in the joined DataFrame.

Keyword arguments:

  • invariant_cols::Array{Symbol,1}: Invariant columns to be added, i.e., columns that don't change depending on the solver (such as name of problem, number of variables, etc.);
  • hdr_override::Dict{Symbol,String}: Override header names.

Output:

  • df::DataFrame: Resulting dataframe.
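
Example: a minimal sketch of a call; the solver names and problem data below are fabricated for illustration.

using DataFrames, SolverBenchmark

stats = Dict(
  :alpha => DataFrame(id = [1, 2], name = ["ROSENBR", "WOODS"],
                      status = [:first_order, :max_iter], elapsed_time = [0.1, 0.4]),
  :beta  => DataFrame(id = [1, 2], name = ["ROSENBR", "WOODS"],
                      status = [:first_order, :first_order], elapsed_time = [0.2, 0.3]),
)
df = join(stats, [:status, :elapsed_time], invariant_cols = [:name])

The result contains :id, the invariant :name column, and one set of the requested columns per solver.
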
BenchmarkProfiles.performance_profile (Method)
performance_profile(stats, cost, args...; b = PlotsBackend(), kwargs...)

Produce a performance profile comparing solvers in stats using the cost function.

Inputs:

  • stats::Dict{Symbol,DataFrame}: pairs of :solver => df;
  • cost::Function: cost function applied to each df. Should return a vector with the cost of solving the problem at each row;
    • 0 cost is not allowed;
    • If the solver did not solve the problem, return Inf or a negative number.
  • b::BenchmarkProfiles.AbstractBackend: backend used for the plot.

Examples of cost functions:

  • cost(df) = df.elapsed_time: Simple elapsed_time cost. Assumes the solver solved the problem.
  • cost(df) = (df.status .!= :first_order) * Inf + df.elapsed_time: Takes into consideration the status of the solver.
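
For example, assuming stats comes from bmark_solvers and Plots is installed for the default PlotsBackend, the snippet

using SolverBenchmark, Plots

cost(df) = (df.status .!= :first_order) * Inf + df.elapsed_time
p = performance_profile(stats, cost)

produces an elapsed-time profile in which unsuccessful runs are treated as failures.
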
SolverBenchmark.LTXformat (Function)
LTXformat(x)

Format x according to its type. For types Signed, AbstractFloat, AbstractString and Symbol, use a predefined formatting string passed to @sprintf and then the corresponding safe_latex_<type> function.

For type Missing, return "NA".
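
For example (the exact strings depend on the predefined formatting):

using SolverBenchmark

LTXformat(3.14)      # float formatted with @sprintf, then passed to its safe_latex_<type> function
LTXformat(:max_iter) # Symbol formatted and escaped for LaTeX
LTXformat(missing)   # returns "NA"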

SolverBenchmark.MDformat (Function)
MDformat(x)

Format x according to its type. For types Signed, AbstractFloat, AbstractString and Symbol, use a predefined formatting string passed to @sprintf.

For type Missing, return "NA".

SolverBenchmark.bmark_results_to_dataframes (Method)
stats = bmark_results_to_dataframes(results)

Convert PkgBenchmark results to a dictionary of DataFrames. The benchmark SUITE should have been constructed in the form

SUITE[solver][case] = ...

where solver will be recorded as one of the solvers to be compared in the DataFrame and case is a test case. For example:

SUITE["CG"]["BCSSTK09"] = @benchmarkable ...
SUITE["LBFGS"]["ROSENBR"] = @benchmarkable ...

Inputs:

  • results::BenchmarkResults: the result of PkgBenchmark.benchmarkpkg

Output:

  • stats::Dict{Symbol,DataFrame}: a dictionary of DataFrames containing the benchmark results per solver.
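
A typical workflow is sketched below; the package name is hypothetical.

using PkgBenchmark, SolverBenchmark

results = benchmarkpkg("MySolverPackage")  # runs the SUITE defined as above
stats = bmark_results_to_dataframes(results)
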
SolverBenchmark.bmark_solvers (Method)
bmark_solvers(solvers :: Dict{Symbol,Any}, args...; kwargs...)

Run a set of solvers on a set of problems.

Arguments

  • solvers: a dictionary of solvers to which each problem should be passed
  • other positional arguments accepted by solve_problems, except for a solver name

Keyword arguments

Any keyword argument accepted by solve_problems

Return value

A Dict{Symbol, DataFrame} of statistics, with one entry per solver (the DataFrame returned by solve_problems).
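
Example: a minimal sketch assuming the lbfgs and trunk solvers from JSOSolvers and problems modeled with ADNLPModels (neither package is part of SolverBenchmark).

using ADNLPModels, JSOSolvers, SolverBenchmark

# generator of a small family of unconstrained problems
problems = (ADNLPModel(x -> (x[1] - a)^2 + 4 * (x[2] - x[1]^2)^2, [-1.2; 1.0]) for a = 1.0:4.0)
solvers = Dict{Symbol,Any}(:lbfgs => lbfgs, :trunk => trunk)
stats = bmark_solvers(solvers, problems)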

SolverBenchmark.count_unique (Method)
vals = count_unique(X)

Count the number of occurrences of each value in X.

Arguments

  • X: an iterable.

Return value

A Dict{eltype(X),Int} whose keys are the unique elements in X and values are their number of occurrences.

Example: the snippet

stats = load_stats("mystats.jld2")
for solver ∈ keys(stats)
  @info "$solver statuses" count_unique(stats[solver].status)
end

displays the number of occurrences of each final status for each solver in stats.

SolverBenchmark.format_table (Method)
format_table(df, formatter; kwargs...)

Format the data frame into a table using formatter. Used by other table functions.

Inputs:

  • df::DataFrame: Dataframe of a solver. Each row is a problem.
  • formatter::Function: A function that formats its input according to its type. See LTXformat or MDformat for examples.

Keyword arguments:

  • cols::Array{Symbol}: which columns of df to include. Defaults to all columns;

  • ignore_missing_cols::Bool: If true, filters out the columns in cols that don't exist in the data frame. Useful when creating tables for solvers in a loop where one solver has a column the other doesn't. If false, throws BoundsError in that situation.

  • fmt_override::Dict{Symbol,Function}: Overrides format for a specific column, such as

    fmt_override=Dict(:name => x->@sprintf("%-10s", x))

  • hdr_override::Dict{Symbol,String}: Overrides header names, such as hdr_override=Dict(:name => "Name").

Outputs:

  • header::Array{String,1}: header vector.
  • table::Array{String,2}: formatted table.
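
Example sketch with fabricated data; format_table is mostly called by latex_table and markdown_table, but it can be used directly:

using DataFrames, SolverBenchmark

df = DataFrame(name = ["ROSENBR", "WOODS"], status = [:first_order, :max_iter], elapsed_time = [0.01, 0.35])
header, table = SolverBenchmark.format_table(df, MDformat, cols = [:name, :status, :elapsed_time])
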
SolverBenchmark.gradient_highlighter (Method)
hl = gradient_highlighter(df, col; cmap=:coolwarm)

A PrettyTables highlighter that applies a color gradient to the values in the column given by col.

Input Arguments

  • df::DataFrame dataframe to which the highlighter will be applied;
  • col::Symbol a symbol to indicate which column the highlighter will be applied to.

Keyword Arguments

  • cmap::Symbol color scheme to use, from ColorSchemes.
SolverBenchmark.judgement_results_to_dataframes (Method)
stats = judgement_results_to_dataframes(judgement)

Convert BenchmarkJudgement results to a dictionary of DataFrames.

Inputs:

  • judgement::BenchmarkJudgement: the result of, e.g.,

    commit = benchmarkpkg(mypkg)  # benchmark a commit or pull request
    main = benchmarkpkg(mypkg, "main")  # baseline benchmark
    judgement = judge(commit, main)

Output:

  • stats::Dict{Symbol,Dict{Symbol,DataFrame}}: a dictionary of Dict{Symbol,DataFrame}s containing the target and baseline benchmark results. The elements of this dictionary are the same as those returned by bmark_results_to_dataframes(main) and bmark_results_to_dataframes(commit).
SolverBenchmark.latex_table (Method)
latex_table(io, df; kwargs...)

Create a LaTeX longtable from a DataFrame using LaTeXTabulars, and format the output for a publication-ready table.

Inputs:

  • io::IO: where to send the table, e.g.:

    open("file.tex", "w") do io
      latex_table(io, df)
    end

    If left out, io defaults to stdout.

  • df::DataFrame: Dataframe of a solver. Each row is a problem.

Keyword arguments:

  • cols::Array{Symbol}: which columns of df to include. Defaults to all columns;

  • ignore_missing_cols::Bool: If true, filters out the columns in cols that don't exist in the data frame. Useful when creating tables for solvers in a loop where one solver has a column the other doesn't. If false, throws BoundsError in that situation.

  • fmt_override::Dict{Symbol,Function}: Overrides format for a specific column, such as

    fmt_override=Dict(:name => x -> @sprintf("\\textbf{%s}", x) |> safe_latex_AbstractString)
  • hdr_override::Dict{Symbol,String}: Overrides header names, such as hdr_override=Dict(:name => "Name"), where LaTeX escaping should be used if necessary.

We recommend using the safe_latex_<type> functions when overriding formats, unless you're sure you don't need them.
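
Example: a sketch that writes a table with a custom format for a hypothetical :objective column.

using DataFrames, Printf, SolverBenchmark

df = DataFrame(name = ["ROSENBR", "WOODS"], status = [:first_order, :max_iter], objective = [1.2e-8, 3.4e1])
open("results.tex", "w") do io
  latex_table(io, df,
              fmt_override = Dict(:objective => x -> @sprintf("%8.1e", x) |> safe_latex_AbstractFloat),
              hdr_override = Dict(:name => "Problem"))
end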

SolverBenchmark.load_stats (Method)
stats = load_stats(filename; kwargs...)

Arguments

  • filename::AbstractString: the input file name.

Keyword arguments

  • key::String="stats": the key under which the data can be read in filename. The key should be the same as the one used when save_stats was called.

Return value

A Dict{Symbol,DataFrame} containing the statistics stored in file filename. The user should import DataFrames before calling load_stats.
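
Example, assuming mystats.jld2 was previously written by save_stats:

using DataFrames, SolverBenchmark

stats = load_stats("mystats.jld2", key = "stats")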

SolverBenchmark.markdown_table (Method)
markdown_table(io, df; kwargs...)

Create a markdown table from a DataFrame using PrettyTables and format the output.

Inputs:

  • io::IO: where to send the table, e.g.:

    open("file.md", "w") do io
      markdown_table(io, df)
    end

    If left out, io defaults to stdout.

  • df::DataFrame: Dataframe of a solver. Each row is a problem.

Keyword arguments:

  • hl: a highlighter or tuple of highlighters to color individual cells (when output to screen). By default, we use a simple passfail_highlighter.

  • all other keyword arguments are passed directly to format_table.
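
Example sketch with fabricated data; on screen, the default passfail_highlighter colors the row whose status indicates a failure:

using DataFrames, SolverBenchmark

df = DataFrame(name = ["ROSENBR", "WOODS"], status = [:first_order, :max_iter], elapsed_time = [0.01, 0.35])
markdown_table(stdout, df, hdr_override = Dict(:elapsed_time => "time (s)"))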

SolverBenchmark.passfail_highlighter (Function)
hl = passfail_highlighter(df, c=crayon"bold red")

A PrettyTables highlighter that colors failures in bold red by default.

Input Arguments

  • df::DataFrame dataframe to which the highlighter will be applied. df must have the id column.

If df has the :status property, the highlighter will be applied to rows for which df.status indicates a failure. A failure is any status different from :first_order or :unbounded.

SolverBenchmark.passfail_latex_highlighter (Function)
hl = passfail_latex_highlighter(df)

A PrettyTables LaTeX highlighter that colors failures in bold red by default.

See the documentation of passfail_highlighter for more information.

SolverBenchmark.pretty_latex_stats (Method)
pretty_latex_stats(df; kwargs...)

Pretty-print a DataFrame as a LaTeX longtable using PrettyTables.

See the pretty_stats documentation. Specific settings in this method are:

  • the backend is set to :latex;
  • the table type is set to :longtable;
  • highlighters, if any, should be LaTeX highlighters.

See the PrettyTables documentation for more information.

SolverBenchmark.pretty_stats (Method)
pretty_stats(df; kwargs...)

Pretty-print a DataFrame using PrettyTables.

Arguments

  • io::IO: an IO stream to which the table will be output (default: stdout);
  • df::DataFrame: the DataFrame to be displayed. If only certain columns of df should be displayed, they should be extracted explicitly, e.g., by passing df[!, [:col1, :col2, :col3]].

Keyword Arguments

  • col_formatters::Dict{Symbol, String}: a Dict of format strings to apply to selected columns of df. The keys of col_formatters should be symbols, so that specific formatting can be applied to specific columns. By default, default_formatters is used, based on the column type. If PrettyTables formatters are passed using the formatters keyword argument, they are applied before those in col_formatters.

  • hdr_override::Dict{Symbol, String}: a Dict of those headers that should be displayed differently than simply according to the column name (default: empty). Example: Dict(:col1 => "column 1").

All other keyword arguments are passed directly to pretty_table. In particular,

  • use tf=tf_markdown to display a Markdown table;
  • do not use this function for LaTeX output; use pretty_latex_stats instead;
  • any PrettyTables highlighters can be given, but see the predefined passfail_highlighter and gradient_highlighter.
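
Example sketch with fabricated data:

using DataFrames, PrettyTables, SolverBenchmark

df = DataFrame(name = ["ROSENBR", "WOODS"], status = [:first_order, :max_iter], elapsed_time = [0.01, 0.35])
pretty_stats(df,
             col_formatters = Dict(:elapsed_time => "%8.2f"),
             hdr_override = Dict(:elapsed_time => "time (s)"),
             tf = tf_markdown)
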
SolverBenchmark.profile_package (Method)
p = profile_package(judgement)

Produce performance profiles based on PkgBenchmark.BenchmarkJudgement results.

Inputs:

  • judgement::BenchmarkJudgement: the result of, e.g.,

    commit = benchmarkpkg(mypkg)  # benchmark a commit or pull request
    main = benchmarkpkg(mypkg, "main")  # baseline benchmark
    judgement = judge(commit, main)
SolverBenchmark.profile_solvers (Method)
p = profile_solvers(stats, costs, costnames;
                    width = 400, height = 400,
                    b = PlotsBackend(), kwargs...)

Produce performance profiles comparing solvers based on the data in stats.

Inputs:

  • stats::Dict{Symbol,DataFrame}: a dictionary of DataFrames containing the benchmark results per solver (e.g., produced by bmark_results_to_dataframes())
  • costs::Vector{Function}: a vector of functions specifying the measures to use in the profiles
  • costnames::Vector{String}: names to be used as titles of the profiles.

Keyword inputs:

  • width::Int: Width of each individual plot (Default: 400)
  • height::Int: Height of each individual plot (Default: 400)
  • b::BenchmarkProfiles.AbstractBackend: backend used for the plot.

Additional kwargs are passed to the plot call.

Output: A Plots.jl plot representing a set of performance profiles comparing the solvers. The set contains performance profiles comparing all the solvers together on the measures given in costs. If there are more than two solvers, additional profiles are produced comparing the solvers two by two on each cost measure.
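
Example sketch, assuming stats comes from bmark_solvers and Plots is installed for the default PlotsBackend:

using SolverBenchmark, Plots

costs = [df -> (df.status .!= :first_order) * Inf + df.elapsed_time,
         df -> (df.status .!= :first_order) * Inf + df.neval_obj]
costnames = ["elapsed time", "objective evaluations"]
p = profile_solvers(stats, costs, costnames)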

SolverBenchmark.profile_solvers (Method)
p = profile_solvers(results)

Produce performance profiles based on PkgBenchmark.benchmarkpkg results.

Inputs:

  • results::BenchmarkResults: the result of PkgBenchmark.benchmarkpkg.
SolverBenchmark.quick_summary (Method)
statuses, avgs = quick_summary(stats; kwargs...)

Call count_unique and compute a few average measures for each solver in stats.

Arguments

  • stats::Dict{Symbol,DataFrame}: benchmark statistics such as returned by bmark_solvers.

Keyword arguments

  • cols::Vector{Symbol}: symbols indicating DataFrame columns in solver statistics for which we compute averages. Default: [:iter, :neval_obj, :neval_grad, :neval_hess, :neval_hprod, :elapsed_time].

Return value

  • statuses::Dict{Symbol,Dict{Symbol,Int}}: a dictionary of the number of occurrences of each final status for each solver in stats. Each value in this dictionary is returned by count_unique;
  • avgs::Dict{Symbol,Dict{Symbol,Float64}}: a dictionary that contains averages of performance measures across all problems for each solver. Each avgs[solver] is a Dict{Symbol,Float64} where the measures are those given in the keyword argument cols and values are averages of those measures across all problems.

Example: the snippet

statuses, avgs = quick_summary(stats)
for solver ∈ keys(stats)
  @info "statistics for" solver statuses[solver] avgs[solver]
end

displays a quick summary and the averages for each solver.

SolverBenchmark.safe_latex_AbstractFloat (Method)
safe_latex_AbstractFloat(s::AbstractString)

Format the string representation of floats for output in a LaTeX table. Replaces infinite values with the \infty LaTeX sequence. If the float is represented in exponential notation, the mantissa and exponent are wrapped in math delimiters. Otherwise, the entire float is wrapped in math delimiters.

SolverBenchmark.save_stats (Method)
save_stats(stats, filename; kwargs...)

Write the benchmark statistics stats to a file named filename.

Arguments

  • stats::Dict{Symbol,DataFrame}: benchmark statistics such as returned by bmark_solvers
  • filename::AbstractString: the output file name.

Keyword arguments

  • force::Bool=false: whether to overwrite filename if it already exists
  • key::String="stats": the key under which the data can be read from filename later.

Return value

This method throws an error if filename exists and force == false. On success, it returns the value of jldopen(filename, "w").
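
Example, assuming stats is a Dict{Symbol,DataFrame} such as returned by bmark_solvers:

using SolverBenchmark

save_stats(stats, "mystats.jld2", force = true)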

SolverBenchmark.solve_problems (Method)
solve_problems(solver, problems; kwargs...)

Apply a solver to a set of problems.

Arguments

  • solver: the solver function to apply to each problem;
  • problems: the set of problems to pass to the solver, as an iterable of AbstractNLPModel. It is recommended to use a generator expression (necessary for CUTEst problems).

Keyword arguments

  • solver_logger::AbstractLogger: logger wrapping the solver call (default: NullLogger);
  • reset_problem::Bool: reset the problem's counters before solving (default: true);
  • skipif::Function: function applied to a problem that returns whether to skip it (default: x->false);
  • colstats::Vector{Symbol}: summary statistics for the logger to output during the benchmark (default: [:name, :nvar, :ncon, :status, :elapsed_time, :objective, :dual_feas, :primal_feas]);
  • info_hdr_override::Dict{Symbol,String}: header overrides for the summary statistics (default: use default headers);
  • prune: do not include skipped problems in the final statistics (default: true);
  • any other keyword argument to be passed to the solver.

Return value

  • a DataFrame where each row is a problem, minus the skipped ones if prune is true.
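
Example sketch; ADNLPModels and JSOSolvers are not part of SolverBenchmark, and the lbfgs solver and the name keyword of ADNLPModel are assumptions:

using ADNLPModels, JSOSolvers, SolverBenchmark

# generator of problems, as recommended above; nothing is skipped here since nvar is 2
problems = (ADNLPModel(x -> (x[1] - a)^2 + 4 * (x[2] - x[1]^2)^2, [-1.2; 1.0], name = "rosenbrock-$a") for a = 1.0:4.0)
stats_df = solve_problems(lbfgs, problems, skipif = nlp -> nlp.meta.nvar > 100)
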
SolverBenchmark.to_gist (Method)
posted_gist = to_gist(results)

Create and post a gist with the benchmark results and performance profiles.

Inputs:

  • results::BenchmarkResults: the result of PkgBenchmark.benchmarkpkg

Output:

  • the return value of GitHub.jl's create_gist.
SolverBenchmark.to_gist (Method)
posted_gist = to_gist(results, p)

Create and post a gist with the benchmark results and performance profiles.

Inputs:

  • results::BenchmarkResults: the result of PkgBenchmark.benchmarkpkg
  • p: the result of profile_solvers.

Output:

  • the return value of GitHub.jl's create_gist.