Reference

Percival.AugLagModel (Type)
AugLagModel(model, y, μ, x, fx, cx)

Given a model

\[\min \ f(x) \quad s.t. \quad c(x) = 0, \quad l ≤ x ≤ u,\]

this new model represents the subproblem of the augmented Lagrangian method

\[\min \ f(x) - yᵀc(x) + \tfrac{1}{2} μ \|c(x)\|^2 \quad s.t. \quad l ≤ x ≤ u,\]

where y is an estimate of the Lagrange multiplier vector and μ is the penalty parameter.

In addition to keeping meta and counters as any NLPModel, an AugLagModel also stores

  • model: The internal model defining $f$, $c$ and the bounds,
  • y: The multipliers estimate,
  • μ: The penalty parameter,
  • x: Reference to the last point at which the function c(x) was computed,
  • fx: Reference to f(x),
  • cx: Reference to c(x),
  • μc_y: storage for μ * cx - y,
  • store_Jv and store_JtJv: storage used in hprod!.

Use the functions update_cx!, update_y! and update_μ! to update these values.
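
For illustration, here is a minimal sketch that builds an AugLagModel by hand for a small equality-constrained problem and evaluates its objective; it assumes the positional constructor above and qualifies the update functions with the module name in case they are not exported.

using Percival, ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
x0 = nlp.meta.x0

# Build the subproblem model: multipliers y, penalty μ, last point x,
# cached objective fx (NaN until evaluated) and constraint value cx.
al = AugLagModel(nlp, nlp.meta.y0, 10.0, copy(x0), NaN, cons(nlp, x0))

obj(al, x0)                  # f(x0) - yᵀc(x0) + (μ/2)‖c(x0)‖²
Percival.update_cx!(al, x0)  # refresh cx and μc_y at x0
Percival.update_y!(al)       # first-order multiplier update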

Percival.PercivalSolver (Type)
percival(nlp)

A factorization-free augmented Lagrangian method for nonlinear optimization.

For advanced usage, first define a PercivalSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = PercivalSolver(nlp)
solve!(solver, nlp)

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = T(1e-8): absolute tolerance;
  • rtol::T = T(1e-8): relative tolerance;
  • ctol::T = T(1e-8): absolute tolerance on the feasibility;
  • max_eval::Int = 100000: maximum number of evaluations of the objective function;
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 2000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • μ::Real = T(10.0): starting value of the penalty parameter;
  • subsolver_logger::AbstractLogger = NullLogger(): logger passed to tron;
  • cgls_verbose::Int = 0: verbosity level in Krylov.cgls;
  • inity::Bool = false: if true, the algorithm uses Krylov.cgls to compute an initial approximation of the Lagrange multipliers; otherwise, nlp.meta.y0 is used;

Other keyword arguments are passed to the subproblem solver.
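
For instance, a call that overrides a few of these defaults might look like the line below (with nlp as in the Examples section):

stats = percival(nlp, atol = 1e-6, ctol = 1e-6, max_time = 60.0, verbose = 1)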

The algorithm stops when $‖c(xᵏ)‖ ≤ ctol$ and $‖P∇L(xᵏ,λᵏ)‖ ≤ atol + rtol * ‖P∇L(x⁰,λ⁰)‖$ where $P∇L(x,λ) := Proj_{l,u}(x - ∇f(x) + ∇c(x)ᵀλ) - x$.
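
As a concrete illustration, the snippet below evaluates both stopping measures at an arbitrary point with NLPModels functions; the projection helper proj and the choice of point and multipliers are only for this example.

using ADNLPModels, NLPModels, LinearAlgebra

proj(z, l, u) = clamp.(z, l, u)               # projection onto the box [l, u]

nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
x, λ = nlp.meta.x0, nlp.meta.y0
gL = grad(nlp, x) - jtprod(nlp, x, λ)         # ∇f(x) - ∇c(x)ᵀλ
dual_res = norm(proj(x - gL, nlp.meta.lvar, nlp.meta.uvar) - x)  # ‖P∇L(x,λ)‖
primal_res = norm(cons(nlp, x))               # ‖c(x)‖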

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following (a usage sketch follows this list):

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of current projected gradient of Lagrangian;
    • stats.primal_feas: norm of the feasibility residual;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.multipliers: current estimate of the Lagrange multipliers associated with the equality constraints;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.
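
For example, assuming the callback is passed through the callback keyword argument as in other JuliaSmoothOptimizers solvers, a callback that stops the solver once the feasibility residual drops below an arbitrary threshold could look like:

using Percival, ADNLPModels

nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))

function my_callback(nlp, solver, stats)
  if stats.primal_feas < 1e-4   # feasibility is already good enough
    stats.status = :user        # request an early stop
  end
end

stats = percival(nlp, callback = my_callback)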

Examples

using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = percival(nlp)

# output

"Execution stats: first-order stationary"
using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
solver = PercivalSolver(nlp)
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
Percival.percival (Method)
percival(nlp)

A factorization-free augmented Lagrangian method for nonlinear optimization.

For advanced usage, first define a PercivalSolver to preallocate the memory used in the algorithm, and then call solve!:

solver = PercivalSolver(nlp)
solve!(solver, nlp)

Arguments

  • nlp::AbstractNLPModel{T, V} is the model to solve, see NLPModels.jl.

Keyword arguments

  • x::V = nlp.meta.x0: the initial guess;
  • atol::T = T(1e-8): absolute tolerance;
  • rtol::T = T(1e-8): relative tolerance;
  • ctol::T = T(1e-8): absolute tolerance on the feasibility;
  • max_eval::Int = 100000: maximum number of evaluations of the objective function;
  • max_time::Float64 = 30.0: maximum time limit in seconds;
  • max_iter::Int = 2000: maximum number of iterations;
  • verbose::Int = 0: if > 0, display iteration details every verbose iteration;
  • μ::Real = T(10.0): starting value of the penalty parameter;
  • subsolver_logger::AbstractLogger = NullLogger(): logger passed to tron;
  • cgls_verbose::Int = 0: verbosity level in Krylov.cgls;
  • inity::Bool = false: if true, the algorithm uses Krylov.cgls to compute an initial approximation of the Lagrange multipliers; otherwise, nlp.meta.y0 is used;

Other keyword arguments are passed to the subproblem solver.

The algorithm stops when $‖c(xᵏ)‖ ≤ ctol$ and $‖P∇L(xᵏ,λᵏ)‖ ≤ atol + rtol * ‖P∇L(x⁰,λ⁰)‖$ where $P∇L(x,λ) := Proj_{l,u}(x - ∇f(x) + ∇c(x)ᵀλ) - x$.

Output

The value returned is a GenericExecutionStats, see SolverCore.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver.x: current iterate;
  • solver.gx: current gradient;
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of current projected gradient of Lagrangian;
    • stats.primal_feas: norm of the feasibility residual;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.multipliers: current estimate of the Lagrange multipliers associated with the equality constraints;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.

Examples

using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
stats = percival(nlp)

# output

"Execution stats: first-order stationary"
using Percival, ADNLPModels
nlp = ADNLPModel(x -> sum(x.^2), ones(3), x -> [x[1]], zeros(1), zeros(1))
solver = PercivalSolver(nlp)
stats = solve!(solver, nlp)

# output

"Execution stats: first-order stationary"
Percival.reset_subproblem! (Method)
reset_subproblem!(solver::PercivalSolver{T, V}, model::AbstractNLPModel{T, V})

Specializes the SolverCore.reset! function to percival's context.

Percival.update_cx! (Method)
update_cx!(nlp, x)

Given an AugLagModel, if x != nlp.x, updates the internal value nlp.cx by calling cons on nlp.model and resets nlp.fx to NaN. Also updates nlp.μc_y.

Percival.update_fxcx! (Method)
update_fxcx!(nlp, x)

Given an AugLagModel, if x != nlp.x, updates the internal values nlp.fx and nlp.cx by calling objcons on nlp.model. Also updates nlp.μc_y. Only fx is returned.

Percival.update_y! (Method)
update_y!(nlp)

Given an AugLagModel, updates nlp.y = -nlp.μc_y and then updates nlp.μc_y accordingly.
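
As a rough sketch, one outer augmented-Lagrangian step on an AugLagModel al (built as in the AugLagModel example above) chains these updates after the bound-constrained subproblem has been solved for xk; the factor 10 is only illustrative, and update_μ! is assumed to take the new penalty value as its second argument.

Percival.update_cx!(al, xk)        # refresh c(xk) and μc_y at the subproblem solution
Percival.update_y!(al)             # multiplier update y ← y - μ c(xk)
Percival.update_μ!(al, 10 * al.μ)  # increase the penalty if feasibility has not improved enough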
