Reference

DCISolver.TR_solvers (Constant)
TR_solvers = Dict(:TR_lsmr => TR_lsmr_struct, :TR_dogleg => TR_dogleg_struct)

Dictionary of the possible structures for the trust-region step.

DCISolver.comp_λ_solvers (Constant)
comp_λ_solvers = Dict(:cgls => comp_λ_cgls)

Dictionary of the possible structures for the computation of the Lagrange multipliers.

DCISolver.solver_correspondence (Constant)
solver_correspondence = Dict(:ma57 => MA57Struct, :ldlfact => LDLFactorizationStruct)

Dictionary of the possible structures for the factorization.

DCISolver.MetaDCI (Type)
MetaDCI(x, y; kwargs...)

Structure containing all the parameters used in the dci call. x is an initial guess, and y is an initial guess for the Lagrange multiplier. Returns a MetaDCI structure.

Arguments

The keyword arguments may include:

  • atol::T=T(1e-5): absolute tolerance.
  • rtol::T=T(1e-5): relative tolerance.
  • ctol::T=T(1e-5): feasibility tolerance.
  • unbounded_threshold::T=T(-1e5): below this threshold the problem is unbounded.
  • max_eval::Integer=50000: maximum number of cons + obj evaluations.
  • max_time::Float64=120.0: maximum number of seconds.
  • max_iter::Integer=500: maximum number of iterations.
  • max_iter_normal_step::Integer=typemax(Int): maximum number of iterations in normal step.
  • λ_struct::comp_λ_cgls=comp_λ_cgls(length(x0), length(y0), S): parameters for the computation of the Lagrange multipliers (see comp_λ_cgls).
  • linear_solver::Symbol=:ldlfact: Solver for the factorization; alternative option: :ma57.
  • decrease_γ::T=T(0.1): Regularization for the factorization: reduce γ between tangent steps, if possible while keeping it > √eps(T).
  • increase_γ::T=T(100.0): Regularization for the factorization: increase γ during the factorization, if possible while keeping it < 1/√eps(T).
  • δmin::T=√eps(T): Regularization for the factorization: smallest value of δ used for the regularization.
  • feas_step::Symbol=:feasibility_step: Normal step.
  • feas_η₁::T=T(1e-3): Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
  • feas_η₂::T=T(0.66): Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
  • feas_σ₁::T=T(0.25): Feasibility step: decrease coefficient of the trust-region radius.
  • feas_σ₂::T=T(2.0): Feasibility step: increase coefficient of the trust-region radius.
  • feas_Δ₀::T=one(T): Feasibility step: initial radius.
  • bad_steps_lim::Integer=3: Feasibility step: number of consecutive bad steps before using a second-order step.
  • feas_expected_decrease::T=T(0.95): Feasibility step: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • TR_compute_step::Symbol=:TR_lsmr: Compute the direction in the feasibility step; alternative option: :TR_dogleg.
  • TR_struct::Union{TR_lsmr_struct, TR_dogleg_struct}=TR_lsmr_struct(length(x0), length(y0), S): parameters for the solver used in the trust-region step (see TR_lsmr_struct and TR_dogleg_struct).
  • compρ_p1::T=T(0.75): update ρ as ρ = max(min(ngp, p1) * ρmax, ϵ).
  • compρ_p2::T=T(0.90): update ρ as ρ = primalnorm * p2 if not sufficiently feasible.
  • ρbar::T=T(2.0): radius of the larger cylinder is ρbar * ρ.
  • tan_Δ::T=one(T): Tangent step trust-region parameters: initial trust-region radius.
  • tan_η₁::T=T(1e-2): Tangent step trust-region parameters: decrease the trust-region radius when Ared/Pred < η₁.
  • tan_η₂::T=T(0.75): Tangent step trust-region parameters: increase the trust-region radius when Ared/Pred > η₂.
  • tan_σ₁::T=T(0.25): Tangent step trust-region parameters: decrease coefficient of the trust-region radius.
  • tan_σ₂::T=T(2.0): Tangent step trust-region parameters: increase coefficient of the trust-region radius.
  • tan_small_d::T=eps(T): Tangent step trust-region parameters: ‖d‖ below this threshold is considered too small.
  • increase_Δtg::T=10: Tangent step trust-region parameters: increase Δtg between tangent steps, if possible while keeping it < 1/√eps(T).

For more details, we refer to the package documentation in fine-tuneDCI.md.
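
A minimal construction sketch (the keyword values below are illustrative, not recommended settings):

using DCISolver
x0, y0 = zeros(2), zeros(1)
meta = DCISolver.MetaDCI(x0, y0, max_iter = 200, linear_solver = :ldlfact, TR_compute_step = :TR_dogleg)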

DCISolver.SymCOOSolver (Type)

A SymCOOSolver is an interface allowing simple usage of different solvers. Ideally, having rows, cols, vals and the dimension ndim of a symmetric matrix should allow the user to call

M = LinearSolver(ndim, rows, cols, vals)
factorize!(M)
solve!(x, M, b) # Also x = M \ b

Only the lower triangle of the matrix should be passed.
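
A hedged usage sketch of this interface, assuming LDLFactorizationStruct (see solver_correspondence) follows the generic LinearSolver(ndim, rows, cols, vals) pattern; the matrix data is illustrative:

using DCISolver
rows, cols, vals = [1, 2, 2], [1, 1, 2], [4.0, 1.0, 3.0]  # lower triangle of [4 1; 1 3]
M = DCISolver.LDLFactorizationStruct(2, rows, cols, vals)
DCISolver.factorize!(M)                  # compute the factorization
x, b = zeros(2), [1.0, 2.0]
success(M) && DCISolver.solve!(x, M, b)  # x now solves M x = b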

DCISolver.TR_dogleg_struct (Type)
`TR_dogleg_struct(m, n, ::DataType; kwargs...)`

Keyword arguments correspond to input parameters of lsmr from Krylov.jl used in the computation of the dogleg for the trust-region step. Returns a TR_dogleg_struct structure.

DCISolver.TR_lsmr_struct (Type)
`TR_lsmr_struct(m, n, ::DataType; kwargs...)`

Keyword arguments correspond to input parameters of lsmr from Krylov.jl used in the computation of the trust-region step. Returns a TR_lsmr_struct structure.

DCISolver.comp_λ_cgls (Type)
`comp_λ_cgls(m, n, ::DataType; kwargs...)`

Keyword arguments correspond to input parameters of cgls from Krylov.jl used in the computation of the Lagrange multipliers. Returns a comp_λ_cgls structure.
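
The three structures above follow the same construction pattern; a hedged sketch, assuming a problem with nvar variables and ncon constraints, and keyword names taken from the corresponding Krylov.jl solvers:

using DCISolver
nvar, ncon = 10, 3  # illustrative sizes
λ_struct  = DCISolver.comp_λ_cgls(nvar, ncon, Float64, atol = 1e-8)
TR_struct = DCISolver.TR_lsmr_struct(nvar, ncon, Float64, atol = 1e-8)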

Base.success (Function)
success(M :: SymCOOSolver)

Returns whether factorize!(M) was successful.

DCISolver.TR_dogleg (Method)
TR_dogleg(cz, Jz, ctol, Δ, normcz, Jd, meta)

Compute a direction d such that

\[\begin{aligned} \min_{d} \quad & \|cz + Jz d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]

using a dogleg.

Output

  • d: solution
  • Jd: product of the solution with J.
  • infeasible: true if the problem is infeasible.
  • solved: true if the problem has been successfully solved.
DCISolver.TR_lsmr (Method)
TR_lsmr(cz, Jz, ctol, Δ, normcz, Jd, meta)

Compute a direction d such that

\[\begin{aligned} \min_{d} \quad & \|cz + Jz d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]

using lsmr method from Krylov.jl.

Output

  • d: solution
  • Jd: product of the solution with J.
  • infeasible: true if the problem is infeasible.
  • solved: true if the problem has been successfully solved.
DCISolver._compute_gradient_step! (Method)
_compute_gradient_step!(nlp, gBg, g, Δ, dcp)

Solve the following problem

\[\begin{aligned} \min_{α} \quad & q(-α g) \\ \text{s.t.} \quad & \|αg\| \leq \Delta. \end{aligned}\]

Output

The returned arguments are:

  • dcp_on_boundary true if ‖αg‖ = Δ,
  • dcp = - α g,
  • dcpBdcp = α^2 gBg,
  • α the solution.
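
This one-dimensional problem has a closed form, with q(d) = ½ dᵀBd + gᵀd; a hedged sketch with illustrative data, not the package code:

using LinearAlgebra
g, Δ = [1.0, -2.0], 0.5                # illustrative gradient and radius
gBg = dot(g, [2.0 0.0; 0.0 1.0] * g)   # gᵀ B g for a sample B
α = gBg > 0 ? min(dot(g, g) / gBg, Δ / norm(g)) : Δ / norm(g)
dcp = -α .* g                          # Cauchy step; on the boundary iff α == Δ / norm(g)
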
DCISolver._compute_newton_step! (Method)
_compute_newton_step!(nlp, LDL, g, γ, δ, dcp, vals, meta, workspace)

Compute a Newton direction dn via a factorization of an SQD matrix.

Output

  • dn: solution
  • dnBdn and dcpBdn: updated scalar products.
  • γ_too_large: true if the regularization is too large.
  • γ: updated value of the regularization γ.
  • δ: updated value of the regularization δ.
  • vals: updated value of the SQD matrix.
DCISolver._compute_step_length (Method)
_compute_step_length(norm2dn, dotdndcp, norm2dcp, Δ)

Given two directions dcp and dn, compute the largest 0 ≤ τ ≤ 1 such that ‖dn + τ (dcp - dn)‖ = Δ. Returns τ.
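
The returned τ is a root of a scalar quadratic; a hedged sketch under the stated inputs (step_length_sketch is an illustrative helper, not the package function):

# ‖dn + τ(dcp - dn)‖² = Δ² reads a τ² + b τ + c = 0 with
# a = ‖dcp - dn‖², b = 2 dnᵀ(dcp - dn), c = ‖dn‖² - Δ².
function step_length_sketch(norm2dn, dotdndcp, norm2dcp, Δ)
  a = norm2dcp - 2 * dotdndcp + norm2dn
  b = 2 * (dotdndcp - norm2dn)
  c = norm2dn - Δ^2
  a == 0 && return one(Δ)
  τ = (-b + sqrt(max(b^2 - 4 * a * c, zero(Δ)))) / (2 * a)
  return clamp(τ, zero(Δ), one(Δ))
end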

DCISolver.compute_descent_direction! (Method)
(d, dBd, status, γ, δ, vals) = compute_descent_direction!(nlp::AbstractNLPModel, gBg, g, Δ, LDL, γ, δ, vals, d, meta, workspace)

Compute a direction d solution of

\[\begin{aligned} \min_{d} \quad & q(d) \\ \text{s.t.} \quad & \|d\| \leq \Delta. \end{aligned}\]

Output

The returned arguments are:

  • d: the computed direction.
  • dBd: updated scalar product.
  • status: computation status.
  • γ, δ: updated values for the regularization parameters.
  • vals: updated values for the SQD system.

Return status has four possible outcomes:

  • :cauchy_step
  • :newton
  • :dogleg
  • :interior_cauchy_step when γ is too large.
DCISolver.compute_gBg (Method)
compute_gBg(nlp, rows, cols, vals, ∇ℓzλ)

Compute gBg = ∇ℓzλ' * B * ∇ℓzλ, where B is a symmetric sparse matrix whose lower triangle is given in COO format.
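
A hedged sketch of this product when only the lower triangle of the symmetric B is stored in COO format (gBg_sketch is an illustrative helper, not the package function):

function gBg_sketch(rows, cols, vals, v)
  acc = zero(eltype(v))
  for (i, j, Bij) in zip(rows, cols, vals)
    acc += (i == j ? 1 : 2) * Bij * v[i] * v[j]  # off-diagonal entries appear twice in B
  end
  return acc
end
gBg_sketch([1, 2, 2], [1, 1, 2], [2.0, 1.0, 3.0], [1.0, -1.0])  # lower triangle of [2 1; 1 3], returns 3.0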

DCISolver.compute_lx! (Method)
compute_lx!(Jx, ∇fx, λ, meta)

Compute the least-squares solution λ of min ‖Jx' λ - ∇fx‖ using solvers from Krylov.jl, as defined by meta.λ_struct.comp_λ_solver. Return the solution λ.
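
A hedged sketch of the underlying least-squares problem, calling cgls from Krylov.jl directly on illustrative data (the package reuses the preallocated buffers in meta.λ_struct, omitted here):

using Krylov
Jx  = [1.0 2.0 3.0; 4.0 5.0 6.0]  # illustrative Jacobian, ncon = 2, nvar = 3
∇fx = [1.0, 1.0, 1.0]
λ, cgls_stats = cgls(Jx', ∇fx)    # least-squares solution of Jxᵀ λ ≈ ∇fx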

DCISolver.compute_ρ (Method)
compute_ρ(dualnorm, primalnorm, norm∇fx, ρmax, ϵ, iter, meta::MetaDCI)
compute_ρ(dualnorm, primalnorm, norm∇fx, ρmax, ϵ, iter, p1, p2)

Update and return the trust-cylinder radius ρ.

Theory asks for 10⁻⁴ ngp ρmax < ρ ≤ ngp ρmax. There are no evaluations of functions here.

ρ = O(‖g_p(z)‖), and in the paper ρ = ν n_p(z) ρ_max with n_p(z) = ‖g_p(z)‖ / (‖g(z)‖ + 1).
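
A hedged sketch of the update, following the formulas documented here and in the MetaDCI keywords compρ_p1 and compρ_p2 (illustrative values, not the package implementation):

dualnorm, norm∇fx, primalnorm = 0.3, 1.0, 1e-2   # illustrative values
ρmax, ϵ, p1, p2 = 1.0, 1e-8, 0.75, 0.9
ngp = dualnorm / (norm∇fx + 1)                   # n_p(z) from the formula above
ρ = max(min(ngp, p1) * ρmax, ϵ)                  # see compρ_p1
# when the iterate is not sufficiently feasible, the package instead uses ρ = primalnorm * p2 (see compρ_p2)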

DCISolver.dci (Method)
dci(nlp; kwargs...)
dci(nlp, x; kwargs...)
dci(nlp, meta, x)

Compute a local minimum of an equality-constrained optimization problem using the DCI algorithm of Bielschowsky and Gomes (2008).

Arguments

  • nlp::AbstractNLPModel: the model solved, see NLPModels.jl.
  • x: Initial guess. If x is not specified, then nlp.meta.x0 is used.
  • meta: a MetaDCI structure; when calling dci(nlp; kwargs...) or dci(nlp, x; kwargs...), the keyword arguments are used to initialize it.

For advanced usage, the principal call to the solver uses a DCIWorkspace.

dci(nlp, meta, workspace)

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

References

This method implements the Dynamic Control of Infeasibility for equality-constrained problems described in

Dynamic Control of Infeasibility in Equality Constrained Optimization
Roberto H. Bielschowsky and Francisco A. M. Gomes
SIAM J. Optim., 19(3), 2008, 1299–1325.
https://doi.org/10.1137/070679557

Examples

julia> using DCISolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> stats = dci(nlp)
"Execution stats: first-order stationary"
DCISolver.factorize! (Function)
factorize!(M :: SymCOOSolver)

Calls the factorization of the symmetric solver given by M. Use success(M) to check whether the factorization was successful.

DCISolver.feasibility_step (Method)
feasibility_step(nls, x, cx, normcx, Jx, ρ, ctol, meta, workspace; kwargs...)

Approximately solves min ‖c(x)‖ using a trust-region Levenberg-Marquardt method, i.e., given the current iterate xₖ, finds a step d that minimizes ‖cₖ + Jₖd‖.

Arguments

  • η₁::AbstractFloat = meta.feas_η₁: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::AbstractFloat = meta.feas_η₂: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::AbstractFloat = meta.feas_σ₁: decrease coefficient of the trust-region radius.
  • σ₂::AbstractFloat = meta.feas_σ₂: increase coefficient of the trust-region radius.
  • Δ₀::T = meta.feas_Δ₀: initial trust-region radius.
  • bad_steps_lim::Integer = meta.bad_steps_lim: consecutive bad steps before using a second order step.
  • expected_decrease::T = meta.feas_expected_decrease: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • max_eval::Int = 1_000: maximum evaluations.
  • max_time::AbstractFloat = 60.0: maximum time.
  • max_feas_iter::Int = typemax(Int64): maximum number of iterations.

Output

  • z, cz, normcz, Jz: the new iterate, and updated evaluations.
  • status: Computation status. Possible outcomes are: :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
DCISolver.normal_step! (Method)
normal_step!(nlp, x, cx, Jx, fx, ∇fx, λ, ℓxλ, ∇ℓxλ, dualnorm, primalnorm, ρmax, ϵp, max_eval, max_time, max_iter, meta, workspace)

Normal step: find z such that ‖h(z)‖ ≤ ρ, where ρ is the trust-cylinder radius.

Output

  • z, cz, fz, ℓzλ, ∇ℓzλ: the new iterate, and updated evaluations.
  • ρ: updated trust-cylinder radius.
  • primalnorm, dualnorm: updated primal and dual feasibility norms.
  • status: Computation status. The possible outcomes are: :init_success, :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
DCISolver.regularized_coo_saddle_system! (Method)
regularized_coo_saddle_system!(nlp, rows, cols, vals, γ = γ, δ = δ)

Compute the structure of the saddle system [H + γI Jᵀ; J -δI] in COO format (rows, cols, vals), in the following order: H, J, γ, -δ.
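
A hedged illustration of that documented ordering for n = 2 variables and m = 1 constraint (placeholder values, not the exact package layout):

γ, δ = 0.1, 0.1
H = [4.0 1.0; 1.0 3.0]  # lower triangle stored: (1,1), (2,1), (2,2)
J = [1.0 2.0]
rows = [1, 2, 2, 3, 3, 1, 2, 3]
cols = [1, 1, 2, 1, 2, 1, 2, 3]
vals = [H[1,1], H[2,1], H[2,2], J[1,1], J[1,2], γ, γ, -δ]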

DCISolver.solve! (Function)
solve!(x, M :: SymCOOSolver, b)

Solve the system $M x = b$. factorize!(M) should be called first.

DCISolver.tangent_step! (Method)
(z, cz, fz, status, Δ, Δℓ, γ, δ) = tangent_step!(nlp, z, λ, cz, normcz, fz, LDL, vals, g, ℓzλ, gBg, ρ, γ, δ, meta, workspace; kwargs...)

Solve the following problem

\[\begin{aligned} \min_{d} \quad & q(d):=\frac{1}{2} d^T B d + d^T g \\ \text{s.t.} \quad & Ad = 0, \quad \|d\| \leq \Delta, \end{aligned}\]

where B is an approximation of the Hessian of the Lagrangian, A is the Jacobian matrix, and g is the projected gradient.

Output

  • z, cz, and fz: the new iterate, and updated evaluations.
  • status: computation status.
  • Δ: trust-region parameter.
  • Δℓ: differential in Lagrangian computation.
  • γ, δ: updated values for the regularization parameters.

Return status with possible outcomes:

  • :cauchy_step, :newton, :dogleg if successful step.
  • :unknown if we didn't enter the loop.
  • :small_horizontal_step.
  • :tired if we stop due to max_eval or max_time.
  • :success if we computed z such that ‖c(z)‖ ≤ meta.ρbar * ρ and Δℓ ≥ η₁ q(d).

See SolverTools.jl for SolverTools.aredpred.
