DCISolver - Dynamic Control of Infeasibility Solver

DCI is a solver for equality-constrained nonlinear problems, i.e., optimization problems of the form

\[ \min_x \ f(x) \quad \text{s.t.} \quad c(x) = 0,\]

based on the paper

Bielschowsky, R. H., & Gomes, F. A. (2008). Dynamic control of infeasibility in equality constrained optimization. SIAM Journal on Optimization, 19(3), 1299-1325. DOI: 10.1137/070679557

DCISolver is a JuliaSmoothOptimizers-compliant solver. It takes an AbstractNLPModel as an input and returns a GenericExecutionStats.

We refer to juliasmoothoptimizers.github.io for tutorials on the NLPModels API. This framework allows the use of models from AMPL (using AmplNLReader.jl), CUTEst (using CUTEst.jl), JuMP (using NLPModelsJuMP.jl), PDE-constrained optimization problems (using PDENLPModels.jl), and models defined with automatic differentiation (using ADNLPModels.jl).
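
For instance, assuming CUTEst.jl is installed and configured, a problem from the CUTEst collection can be passed directly to dci. This is a minimal sketch; HS6 is just one example of an equality-constrained test problem.

using CUTEst, DCISolver
nlp = CUTEstModel("HS6")  # equality-constrained Hock-Schittkowski problem
stats = dci(nlp)
finalize(nlp)             # release the CUTEst problem once done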

Installation

DCISolver is a registered package. To install this package, open the Julia REPL (i.e., execute the julia binary), type ] to enter package mode, and install DCISolver as follows

add DCISolver
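
Equivalently, without entering package mode:

using Pkg
Pkg.add("DCISolver")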

The DCI algorithm is an iterative method that has the flavor of a projected gradient algorithm and can be characterized as a relaxed feasible point method with dynamic control of infeasibility. It combines two steps: a tangent step and a feasibility step. By default, it uses LDLFactorizations.jl to compute the factorization in the tangent step; follow HSL.jl's MA57 installation instructions to use MA57 instead. The feasibility steps are factorization-free and use iterative methods from Krylov.jl.
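
As a rough illustration of how these choices are typically exposed, here is a sketch of calling dci with keyword arguments. The keyword names below (atol, rtol, ctol, max_time, linear_solver) follow common JuliaSmoothOptimizers conventions and are assumptions to check against the documentation, not a guaranteed API.

using DCISolver, ADNLPModels
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + (x[2] - 2)^2,
  [0.0; 0.0],
  x -> [x[1] + x[2] - 1],
  [0.0], [0.0],
)
# Assumed keywords: tighten the stopping tolerances and cap the run time.
# Passing linear_solver = :ma57 would require HSL.jl with MA57 installed;
# the default factorization comes from LDLFactorizations.jl.
stats = dci(nlp, atol = 1e-7, rtol = 1e-7, ctol = 1e-7, max_time = 60.0)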

Example

In this example, we consider the minimization of the Rosenbrock function subject to an equality constraint.

\[ \min_x \ 100 (x_2 - x_1^2)^2 + (x_1 - 1)^2 \quad \text{s.t.} \quad x_1 x_2 = 1. \]

The problem is modeled using ADNLPModels.jl with [-1.2; 1.0] as the initial point, and then solved using dci.

using DCISolver, ADNLPModels, Logging
nlp = ADNLPModel(
  x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2,
  [-1.2; 1.0],
  x -> [x[1] * x[2] - 1],
  [0.0], [0.0],
  name = "Rosenbrock with x₁x₂=1"
)
stats = with_logger(NullLogger()) do
  dci(nlp)
end

println(stats)
Generic Execution stats
  status: first-order stationary
  objective value: 2.9976244322221733e-12
  primal feasibility: 3.4316988473115373e-7
  dual feasibility: 7.333598945847062e-5
  solution: [1.0000001718172222  1.000000171352633]
  multipliers: [-1.7400004246348217e-5]
  iterations: 12
  elapsed time: 0.712007999420166
  solver specific:
    lagrangian: -2.97353301931873e-12
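
The returned GenericExecutionStats can also be queried programmatically. A small sketch, assuming the standard SolverCore field names:

println(stats.status)        # termination status, e.g. :first_order
println(stats.objective)     # objective value at the final iterate
println(stats.solution)      # primal solution
println(stats.multipliers)   # Lagrange multipliers
println(stats.iter)          # number of iterations
println(stats.elapsed_time)  # wall-clock time in seconds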

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you have a question that is not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.