CGLS

Krylov.cgls (Function)
(x, stats) = cgls(A, b::AbstractVector{FC};
                  M=I, λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  radius::T=zero(T), itmax::Int=0, verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

(AᴴA + λI) x = Aᴴb

but is more stable.

CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate; it is, however, simpler to implement.

The callback is called as callback(solver) and should return true if the main loop should terminate, and false otherwise.
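For illustration, a minimal call might look like the sketch below; the random problem and the value of λ are made up, and the last line merely checks the optimality condition Aᴴ(b - Ax) = λx of the regularized problem.

    using Krylov, SparseArrays, LinearAlgebra

    # Made-up data: a random sparse 100×50 least-squares problem.
    A = sprandn(100, 50, 0.1)
    b = randn(100)
    λ = 1.0e-2

    x, stats = cgls(A, b, λ=λ, history=true)

    # At the regularized solution, Aᴴ(b - Ax) = λx, so this norm should be small.
    norm(A' * (b - A * x) - λ * x)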

Krylov.cgls! (Function)
solver = cgls!(solver::CglsSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgls.

See CglsSolver for more details about the solver.
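A sketch of the in-place variant is shown below; it assumes the CglsSolver(A, b) constructor and the x and stats fields of the workspace, which may differ between versions of Krylov.jl.

    using Krylov, SparseArrays

    A = sprandn(100, 50, 0.1)
    b = randn(100)

    solver = CglsSolver(A, b)   # allocate the workspace once (assumed constructor)
    cgls!(solver, A, b)         # solve in place, reusing the workspace
    x, stats = solver.x, solver.stats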


CRLS

Krylov.crls (Function)
(x, stats) = crls(A, b::AbstractVector{FC};
                  M=I, λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  radius::T=zero(T), itmax::Int=0, verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations

(AᴴA + λI) x = Aᴴb.

This implementation recurs the residual r := b - Ax.

CRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate; it is, however, simpler to implement.

The callback is called as callback(solver) and should return true if the main loop should terminate, and false otherwise.
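As a sketch, the monotonic behaviour of ‖Aᴴr‖₂ can be observed by recording the convergence history; the field name stats.Aresiduals is an assumption and may differ between versions.

    using Krylov, SparseArrays

    A = sprandn(200, 80, 0.05)
    b = randn(200)

    x, stats = crls(A, b, history=true)

    # With history=true, the recorded norms ‖Aᴴrₖ‖₂ are expected to decrease
    # monotonically (the field name Aresiduals is an assumption).
    stats.Aresiduals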

Reference

  • D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.
Krylov.crls! (Function)
solver = crls!(solver::CrlsSolver, A, b; kwargs...)

where kwargs are keyword arguments of crls.

See CrlsSolver for more details about the solver.


LSLQ

Krylov.lslq (Function)
(x, stats) = lslq(A, b::AbstractVector{FC};
                  M=I, N=I, sqd::Bool=false, λ::T=zero(T),
                  atol::T=√eps(T), btol::T=√eps(T), etol::T=√eps(T),
                  window::Int=5, utol::T=√eps(T), itmax::Int=0,
                  σ::T=zero(T), transfer_to_lsqr::Bool=false, 
                  conlim::T=1/√eps(T), verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations

(AᴴA + λ²I) x = Aᴴb

but is more stable.

Main features

  • the solution estimate is updated along orthogonal directions
  • the norm of the solution estimate ‖xᴸₖ‖₂ is increasing
  • the error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing
  • it is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)
  • if A is rank deficient, the method identifies the minimum-norm least-squares solution

Optional arguments

  • M: a symmetric and positive definite dual preconditioner
  • N: a symmetric and positive definite primal preconditioner
  • sqd indicates that we are solving a symmetric and quasi-definite system with λ=1

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

  • λ is a regularization parameter (see the problem statement above)
  • σ is an underestimate of the smallest nonzero singular value of A; setting σ too large will result in an error in the course of the iterations
  • atol is a stopping tolerance based on the residual
  • btol is a stopping tolerance used to detect zero-residual problems
  • etol is a stopping tolerance based on the lower bound on the error
  • window is the number of iterations used to accumulate a lower bound on the error
  • utol is a stopping tolerance based on the upper bound on the error
  • transfer_to_lsqr returns the CG solution estimate (i.e., the LSQR point) instead of the LQ estimate
  • itmax is the maximum number of iterations (0 means no imposed limit)
  • conlim is the limit on the estimated condition number of A beyond which the solution will be abandoned
  • verbose determines verbosity.

Return values

lslq returns the tuple (x, stats) where

  • x is the LQ solution estimate

  • stats collects other statistics on the run in an LSLQStats structure

  • stats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero

  • stats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero

  • stats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero

  • stats.error_with_bnd is a boolean indicating whether there was an error in the computation of the upper bounds (cancellation errors, σ too large, ...)

Stopping conditions

The iterations stop as soon as one of the following conditions holds true:

  • the optimality residual is sufficiently small (stats.status = "found approximate minimum least-squares solution") in the sense that either
    • ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or
    • 1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1
  • an approximate zero-residual solution has been found (stats.status = "found approximate zero-residual solution") in the sense that either
    • ‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or
    • 1 + ‖r‖ / ‖b‖ ≤ 1
  • the estimated condition number of A is too large in the sense that either
    • 1/cond(A) ≤ 1/conlim (stats.status = "condition number exceeds tolerance"), or
    • 1 + 1/cond(A) ≤ 1 (stats.status = "condition number seems too large for this machine")
  • the lower bound on the LQ forward error is less than etol * ‖xᴸ‖
  • the upper bound on the CG forward error is less than utol * ‖xᶜ‖

The callback is called as callback(solver) and should return true if the main loop should terminate, and false otherwise.
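The sketch below illustrates σ, transfer_to_lsqr, and the error-bound vectors on a made-up dense problem; computing σ from svdvals is only for illustration, since in practice an underestimate must come from problem knowledge.

    using Krylov, LinearAlgebra

    A = randn(100, 40)
    b = randn(100)

    # σ must underestimate the smallest nonzero singular value of A.
    # Here we compute it exactly and shrink it, purely for illustration.
    σ = 0.9 * minimum(svdvals(A))

    x, stats = lslq(A, b, σ=σ, transfer_to_lsqr=true, history=true)

    stats.err_ubnds_cg   # upper bounds on the CG (LSQR) error, available since σ > 0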

Krylov.lslq! (Function)
solver = lslq!(solver::LslqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lslq.

See LslqSolver for more details about the solver.


LSQR

Krylov.lsqr (Function)
(x, stats) = lsqr(A, b::AbstractVector{FC};
                  M=I, N=I, sqd::Bool=false, λ::T=zero(T),
                  axtol::T=√eps(T), btol::T=√eps(T),
                  atol::T=zero(T), rtol::T=zero(T),
                  etol::T=√eps(T), window::Int=5,
                  itmax::Int=0, conlim::T=1/√eps(T),
                  radius::T=zero(T), verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CGLS) but is more stable.

LSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.

If λ > 0, LSQR solves the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

The callback is called as callback(solver) and should return true if the main loop should terminate, and false otherwise.
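For example, a weighted least-squares problem minimize ‖b - Ax‖²_W with W = Diagonal(w) corresponds to E⁻¹ = W above, so W is passed as the preconditioner M; the data below are made up.

    using Krylov, LinearAlgebra

    A = randn(100, 30)
    b = randn(100)

    # Row weights: minimize ‖b - Ax‖²_W with W = Diagonal(w), i.e. M = E⁻¹ = W.
    w = rand(100) .+ 0.1
    M = Diagonal(w)

    x, stats = lsqr(A, b, M=M)
    r = M * (b - A * x)   # r = E⁻¹(b - Ax), as described above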

Krylov.lsqr! (Function)
solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsqr.

See LsqrSolver for more details about the solver.


LSMR

Krylov.lsmr (Function)
(x, stats) = lsmr(A, b::AbstractVector{FC};
                  M=I, N=I, sqd::Bool=false, λ::T=zero(T),
                  axtol::T=√eps(T), btol::T=√eps(T),
                  atol::T=zero(T), rtol::T=zero(T),
                  etol::T=√eps(T), window::Int=5,
                  itmax::Int=0, conlim::T=1/√eps(T),
                  radius::T=zero(T), verbose::Int=0,
                  history::Bool=false, ldiv::Bool=false,
                  callback=solver->false)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CRLS) but is more stable.

LSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.

LSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.
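A small sketch of this use, on a made-up rank-one matrix:

    using Krylov, LinearAlgebra

    # A rank-one 2×3 matrix; its null space has dimension 2.
    A = [1.0 2.0 3.0;
         2.0 4.0 6.0]
    b = ones(3)

    x, stats = lsmr(A', b)   # minimize ‖Aᴴx - b‖ by passing the adjoint of A
    r = b - A' * x           # at the minimizer, r is a null vector of A
    norm(A * r)              # ≈ 0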

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

The callback is called as callback(solver) and should return true if the main loop should terminate, and false otherwise.
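Custom stopping rules can be implemented through the callback; the sketch below assumes only that the callback is invoked once per iteration, and the stopping rule itself is made up.

    using Krylov

    A = randn(100, 30)
    b = randn(100)

    # Stop after at most 20 callback invocations.
    counter = Ref(0)
    stop_early(solver) = (counter[] += 1) ≥ 20

    x, stats = lsmr(A, b, callback=stop_early)
    stats.status   # should indicate that the user requested the exit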

Krylov.lsmr! (Function)
solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsmr.

See LsmrSolver for more details about the solver.
