## CGNE

`Krylov.cgne` — Function

```
(x, stats) = cgne(A, b::AbstractVector{FC};
                  N=I, λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)
```

`T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`. `FC` is `T` or `Complex{T}`.

Solve the consistent linear system

`Ax + √λs = b`

using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

`(AAᴴ + λI) y = b`

but is more stable. When λ = 0, this method solves the minimum-norm problem

`min ‖x‖₂ s.t. Ax = b.`

When λ > 0, it solves the problem

`min ‖(x,s)‖₂ s.t. Ax + √λs = b.`

CGNE produces monotonic errors ‖x - x*‖₂, but not necessarily monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.
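As a quick illustration of the λ = 0 case, the sketch below builds a small consistent underdetermined system and asks `cgne` for its minimum-norm solution; the matrix and right-hand side are invented for the example:

```julia
using Krylov, LinearAlgebra

# Underdetermined consistent system: 3 equations, 5 unknowns.
A = [1.0 0.0 2.0 0.0 1.0;
     0.0 1.0 0.0 3.0 0.0;
     1.0 1.0 1.0 1.0 1.0]
b = A * ones(5)               # consistent by construction

# With λ = 0, x approximates the solution of min ‖x‖₂ s.t. Ax = b.
x, stats = cgne(A, b)
```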

A preconditioner N may be provided in the form of a linear operator.

The callback is called as `callback(solver)` and should return `true` if the main loop should terminate, and `false` otherwise.
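For example, a callback can encode a custom stopping rule. The sketch below stops after a fixed number of iterations; the data is invented, and the field `solver.stats.niter` used here is an assumption about the workspace internals:

```julia
using Krylov

# Hypothetical early-stopping rule based on the iteration count.
stop_after(n) = solver -> solver.stats.niter ≥ n

A = rand(10, 20)
b = A * ones(20)
x, stats = cgne(A, b, callback = stop_after(5))
```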

**References**

- J. E. Craig, *The N-step iteration procedures*, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.
- J. E. Craig, *Iteration Procedures for Simultaneous Equations*, Ph.D. Thesis, Department of Electrical Engineering, MIT, 1954.

`Krylov.cgne!` — Function

`solver = cgne!(solver::CgneSolver, A, b; kwargs...)`

where `kwargs` are keyword arguments of `cgne`.

See `CgneSolver` for more details about the `solver`.

## CRMR

`Krylov.crmr` — Function

```
(x, stats) = crmr(A, b::AbstractVector{FC};
                  N=I, λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, verbose::Int=0, history::Bool=false,
                  ldiv::Bool=false, callback=solver->false)
```

`T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`. `FC` is `T` or `Complex{T}`.

Solve the consistent linear system

`Ax + √λs = b`

using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

`(AAᴴ + λI) y = b`

but is more stable. When λ = 0, this method solves the minimum-norm problem

`min ‖x‖₂ s.t. x ∈ argmin ‖Ax - b‖₂.`

When λ > 0, this method solves the problem

`min ‖(x,s)‖₂ s.t. Ax + √λs = b.`

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.
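As an illustration, the sketch below applies `crmr` to a rank-deficient but consistent system invented for the example; with λ = 0 it returns the minimum-norm solution:

```julia
using Krylov, LinearAlgebra

# Rank-one consistent system.
A = [1.0 2.0 3.0;
     2.0 4.0 6.0]
b = [6.0, 12.0]               # lies in the range of A

x, stats = crmr(A, b)
```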

A preconditioner N may be provided.

The callback is called as `callback(solver)` and should return `true` if the main loop should terminate, and `false` otherwise.

**References**

- D. Orban and M. Arioli, *Iterative Solution of Symmetric Quasi-Definite Linear Systems*, Volume 3 of Spotlights, SIAM, Philadelphia, PA, 2017.
- D. Orban, *The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems*, Cahier du GERAD G-2014-15, 2014.

`Krylov.crmr!` — Function

`solver = crmr!(solver::CrmrSolver, A, b; kwargs...)`

where `kwargs` are keyword arguments of `crmr`.

See `CrmrSolver` for more details about the `solver`.

## LNLQ

`Krylov.lnlq` — Function

```
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
                     M=I, N=I, sqd::Bool=false, λ::T=zero(T), σ::T=zero(T),
                     atol::T=√eps(T), rtol::T=√eps(T), etolx::T=√eps(T), etoly::T=√eps(T),
                     itmax::Int=0, transfer_to_craig::Bool=true, verbose::Int=0,
                     history::Bool=false, ldiv::Bool=false, callback=solver->false)
```

`T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`. `FC` is `T` or `Complex{T}`.

Find the least-norm solution of the consistent linear system

`Ax + λ²y = b`

using the LNLQ method, where λ ≥ 0 is a regularization parameter.

For a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy, but is more stable. Note that y contains the Lagrange multipliers of the least-norm problem

`minimize ‖x‖ s.t. Ax = b.`

If `λ > 0`, LNLQ solves the symmetric and quasi-definite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A  λ²E ] [ y ] = [ b ],
```

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If `sqd=true`, `λ` is set to the common value `1`.

The system above represents the optimality conditions of

`min ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.`

For a symmetric and positive definite matrix `K`, the K-norm of a vector `x` is `‖x‖²_K = xᴴKx`. LNLQ is then equivalent to applying SYMMLQ to `(AF⁻¹Aᴴ + λ²E)y = b` with `Fx = Aᴴy`.

If `λ = 0`, LNLQ solves the symmetric and indefinite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].
```

The system above represents the optimality conditions of

`minimize ‖x‖²_F s.t. Ax = b.`

In this case, `M` can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.
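The sketch below, with invented data, shows both return values as well as the `sqd` shortcut:

```julia
using Krylov, LinearAlgebra

A = rand(5, 8)
b = A * ones(8)

# λ = 0: least-norm solution x and its Lagrange multipliers y.
x, y, stats = lnlq(A, b)

# sqd = true is a shortcut for the quasi-definite system with λ = 1.
x_sqd, y_sqd, stats_sqd = lnlq(A, b, sqd = true)
```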

`etolx` and `etoly` are tolerances on the upper bounds of the distances to the solution, ‖x - xₛ‖ and ‖y - yₛ‖ respectively. These bounds are valid if λ > 0 or σ > 0, where σ should be strictly smaller than the smallest positive singular value; for instance, σ := (1 - 1e-7)σₘᵢₙ.

The callback is called as `callback(solver)` and should return `true` if the main loop should terminate, and `false` otherwise.

**Reference**

- R. Estrin, D. Orban, M.A. Saunders,
*LNLQ: An Iterative Method for Least-Norm Problems with an Error Minimization Property*, SIAM Journal on Matrix Analysis and Applications, 40(3), pp. 1102–1124, 2019.

`Krylov.lnlq!` — Function

`solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)`

where `kwargs` are keyword arguments of `lnlq`.

See `LnlqSolver` for more details about the `solver`.

## CRAIG

`Krylov.craig` — Function

```
(x, y, stats) = craig(A, b::AbstractVector{FC};
                      M=I, N=I, sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),
                      btol::T=√eps(T), rtol::T=√eps(T), conlim::T=1/√eps(T), itmax::Int=0,
                      verbose::Int=0, transfer_to_lsqr::Bool=false, history::Bool=false,
                      ldiv::Bool=false, callback=solver->false)
```

`T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`. `FC` is `T` or `Complex{T}`.

Find the least-norm solution of the consistent linear system

`Ax + λ²y = b`

using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.

For a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y contains the Lagrange multipliers of the least-norm problem

`minimize ‖x‖ s.t. Ax = b.`

If `λ > 0`, CRAIG solves the symmetric and quasi-definite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A  λ²E ] [ y ] = [ b ],
```

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If `sqd=true`, `λ` is set to the common value `1`.

The system above represents the optimality conditions of

`min ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.`

For a symmetric and positive definite matrix `K`, the K-norm of a vector `x` is `‖x‖²_K = xᴴKx`. CRAIG is then equivalent to applying CG to `(AF⁻¹Aᴴ + λ²E)y = b` with `Fx = Aᴴy`.

If `λ = 0`, CRAIG solves the symmetric and indefinite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].
```

The system above represents the optimality conditions of

`minimize ‖x‖²_F s.t. Ax = b.`

In this case, `M` can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.
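A minimal sketch with invented data:

```julia
using Krylov, LinearAlgebra

A = rand(4, 9)
b = A * ones(9)

# Least-norm solution of Ax = b and its Lagrange multipliers y.
x, y, stats = craig(A, b)
```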

The callback is called as `callback(solver)` and should return `true` if the main loop should terminate, and `false` otherwise.

**References**

- C. C. Paige and M. A. Saunders, *LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares*, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.
- M. A. Saunders, *Solutions of Sparse Rectangular Systems Using LSQR and CRAIG*, BIT Numerical Mathematics, 35(4), pp. 588–604, 1995.

`Krylov.craig!` — Function

`solver = craig!(solver::CraigSolver, A, b; kwargs...)`

where `kwargs` are keyword arguments of `craig`.

See `CraigSolver` for more details about the `solver`.

## CRAIGMR

`Krylov.craigmr` — Function

```
(x, y, stats) = craigmr(A, b::AbstractVector{FC};
                        M=I, N=I, sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),
                        rtol::T=√eps(T), itmax::Int=0, verbose::Int=0, history::Bool=false,
                        ldiv::Bool=false, callback=solver->false)
```

`T` is an `AbstractFloat` such as `Float32`, `Float64` or `BigFloat`. `FC` is `T` or `Complex{T}`.

Solve the consistent linear system

`Ax + λ²y = b`

using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residual (CR) method to the normal equations of the second kind

`(AAᴴ + λ²I) y = b`

but is more stable. When λ = 0, this method solves the minimum-norm problem

`min ‖x‖ s.t. x ∈ argmin ‖Ax - b‖.`

If `λ > 0`, CRAIGMR solves the symmetric and quasi-definite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A  λ²E ] [ y ] = [ b ],
```

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If `sqd=true`, `λ` is set to the common value `1`.

The system above represents the optimality conditions of

`min ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.`

For a symmetric and positive definite matrix `K`, the K-norm of a vector `x` is `‖x‖²_K = xᴴKx`. CRAIGMR is then equivalent to applying MINRES to `(AF⁻¹Aᴴ + λ²E)y = b` with `Fx = Aᴴy`.

If `λ = 0`, CRAIGMR solves the symmetric and indefinite system

```
[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].
```

The system above represents the optimality conditions of

`min ‖x‖²_F s.t. Ax = b.`

In this case, `M` can still be specified and indicates the weighted norm in which residuals are measured.

CRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR, though it can be slightly more accurate, at the cost of a more intricate implementation. Both the x- and y-parts of the solution are returned.
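A minimal sketch with invented data:

```julia
using Krylov, LinearAlgebra

A = rand(6, 10)
b = rand(6)

# With λ = 0, x approximates the solution of
# min ‖x‖ s.t. x ∈ argmin ‖Ax - b‖.
x, y, stats = craigmr(A, b)
```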

The callback is called as `callback(solver)` and should return `true` if the main loop should terminate, and `false` otherwise.

**References**

- D. Orban and M. Arioli, *Iterative Solution of Symmetric Quasi-Definite Linear Systems*, Volume 3 of Spotlights, SIAM, Philadelphia, PA, 2017.
- D. Orban, *The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems*, Cahier du GERAD G-2014-15, 2014.

`Krylov.craigmr!` — Function

`solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)`

where `kwargs` are keyword arguments of `craigmr`.

See `CraigmrSolver` for more details about the `solver`.