Conjugate Gradient
Preconditioned conjugate gradient solver for linear systems.
torchlinops.alg.conjugate_gradients
conjugate_gradients(
    A: Callable,
    y: Tensor,
    x0: Optional[Tensor] = None,
    max_num_iters: int = 20,
    gtol: float = 0.001,
    ltol: float = 1e-05,
    disable_tracking: bool = False,
    tqdm_kwargs: Optional[dict] = None,
) -> Tensor | None
Solve \(Ax = y\) with the conjugate gradient method.
\(A\) must be a Hermitian positive semidefinite operator. The algorithm runs for at most `max_num_iters` iterations, stopping earlier only once both convergence criteria are met: the gradient norm \(\|Ax - y\|\) falls below `gtol`, and the absolute change in loss between successive iterations falls below `ltol`.
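When \(A\) is available as a dense matrix, the Hermitian positive-semidefinite requirement can be checked directly before calling the solver. A minimal sketch, using NumPy for brevity; `is_hermitian_psd` is a hypothetical helper, not part of torchlinops:

```python
import numpy as np

def is_hermitian_psd(M: np.ndarray, tol: float = 1e-10) -> bool:
    """Return True if M is Hermitian and positive semidefinite (up to tol)."""
    # Hermitian: M must equal its conjugate transpose.
    if not np.allclose(M, M.conj().T, atol=tol):
        return False
    # PSD: eigenvalues of a Hermitian matrix are real; none may be negative.
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

M_good = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric, eigenvalues > 0
M_bad = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, not Hermitian
```

For a matrix-free operator this check is impractical; in that case the property usually follows by construction, e.g. \(A = B^H B\) is always Hermitian PSD.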
| PARAMETER | DESCRIPTION |
|---|---|
| `A` | Function implementing the matrix-vector product \(A(x)\). TYPE: `Callable` |
| `y` | Right-hand side of the linear system. TYPE: `Tensor` |
| `x0` | Initial guess. Defaults to the zero vector. TYPE: `Optional[Tensor]` DEFAULT: `None` |
| `max_num_iters` | Maximum number of CG iterations. TYPE: `int` DEFAULT: `20` |
| `gtol` | Convergence tolerance on the gradient norm \(\|Ax - y\|\). TYPE: `float` DEFAULT: `0.001` |
| `ltol` | Convergence tolerance on the absolute change in loss between successive iterations. TYPE: `float` DEFAULT: `1e-05` |
| `disable_tracking` | If `True`, disables progress tracking. TYPE: `bool` DEFAULT: `False` |
| `tqdm_kwargs` | Extra keyword arguments forwarded to the `tqdm` progress bar. TYPE: `Optional[dict]` DEFAULT: `None` |
| RETURNS | DESCRIPTION |
|---|---|
| `Tensor` or `None` | The approximate solution \(x\), or `None`. |
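The documented semantics, including the requirement that both stopping criteria hold, can be illustrated with a minimal NumPy sketch. This is illustrative only, not the torchlinops implementation; the library's exact stopping logic and the condition under which `None` is returned may differ:

```python
import numpy as np

def cg_sketch(A, y, x0=None, max_num_iters=20, gtol=1e-3, ltol=1e-5):
    """Conjugate gradient on A(x) = y; stops only when BOTH criteria hold."""
    x = np.zeros_like(y) if x0 is None else x0.astype(float)
    r = y - A(x)          # residual r = y - Ax (negative gradient of the loss)
    p = r.copy()
    rs = float(r @ r)
    loss_prev = None
    for _ in range(max_num_iters):
        if rs == 0.0:     # exact convergence; avoid division by zero below
            break
        Ap = A(p)
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        # Quadratic loss 0.5 x^T A x - y^T x, minimized exactly when Ax = y.
        loss = 0.5 * float(x @ A(x)) - float(y @ x)
        grad_converged = np.sqrt(rs_new) < gtol        # ||Ax - y|| < gtol
        loss_converged = loss_prev is not None and abs(loss - loss_prev) < ltol
        if grad_converged and loss_converged:
            break
        loss_prev = loss
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage: pass a matvec closure, matching the documented Callable interface.
M = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hermitian positive definite
y = np.array([1.0, 2.0])
x = cg_sketch(lambda v: M @ v, y)
```

Note that only the matvec closure is passed, never the matrix itself, so the same call pattern works for matrix-free operators.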