Torch Named Linops
Welcome to the documentation for torch-named-linops, a linear operator
abstraction designed for matrix-free, large-scale numerical computing,
optimization, and machine learning.
Quickstart
Installation
Create and Apply a Linear Operator
# Import torch and the core linear operator classes
import torch
from torchlinops import Dense, Dim # Dense: matrix multiplication, Dim: named dimensions
# Define matrix dimensions
M, N = 3, 7 # M: output size, N: input size
# Create a random weight matrix for our linear operation
weight = torch.randn(M, N)
# Create a Dense linear operator with named dimensions
# Dense performs matrix-vector multiplication using the weight matrix
# weightshape=Dim("MN") names the weight matrix dimensions (M rows, N columns),
# ishape=Dim("N") sets the expected input shape,
# oshape=Dim("M") sets the expected output shape.
A = Dense(weight, weightshape=Dim("MN"), ishape=Dim("N"), oshape=Dim("M"))
# Create input data and apply the operator
x = torch.randn(N) # Random input vector of size N
y = A(x) # Apply the linear operator (equivalent to A @ x)
print(f"Input shape: {x.shape}, Output shape: {y.shape}")
Expected output:
Input shape: torch.Size([7]), Output shape: torch.Size([3])
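The Dense example above stores an explicit weight matrix, but the point of a linear operator abstraction is that only the operator's action is ever needed. The following pure-Python sketch illustrates the matrix-free idea, independent of torchlinops; the names (MatrixFreeOp, forward, adjoint) are illustrative, not the library's API.

```python
# Minimal matrix-free linear operator: only the *action* x -> A x is
# stored, never a dense matrix. Illustrative sketch, not the torchlinops API.

class MatrixFreeOp:
    def __init__(self, forward, adjoint):
        self._forward = forward   # x -> A x
        self._adjoint = adjoint   # y -> A^H y

    def __call__(self, x):
        return self._forward(x)

    @property
    def H(self):
        # The adjoint operator simply swaps the two actions.
        return MatrixFreeOp(self._adjoint, self._forward)

# Example: first-differences operator, (Dx)[i] = x[i+1] - x[i].
# Its adjoint is the transpose of the (n-1) x n difference matrix,
# computed here by differencing the zero-padded input.
def diff(x):
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def diff_adjoint(y):
    padded = [0.0] + list(y) + [0.0]
    return [padded[i] - padded[i + 1] for i in range(len(padded) - 1)]

D = MatrixFreeOp(diff, diff_adjoint)
x = [1.0, 4.0, 9.0, 16.0]
print(D(x))       # -> [3.0, 5.0, 7.0]
print(D.H(D(x)))  # normal-operator action A^H A x, still matrix-free
```

No matrix is ever materialized, so the same pattern scales to operators (FFTs, NUFFTs, block extractors) whose dense form would be prohibitively large.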
Features
- Named dimensions -- A dedicated abstraction (Dim, NamedDimension, NamedShape) for naming linear operator dimensions, eliminating shape ambiguity.
- Automatic adjoints and normals -- .H and .N properties create the adjoint (\(A^H\)) and normal (\(A^H A\)) operators with correct dimension handling.
- Operator composition -- Compose operators with @ (chaining) and + (addition). Chain, Add, Concat, and Stack handle dimension matching automatically.
- Core operator library -- Dense, Diagonal, FFT, NUFFT, Interpolate, ArrayToBlocks, Sampling, and more, all with named dimensions.
- Multi-GPU splitting -- Split a single operator across multiple GPUs with split_linop and create_batched_linop.
- Complex number support -- Full support for complex tensors; the adjoint takes the conjugate transpose.
- Autograd integration -- Full support for autograd-based automatic differentiation through all operators.
- Iterative solvers -- Built-in conjugate_gradients, power_method, and polynomial_preconditioner that work directly with named linops.
Other Interesting Packages
If you like this package, you may find these other packages interesting as well. Check them out!
- SigPy
- MIRTorch
- SCICO
- matmri
- einops
- torch_linops
- PyLops
- linear_operator