torchlinops.linops
Add
Bases: Threadable, NamedLinop
The sum of one or more linear operators.
Inherits from Threadable to support parallel execution of sub-linops.
When threaded=True (default), each sub-linop is executed in parallel
using a ThreadPoolExecutor, which is useful for I/O-bound operations or
operations that release the GIL (e.g., PyTorch tensor operations).
Note that shared linops (e.g., Add(A, A)) are automatically shallow-
copied to ensure independent identity for threading, while still sharing
tensor data. See Threadable for details.
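The parallel-sum behavior can be sketched in plain numpy with a thread pool (a hedged illustration only; the matrices `A` and `B` below are hypothetical stand-ins for sub-linops, not the library's API):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Stand-ins for two sub-linops (hypothetical example matrices)
A = np.diag([1.0, 2.0, 3.0])
B = np.full((3, 3), 0.5)
x = np.array([1.0, -1.0, 2.0])

# Apply each sub-linop in its own worker thread, then sum the partial results
with ThreadPoolExecutor(max_workers=2) as ex:
    partials = list(ex.map(lambda M: M @ x, [A, B]))
y = sum(partials)  # equals (A + B) @ x
```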
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `linops` | The list of linops being added together. |
| `threaded` | Whether to run sub-linops in parallel. Default is `True`. |
| `num_workers` | Number of worker threads. If `None`, defaults to the number of sub-linops. |
Source code in src/torchlinops/linops/add.py
__init__
| PARAMETER | DESCRIPTION |
|---|---|
| `*linops` | The linear operators to be added together. |
Source code in src/torchlinops/linops/add.py
ArrayToBlocks
Bases: NamedLinop
Extract sliding windows from an array.
Adjoint of BlocksToArray.
Source code in src/torchlinops/linops/array_to_blocks.py
__init__
__init__(
grid_size: tuple[int, ...],
block_size: tuple[int, ...],
stride: tuple[int, ...],
mask: Optional[Tensor] = None,
batch_shape: Optional[Shape] = None,
array_shape: Optional[Shape] = None,
blocks_shape: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `grid_size` | Size of the input array spatial dimensions. |
| `block_size` | Size of each extracted block. |
| `stride` | Stride between consecutive blocks. |
| `mask` | Boolean mask selecting a subset of blocks. |
| `batch_shape` | Named shape for batch dimensions. |
| `array_shape` | Named shape for the input array dimensions. |
| `blocks_shape` | Named shape for the output block dimensions. |
Source code in src/torchlinops/linops/array_to_blocks.py
BatchSpec
dataclass
Specification for splitting and distributing a linop across devices.
| PARAMETER | DESCRIPTION |
|---|---|
| `batch_sizes` | Mapping from dimension names to chunk sizes for tiling. |
| `device_matrix` | Array of devices over which the linop is distributed. |
| `base_device` | The device where input/output data resides. Default is CPU. |
Source code in src/torchlinops/linops/split.py
BlocksToArray
Bases: NamedLinop
Compose several equally-sized blocks into a larger array.
Adjoint of ArrayToBlocks.
Source code in src/torchlinops/linops/array_to_blocks.py
__init__
__init__(
grid_size: tuple[int, ...],
block_size: tuple[int, ...],
stride: tuple[int, ...],
mask: Optional[Tensor] = None,
    batch_shape: Optional[Shape] = None,
    blocks_shape: Optional[Shape] = None,
    array_shape: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `grid_size` | Size of the output array spatial dimensions. |
| `block_size` | Size of each block. |
| `stride` | Stride between consecutive blocks. |
| `mask` | Boolean mask selecting a subset of blocks. |
| `batch_shape` | Named shape for batch dimensions. |
| `blocks_shape` | Named shape for the input block dimensions. |
| `array_shape` | Named shape for the output array dimensions. |
Source code in src/torchlinops/linops/array_to_blocks.py
BreakpointLinop
Bases: NamedLinop
Debugging identity operator that drops into pdb on forward/adjoint.
Useful for inspecting intermediate tensor values inside a Chain.
Source code in src/torchlinops/linops/breakpt.py
Chain
Bases: NamedLinop
Composition (sequential application) of named linear operators.
If Chain(A, B, C) is created, then the forward pass applies
\(A\) first, then \(B\), then \(C\): mathematically the operator is \(C B A\).
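The ordering convention can be checked with toy matrices (a sketch; `A`, `B`, `C` below are plain numpy arrays standing in for linops, not the library's `Chain` API):

```python
from functools import reduce

import numpy as np

# Toy matrices standing in for linops A, B, C
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = 2.0 * np.eye(2)
x = np.array([1.0, -1.0])

# Chain(A, B, C): the forward pass applies A first, then B, then C,
# which is matrix multiplication by C B A
y = reduce(lambda v, M: M @ v, [A, B, C], x)
```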
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `linops` | The constituent linops in execution order (inner to outer). |
Source code in src/torchlinops/linops/chain.py
__init__
| PARAMETER | DESCRIPTION |
|---|---|
| `*linops` | Linops in order of execution. |
| `name` | Display name for this chain. |
Source code in src/torchlinops/linops/chain.py
__setattr__
Bypasses PyTorch's `__setattr__`; applies only to linop attributes.
adj_split
staticmethod
Split an adjoint linop into sub-linops.
| PARAMETER | DESCRIPTION |
|---|---|
| `chain` | The chain linop to split. |
| `tile` | Dictionary specifying how to slice the linop dimensions. |
Source code in src/torchlinops/linops/chain.py
normal
Compute the normal operator by folding through the chain.
For a chain \(C B A\), the normal is computed as
\(A^H (B^H (C^H C (B (A \cdot))))\) by iterating linop.normal(inner)
in reverse order. This enables Toeplitz embedding and other per-linop
normal optimizations to compose correctly.
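The folding recursion can be verified with matrices (a sketch; each `L.conj().T @ inner @ L` step plays the role of `linop.normal(inner)` for a dense operator):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
M = C @ B @ A  # the chain as a single matrix

# Fold linop.normal(inner) in reverse order: C first, then B, then A
inner = np.eye(3)
for L in (C, B, A):
    inner = L.conj().T @ inner @ L  # L^H (inner (L .))
```

The folded result equals the normal operator \((CBA)^H (CBA)\) of the whole chain.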
| PARAMETER | DESCRIPTION |
|---|---|
| `inner` | An inner operator seeded from an outer chain. |

| RETURNS | DESCRIPTION |
|---|---|
| `NamedLinop` | The composed normal operator. |
Source code in src/torchlinops/linops/chain.py
split
staticmethod
Split a linop into sub-linops.
| PARAMETER | DESCRIPTION |
|---|---|
| `chain` | The chain linop to split. |
| `tile` | Dictionary specifying how to slice the linop dimensions. |
Source code in src/torchlinops/linops/chain.py
split_forward
Split each constituent linop according to per-linop batch slices.
| PARAMETER | DESCRIPTION |
|---|---|
| `ibatches` | Per-linop input slices. Each element is a list of slices corresponding to the input dimensions of one linop in the chain. |
| `obatches` | Per-linop output slices. Each element is a list of slices corresponding to the output dimensions of one linop in the chain. |

| RETURNS | DESCRIPTION |
|---|---|
| `Chain` | A new chain of the split sub-linops. |
Source code in src/torchlinops/linops/chain.py
Concat
Bases: Threadable, NamedLinop
Concatenate some linops along an existing dimension.
Linops need not output tensors of the same size, but their outputs must have the same number of dimensions.
Stacking type depends on dimensions provided:
Horizontal stacking (stacking along an input dimension)::
A B C
Vertical stacking (stacking along an output dimension)::
A
B
C
Diagonal stacking (stacking along separate input and output dimensions)::
A . .
. B .
. . C
Inherits from Threadable to support parallel execution of sub-linops.
When threaded=True (default), each sub-linop is executed in parallel
using a ThreadPoolExecutor.
Note that shared linops (e.g., Concat(A, A, idim="x")) are automatically
shallow-copied to ensure independent identity for threading, while still
sharing tensor data. See Threadable for details.
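The three stacking patterns correspond to familiar block-matrix constructions, sketched here with plain numpy blocks (hypothetical stand-ins for sub-linops):

```python
import numpy as np

# Toy blocks standing in for sub-linops with matching output dimensionality
A = np.ones((2, 3))
B = 2 * np.ones((2, 4))
C = 3 * np.ones((2, 5))

# Horizontal: concatenate along the input dimension -> [A B C]
H = np.hstack([A, B, C])

# Vertical: concatenate along the output dimension (stack A; B; C)
V = np.vstack([A.T, B.T, C.T])

# Diagonal: separate input and output dimensions, zeros off the block diagonal
D = np.zeros((2 + 2 + 2, 3 + 4 + 5))
r = c = 0
for M in (A, B, C):
    D[r : r + M.shape[0], c : c + M.shape[1]] = M
    r += M.shape[0]
    c += M.shape[1]
```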
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `linops` | The list of linops being concatenated. |
| `threaded` | Whether to run sub-linops in parallel. Default is `True`. |
| `num_workers` | Number of worker threads. If `None`, defaults to the number of sub-linops. |
| `idim` | Input dimension along which to concatenate. |
| `odim` | Output dimension along which to concatenate. |
Source code in src/torchlinops/linops/concat.py
__init__
__init__(
*linops,
idim: Optional[NamedDimension | str] = None,
odim: Optional[NamedDimension | str] = None,
**kwargs,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `*linops` | The linops to concatenate. |
| `idim` | Input dimension along which to concatenate. |
| `odim` | Output dimension along which to concatenate. |
Source code in src/torchlinops/linops/concat.py
spinoff
Helper function for creating a new linop using the provided inputs.
Preserves settings from the original linop.
Source code in src/torchlinops/linops/concat.py
split_forward
Split concat linop, making a new concat linop if necessary
Source code in src/torchlinops/linops/concat.py
subslice
staticmethod
Given a slice over some dims of a concat linop, return a mapping from the linop index to the relevant sub-slice for that linop.
Source code in src/torchlinops/linops/concat.py
Dense
Bases: NamedLinop
Dense matrix-vector multiply.
"Dense" is used to distinguish from "sparse" linear operators. This
operator performs a matrix-vector multiplication, potentially with batch
and broadcast dimensions, implemented via einops.einsum.
The core operation is:
\(y_{o\dots} = \sum_{i\dots} W_{i\dots, o\dots} x_{i\dots}\)
where \(x\) is the input, \(W\) is the weight matrix, and \(y\) is the output. \(i\dots\) and \(o\dots\) represent the input and output dimensions involved in the multiplication. Other dimensions are treated as batch or broadcast dimensions.
Examples:
A simple batched multiplication:
- Input \(x\) shape: \((A, N_x, N_y)\)
- Weight \(W\) shape: \((A, T)\)
- Output \(y\) shape: \((T, N_x, N_y)\)
Here, \(A\) is the input feature dimension, \(T\) is the output feature dimension, and \((N_x, N_y)\) are broadcast dimensions. The operation is:
\(y_{t, n_x, n_y} = \sum_{a} W_{a, t} x_{a, n_x, n_y}\)
Another example with a batch dimension \(C\) shared between input and weights:
- Input \(x\) shape: \((C, A, N_x, N_y)\)
- Weight \(W\) shape: \((C, A, A_1)\)
- Output \(y\) shape: \((C, A_1, N_x, N_y)\)
The operation is:
\(y_{c, a_1, n_x, n_y} = \sum_{a} W_{c, a, a_1} x_{c, a, n_x, n_y}\)
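The first example above can be checked directly with `numpy.einsum` (a sketch of the underlying contraction; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A_, T, Nx, Ny = 4, 5, 6, 7  # hypothetical dimension sizes

x = rng.standard_normal((A_, Nx, Ny))  # input, shape (A, Nx, Ny)
W = rng.standard_normal((A_, T))       # weight, shape (A, T)

# y_{t, nx, ny} = sum_a W_{a, t} x_{a, nx, ny}
# (Nx, Ny) are broadcast dimensions; A is summed over
y = np.einsum("at,axy->txy", W, x)
```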
Source code in src/torchlinops/linops/dense.py
__init__
__init__(
weight: Tensor,
weightshape: Shape,
ishape: Shape,
oshape: Shape,
broadcast_dims: Optional[list] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `weight` | The dense matrix used for this linop. |
| `weightshape` | The shape of the matrix, in symbolic form. |
| `ishape` | The input shape of the matrix. |
| `oshape` | The output shape of the matrix. |
| `broadcast_dims` | A list of the dimensions of `weight` that are intended to be broadcast over the input. As such, they are excluded from splitting. |
Source code in src/torchlinops/linops/dense.py
normal
Compute the normal operator (adjoint times forward).
| PARAMETER | DESCRIPTION |
|---|---|
| `inner` | An optional inner operator to sandwich between the adjoint and forward. If `None`, consolidates two `Dense` operators into a single `Dense`. |

| RETURNS | DESCRIPTION |
|---|---|
| `NamedLinop` | The normal operator. |
Notes
If inner is None, two Dense operators are consolidated into a single Dense. For example,

ishape: [A B X Y], oshape: [C D X Y], wshape: [A B C D]

must become

ishape: [A B X Y], oshape: [A1 B1 X Y], wshape: [A B A1 B1]

where the new weight is obtained as einsum(weight.conj(), weight, 'A1 B1 C D, A B C D -> A B A1 B1').
Similarly,

ishape: [C A], oshape: [C1 A], wshape: [C C1]

must become

ishape: [C A], oshape: [C2 A], wshape: [C C2]

with new weight einsum(weight.conj(), weight, 'C1 C2, C C1 -> C C2').
Source code in src/torchlinops/linops/dense.py
DeviceSpec
dataclass
Lightweight data structure for holding useful CUDA-related objects for multi-GPU computation.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `device` | The device for computation and transfers. |
| `compute_stream` | Stream used for computation on this device. Set automatically by `p2p_setup`. |
| `transfer_stream` | Stream used for data transfers to/from this device. Obtained from a registry to enable stream reuse across transfers. |
| METHOD | DESCRIPTION |
|---|---|
| `p2p_setup` | Configure compute and transfer streams for peer-to-peer transfers. |
| `get_transfer_stream` | Get or create a transfer stream for a source/target device pair. |
Source code in src/torchlinops/linops/device.py
compute_stream
class-attribute
instance-attribute
Stream used for computation.
device
class-attribute
instance-attribute
Device for the streams.
transfer_stream
class-attribute
instance-attribute
Stream used for data transfer.
__post_init__
get_transfer_stream
staticmethod
Return the stream used for device transfers associated with this device.
Streams are cached in a registry to enable reuse. Each source/target device pair gets a dedicated transfer stream.
| PARAMETER | DESCRIPTION |
|---|---|
| `source_device` | The source device for transfers. |
| `target_device` | The target device for transfers. |

| RETURNS | DESCRIPTION |
|---|---|
| `Stream` | A CUDA stream for performing transfers. |
Source code in src/torchlinops/linops/device.py
p2p_setup
Set up compute and transfer streams for peer-to-peer transfers, if not already set.
| PARAMETER | DESCRIPTION |
|---|---|
| `other_device` | The other device involved in the peer-to-peer transfer. |
Source code in src/torchlinops/linops/device.py
Diagonal
Bases: NamedLinop
Elementwise diagonal linear operator \(D(x) = w \odot x\).
The forward operation is pointwise multiplication by a weight tensor w. The adjoint is \(D^H(x) = \bar{w} \odot x\) and the normal is \(D^N(x) = |w|^2 \odot x\).
Because the input and output shapes are identical, Diagonal sets
oshape = ishape and keeps them synchronized.
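The forward/adjoint/normal relationships are elementwise identities, easy to check in numpy (a sketch; `w` and `x` are arbitrary complex vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)

forward = w * x                            # D(x) = w . x (elementwise)
adjoint_of_forward = np.conj(w) * forward  # D^H(D(x))
normal = np.abs(w) ** 2 * x                # |w|^2 . x
```

Applying the adjoint after the forward gives exactly the normal operator.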
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `weight` | The diagonal weight tensor \(w\). |
| `broadcast_dims` | Dimensions along which the weight is broadcast (not stored explicitly). |
Source code in src/torchlinops/linops/diagonal.py
__init__
__init__(
weight: Tensor,
ioshape: Optional[Shape] = None,
broadcast_dims: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `weight` | The diagonal weight tensor. |
| `ioshape` | Named dimensions for input and output (they are the same). |
| `broadcast_dims` | Dimensions along which weight should be broadcast rather than indexed. Useful when the weight has fewer dimensions than the input. |
Source code in src/torchlinops/linops/diagonal.py
from_weight
classmethod
from_weight(
weight: Tensor,
weight_shape: Shape,
ioshape: Shape,
shape_kwargs: Optional[dict] = None,
)
Construct a Diagonal by expanding weight to match ioshape via einops.
| PARAMETER | DESCRIPTION |
|---|---|
| `weight` | The weight tensor in its original (possibly lower-dimensional) shape. |
| `weight_shape` | Named dimensions labeling the axes of `weight`. |
| `ioshape` | Target named dimensions for the expanded weight. |
| `shape_kwargs` | Extra keyword arguments forwarded to the einops expansion. |

| RETURNS | DESCRIPTION |
|---|---|
| `Diagonal` | A new diagonal linop with the expanded weight. |
Source code in src/torchlinops/linops/diagonal.py
FFT
Bases: NamedLinop
\(n\)-dimensional Fast Fourier Transform as a named linear operator.
With norm="ortho" (the default), the FFT is unitary: \(F^H F = I\).
This means the normal operator is the identity and the adjoint is the
inverse FFT.
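Both properties can be checked numerically with numpy's FFT (a sketch of the math, not the library's `FFT` class; `norm="ortho"` makes the transform unitary):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (8, 8)
x = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
y = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

Fx = np.fft.fftn(x, norm="ortho")
# Unitarity: F^H F = I, i.e. the adjoint is the inverse FFT
roundtrip = np.fft.ifftn(Fx, norm="ortho")
# Adjoint identity: <F x, y> == <x, F^H y>
lhs = np.vdot(Fx, y)
rhs = np.vdot(x, np.fft.ifftn(y, norm="ortho"))
```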
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `ndim` | Number of spatial dimensions to transform. |
| `norm` | FFT normalization mode. |
| `centered` | Whether to treat the array center as the origin (sigpy convention). |
Source code in src/torchlinops/linops/fft.py
__init__
__init__(
ndim: int,
batch_shape: Optional[Shape] = None,
grid_shapes: Optional[tuple[Shape, Shape]] = None,
norm: Optional[str] = "ortho",
centered: bool = False,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `ndim` | Number of dimensions to transform (1, 2, or 3). |
| `batch_shape` | Named batch dimensions prepended to the grid dimensions. Defaults to an empty shape. |
| `grid_shapes` | Pair of shapes for the input and output grid dimensions. |
| `norm` | Normalization applied to the FFT. |
| `centered` | If `True`, treats the array center as the origin (sigpy convention). |
Source code in src/torchlinops/linops/fft.py
normal
Return the normal operator \(F^H F\).
With orthonormal normalization, \(F^H F = I\), so this returns an
Identity when no inner operator is provided.
| PARAMETER | DESCRIPTION |
|---|---|
| `inner` | Inner operator for Toeplitz embedding. |

| RETURNS | DESCRIPTION |
|---|---|
| `NamedLinop` | The normal operator. |
Source code in src/torchlinops/linops/fft.py
split_forward
Identity
Bases: NamedLinop
Identity operator \(I(x) = x\).
Returns the input unchanged. The adjoint, normal, and any power of the identity are also the identity.
Source code in src/torchlinops/linops/identity.py
Interpolate
Bases: NamedLinop
Interpolate from a grid to a set of off-grid points.
Input/output pattern::
(batch_shape, grid_shape) -> (batch_shape, locs_batch_shape)
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `locs` | The target interpolation locations. |
| `grid_size` | The expected input grid size. |
| `interp_params` | Dictionary of arguments for the interpolation kernel. |
Source code in src/torchlinops/linops/interp.py
__init__
__init__(
locs: Float[Tensor, "... D"],
grid_size: tuple[int, ...],
batch_shape: Optional[Shape] = None,
locs_batch_shape: Optional[Shape] = None,
grid_shape: Optional[Shape] = None,
width: float = 4.0,
kernel: str = "kaiser_bessel",
norm: int = 1,
pad_mode: str = "circular",
kernel_params: Optional[dict] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `locs` | The target interpolation locations, as a tensor of size `(*locs_batch_size, num_dimensions)`. Uses 'ij' indexing. |
| `grid_size` | The expected input grid size. Should have the same length as the number of dimensions. |
| `batch_shape` | The input/output batch shape. Defaults to `"..."`. |
| `locs_batch_shape` | The shape of the locs. Defaults to `"..."`. |
| `grid_shape` | The shape of the grid. Defaults to `"..."`. |
| `width` | The width of the interpolation kernel. |
| `kernel` | The type of kernel to use. Current options are `"kaiser_bessel"` and `"spline"`. |
| `norm` | The type of norm used to measure distances. Current options are 1 and 2. |
| `pad_mode` | The type of padding to apply. |
Source code in src/torchlinops/linops/interp.py
split_locs
Can only split on locs dimensions
Source code in src/torchlinops/linops/interp.py
ND
dataclass
Fundamental named dimension type used throughout the library.
Each dimension has a name and an optional integer index i for
creating indexed variants (e.g. A1, A2). Two
NamedDimension instances are considered equal when their string
representations match; the index is folded into the representation
rather than compared separately.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The base name of the dimension (e.g. `"A"`). |
| `i` | Optional integer index for indexed variants. |
Source code in src/torchlinops/nameddim/_nameddim.py
__eq__
__hash__
infer
classmethod
Create a NamedDimension by inferring the name and optional index.
If dim is already a NamedDimension it is returned as-is.
A two-character string whose second character is a digit is
interpreted as name=dim[0], i=int(dim[1]). Sequences are
inferred element-wise.
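The two-character parsing rule can be sketched in a few lines (a hypothetical reconstruction of the rule described above, not the library's actual implementation; the default index of `None` is an assumption):

```python
def infer_dim(dim: str):
    # Sketch of the rule: a two-character string whose second character
    # is a digit is split into (name, index); otherwise the whole string
    # is the name and the index is left unset (assumed None here)
    if len(dim) == 2 and dim[1].isdigit():
        return dim[0], int(dim[1])
    return dim, None
```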
| PARAMETER | DESCRIPTION |
|---|---|
| `dim` | A string, `NamedDimension`, or sequence thereof to infer from. |

| RETURNS | DESCRIPTION |
|---|---|
| `NamedDimension` or sequence thereof | The inferred dimension(s). |
Source code in src/torchlinops/nameddim/_nameddim.py
next_unused
Get the next dimension by index that does not occur in `tup`.
NS
Bases: NamedDimCollection
A linop shape with input and output dimensions. Inherit from this to define custom behavior, e.g. splitting ishape and oshape into linked subparts.
Source code in src/torchlinops/nameddim/_namedshape.py
__init__
Construct a NamedShape from input and output dimension names.
| PARAMETER | DESCRIPTION |
|---|---|
| `ishape` | Input dimension names. |
| `oshape` | Output dimension names. |
| `**other_shapes` | Additional named shape sequences stored alongside `ishape` and `oshape` (e.g. auxiliary dimensions for specialised operators). |
Source code in src/torchlinops/nameddim/_namedshape.py
adjoint
Return a new NamedShape with ishape and oshape swapped.
Override this method in subclasses that need custom adjoint behaviour (e.g. swapping auxiliary shapes as well).
| RETURNS | DESCRIPTION |
|---|---|
| `NamedShape` | A new instance with `ishape` and `oshape` swapped. |
Source code in src/torchlinops/nameddim/_namedshape.py
normal
Return the NamedShape for the normal operator (A^H A).
The resulting shape has ishape equal to the original ishape
and oshape derived from ishape with indices incremented to
avoid collisions, representing the domain-to-domain mapping of the
normal equation.
| RETURNS | DESCRIPTION |
|---|---|
| `NamedShape` | A new instance representing the normal operator shape. |
Source code in src/torchlinops/nameddim/_namedshape.py
NUFFT
Bases: Chain
Non-uniform Fast Fourier Transform (type II) as a named linear operator.
Implemented as a Chain of zero-padding, FFT, and interpolation. Supports
forward (image-to-kspace) and adjoint (kspace-to-image) operations.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `ndim` | Number of spatial dimensions. |
| `oversamp` | Oversampling factor for the padded grid. |
| `width` | Interpolation kernel width. |
Source code in src/torchlinops/linops/nufft.py
__init__
__init__(
locs: Float[Tensor, "... D"],
grid_size: tuple[int, ...],
output_shape: Shape,
input_shape: Optional[Shape] = None,
input_kshape: Optional[Shape] = None,
batch_shape: Optional[Shape] = None,
oversamp: float = 1.25,
width: float = 4.0,
mode: Literal[
"interpolate", "sampling"
] = "interpolate",
do_prep_locs: bool = True,
apodize_weights: Optional[Float[Tensor, ...]] = None,
**options,
)
| PARAMETER | DESCRIPTION |
|---|---|
| `locs` | Shape `[... D]` tensor whose last dimension is the spatial dimension. `locs[..., i]` should be in the range `[-N//2, N//2]`, where `N` is `grid_size[i]`, i.e. the grid size associated with that dimension. |
| `grid_size` | The expected spatial dimensions of the input tensor. |
| `output_shape` | |
| `input_shape` | |
| `input_kshape` | |
| `batch_shape` | NUFFT is implemented as a chain of padding, FFT, and interpolation. Named dimensions are set as follows: Pad: `(batch_shape, input_shape) -> (batch_shape, next_unused(input_shape))`; FFT: `(batch_shape, next_unused(input_shape)) -> (batch_shape, input_kshape)`; Interp: `(batch_shape, input_kshape) -> (batch_shape, output_shape)`. |
| `oversamp` | Oversampling factor for the Fourier-domain grid. |
| `width` | Width of the kernel to use for interpolation. |
| `mode` | |
| `do_prep_locs` | Whether to scale, shift, and clamp the locs to be amenable to interpolation. By default (`True`), assumes the locs lie in `[-N/2, N/2]` and scales, shifts, and clamps them to `[0, oversamp*N - 1]`. If `False`, skips this step, which can have memory benefits. |
| `apodize_weights` | Optional precomputed apodization weights. Only relevant for `"interpolate"` mode. Can have memory benefits. |
| `**options` | Additional options. `toeplitz` (`bool`): if `True`, `normal()` performs the Toeplitz embedding calculation. `toeplitz_dtype` (`torch.dtype`): data type for the Toeplitz embedding; usually `torch.complex64`. |
Source code in src/torchlinops/linops/nufft.py
beta
staticmethod
https://sigpy.readthedocs.io/en/latest/_modules/sigpy/fourier.html#nufft
References
Beatty PJ, Nishimura DG, Pauly JM. Rapid gridding reconstruction with a minimal oversampling ratio. IEEE Trans Med Imaging. 2005 Jun;24(6):799-808. doi: 10.1109/TMI.2005.848376. PMID: 15959939.
Source code in src/torchlinops/linops/nufft.py
flatten
Don't combine constituent linops into a chain with other linops. Informs how `split_forward` should behave.
prep_locs
staticmethod
prep_locs(
locs: Shaped[Tensor, "... D"],
grid_size: tuple,
padded_size: tuple,
pad_mode: Literal["zero", "circular"] = "circular",
nufft_mode: Literal[
"interpolate", "sampling"
] = "interpolate",
)
| PARAMETER | DESCRIPTION |
|---|---|
| `locs` | Input tensor representing locations in the grid. The last dimension corresponds to spatial dimensions. Range is `[-N//2, N//2]`. |
| `grid_size` | The original size of the grid before padding. |
| `padded_size` | The size of the grid after padding. |
| `pad_mode` | The type of padding applied. Can be `"zero"` for zero-padding or `"circular"` for circular padding. Default is `"circular"`. |
| `nufft_mode` | The mode of the NUFFT operation. Can be `"interpolate"` for interpolation or `"sampling"` for sampling. Default is `"interpolate"`. |
| RETURNS | DESCRIPTION |
|---|---|
| `Shaped[Tensor, '... D']` | Adjusted locations tensor based on the specified padding and NUFFT modes. Range is `[0, N_pad]`. dtype is floating-point if `nufft_mode` is `"interpolate"`, and integer if `nufft_mode` is `"sampling"`. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If an unrecognized `pad_mode` or `nufft_mode` is given. |
Examples:
>>> _ = torch.manual_seed(0);
>>> locs = torch.rand(1000, 3) * 64 - 32 # [-32, 32]
>>> locs.min()
tensor(-31.9949)
>>> locs.max()
tensor(31.9896)
>>> grid_size = (64, 64, 64)
>>> padded_size = (80, 80, 80) # oversamp = 1.25
>>> locs_scaled_shifted = NUFFT.prep_locs(locs, grid_size, padded_size)
>>> locs_scaled_shifted.min()
tensor(0.0064)
>>> locs_scaled_shifted.max()
tensor(79.9871)
>>> _ = torch.manual_seed(0);
>>> locs = torch.rand(1000, 3) * 64 - 32 # [-32, 32]
>>> locs = torch.round(locs * 1.25) / 1.25
>>> grid_size = (64, 64, 64)
>>> padded_size = (80, 80, 80) # oversamp = 1.25
>>> locs_scaled_shifted = NUFFT.prep_locs(locs, grid_size, padded_size, nufft_mode='sampling')
>>> locs_scaled_shifted.min()
tensor(0)
>>> locs_scaled_shifted.max()
tensor(79)
Notes
- Assumes that the input locs are centered.
- Adjusts the locations by scaling and shifting them according to the grid and padded sizes.
- Applies clamping or remainder operations based on the padding mode and NUFFT mode.
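The doctest values above can be reproduced with plain arithmetic. A minimal sketch of the scale-and-shift step for the circular padding case (names here are illustrative, not the torchlinops internals):

```python
def scale_and_shift(loc, grid_n, padded_n):
    """Map a centered location in [-grid_n/2, grid_n/2] to [0, padded_n).

    Scales by the oversampling ratio padded_n / grid_n, shifts the origin
    to padded_n // 2, then wraps circularly into [0, padded_n).
    """
    return (loc * (padded_n / grid_n) + padded_n // 2) % padded_n

# Reproduce the doctest extremes for grid 64 -> padded 80 (oversamp = 1.25)
print(scale_and_shift(-31.9949, 64, 80))
print(scale_and_shift(31.9896, 64, 80))
```

The circular remainder is what distinguishes pad_mode="circular" from "zero", where out-of-range locations would instead be clamped.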
Source code in src/torchlinops/linops/nufft.py
NamedLinop
Bases: Module
Base class for all named linear operators.
A NamedLinop represents a linear map \(A : X \to Y\) where the input and
output tensor dimensions are identified by name (e.g. ("Nx", "Ny") -> ("Kx", "Ky")).
Subclass this to implement concrete operators. At minimum, override fn
and adj_fn as static methods.
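The fn/adj_fn contract can be illustrated with a self-contained toy class (plain Python, not the actual NamedLinop base class): because both are static methods that take the instance explicitly, the adjoint can be built as a shallow copy with the two functions swapped.

```python
import copy

class ToyLinop:
    """Schematic of the NamedLinop fn/adj_fn pattern (illustrative only)."""

    def __init__(self, weight):
        self.weight = weight  # shared data, e.g. a diagonal weight

    @staticmethod
    def fn(linop, x):
        return [linop.weight * xi for xi in x]

    @staticmethod
    def adj_fn(linop, x):
        # Conjugate the weight so the adjoint is correct for complex data
        return [linop.weight.conjugate() * xi for xi in x]

    def forward(self, x):
        return self.fn(self, x)

    def adjoint(self):
        # Shallow copy shares `weight`; swapping fn/adj_fn flips direction.
        adj = copy.copy(self)
        adj.fn, adj.adj_fn = self.adj_fn, self.fn
        return adj

A = ToyLinop(2 + 1j)
AH = A.adjoint()
print(A.forward([1 + 0j]), AH.forward([1 + 0j]))
```

Because fn and adj_fn are plain functions rather than bound methods, the swap requires no rebinding, which is exactly the rationale given in the `fn` notes below.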
| ATTRIBUTE | DESCRIPTION |
|---|---|
shape |
The named shape of the linop, containing
TYPE:
|
stream |
Optional cuda Stream to run this linop on.
TYPE:
|
start_event |
An event that signals when the linop has started. Useful for synchronizing multiple linops across multiple devices.
TYPE:
|
end_event |
An event that signals when the linop has completed. Useful for synchronizing multiple linops across multiple devices.
TYPE:
|
input_listener |
Pointer to another linop's event attribute. Used to coordinate GPU-to-GPU
transfers in parallel execution contexts. When set to a tuple like
|
Source code in src/torchlinops/linops/namedlinop.py
H
property
Adjoint operator \(A^H\).
By default, creates a new adjoint on each access. Set
torchlinops.config.cache_adjoint_normal = True to enable caching
(deprecated).
N
property
Normal operator \(A^H A\).
Note that the naive normal operator can always be created via A.H @ A.
This function is reserved for custom behavior, as many linops have
optimized normal forms.
By default, creates a new normal on each access. Set
torchlinops.config.cache_adjoint_normal = True to enable caching
(deprecated).
input_listener
property
writable
Pointer to another linop event attribute.
Useful for facilitating gpu-gpu transfers in parallel.
For example, if ToDevice occurs inside a composing linop that allows for parallel execution, e.g.
C = Concat( Chain(ToDevice1, A, ...), Chain(ToDevice2, B, ...), ... )
then we may want to set ToDevice1 and ToDevice2 to both listen for the beginning of C. That way, both device movements can be triggered in parallel.
This attribute is universal so that it can be chained in cases of nesting, e.g. Add( Concat( Chain(ToDevice, ...), ... ), ... ). The innermost ToDevice listens to the Chain, which listens to the Concat, which listens to the Add. This is useful because both Concat and Add can parallelize efficiently across multiple GPUs.
__copy__
Specialized copying for linops.
Notes
- Shares previous data
- Removes references to adjoint and normal
- Creates a new shape object, rather than using the old one
Source code in src/torchlinops/linops/namedlinop.py
__init__
| PARAMETER | DESCRIPTION |
|---|---|
shape
|
The shape of this linop, e.g.
TYPE:
|
name
|
Optional name to display for this linop.
TYPE:
|
Source code in src/torchlinops/linops/namedlinop.py
adj_fn
staticmethod
Compute the adjoint operation \(y = A^H(x)\).
Override this in subclasses to define the linop's adjoint behavior.
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop instance.
TYPE:
|
x
|
Input tensor.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Tensor
|
Result of applying the adjoint \(A^H\) to x. |
Source code in src/torchlinops/linops/namedlinop.py
adj_split
staticmethod
Split the adjoint of this linop for a given tile.
Constructs the adjoint, splits it according to tile, and returns the adjoint of the split.
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop whose adjoint should be split.
TYPE:
|
tile
|
Dictionary mapping dimension names to slices.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The split adjoint sub-linop. |
Source code in src/torchlinops/linops/namedlinop.py
adjoint
Create the adjoint operator \(A^H\).
The default implementation shallow-copies this linop, swaps fn and
adj_fn, and flips the shape. Override this in subclasses that need
special adjoint construction (e.g. conjugating weights).
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The adjoint operator, sharing the same underlying data. |
Source code in src/torchlinops/linops/namedlinop.py
apply
compose
Compose this linop with another linop.
| PARAMETER | DESCRIPTION |
|---|---|
inner
|
The linop to call before this one.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The composition of self and inner. If A = self and B = inner then this returns C = AB. |
Source code in src/torchlinops/linops/namedlinop.py
flatten
fn
staticmethod
Compute the forward operation \(y = A(x)\).
Override this in subclasses to define the linop's forward behavior.
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop instance (passed explicitly because this is a staticmethod).
TYPE:
|
x
|
Input tensor.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Tensor
|
Result of applying the linop to x. |
Notes
Declared as a staticmethod so that adjoint() can swap fn and
adj_fn on a shallow copy without bound-method complications.
Source code in src/torchlinops/linops/namedlinop.py
forward
Apply the forward operation \(y = A(x)\).
If a CUDA stream is assigned, execution is dispatched to that stream.
If a start_event is set, it is recorded before execution begins,
allowing other operators to synchronize on it.
Do not override this method. Instead, override .fn() and .adj_fn().
| PARAMETER | DESCRIPTION |
|---|---|
x
|
Input tensor.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Tensor
|
The result of applying this linop to x. |
Source code in src/torchlinops/linops/namedlinop.py
normal
Create the normal operator \(A^H A\), optionally with an inner operator.
When inner is None (or Identity with the reduce-identity config
enabled), creates a linop whose forward pass calls normal_fn.
When inner is provided, constructs the composition \(A^H \cdot \text{inner} \cdot A\), which is used for Toeplitz embedding and similar optimizations.
| PARAMETER | DESCRIPTION |
|---|---|
inner
|
An optional inner operator for Toeplitz embedding. If
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The normal operator. |
Source code in src/torchlinops/linops/namedlinop.py
normal_fn
staticmethod
Compute the normal operation \(y = A^H A(x)\).
The default implementation composes adj_fn(fn(x)). Override this
in subclasses that have an efficient closed-form normal (e.g.
Diagonal, FFT).
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop instance.
TYPE:
|
x
|
Input tensor.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
Tensor
|
Result of applying \(A^H A\) to x. |
Source code in src/torchlinops/linops/namedlinop.py
size
split
staticmethod
Split a linop into a sub-linop for a given tile.
Translates a tile dictionary into per-dimension slices and delegates
to split_forward.
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop to split.
TYPE:
|
tile
|
Dictionary mapping dimension names to slices.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The sub-linop operating on the specified tile. |
Source code in src/torchlinops/linops/namedlinop.py
split_forward
Split this linop into a sub-linop according to slices over its dimensions.
Override this in subclasses to define how the linop decomposes when tiled
along its named dimensions. For the companion method that handles adjoints,
see adj_split.
| PARAMETER | DESCRIPTION |
|---|---|
ibatch
|
Slices over the input dimensions, one per element of |
obatch
|
Slices over the output dimensions, one per element of |
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
A new linop that operates on the specified slice of the data. |
Source code in src/torchlinops/linops/namedlinop.py
to
Move this linop (and its cached adjoint/normal) to device.
| PARAMETER | DESCRIPTION |
|---|---|
device
|
Target device. |
memory_aware
|
If
TYPE:
|
called_by_adjoint
|
Internal flag to prevent infinite recursion when the adjoint
also calls
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
The linop on the target device. |
Source code in src/torchlinops/linops/namedlinop.py
Pad
Bases: NamedLinop
Zero-pad the last dimensions of the input volume.
Padding is centered.
- TODO: support non-centered padding?
ishape: [B... Nx Ny [Nz]]
oshape: [B... Nx1 Ny1 [Nz1]]
Source code in src/torchlinops/linops/pad_last.py
__init__
__init__(
pad_im_size: tuple[int, ...],
im_size: tuple[int, ...],
in_shape: Optional[Shape] = None,
out_shape: Optional[Shape] = None,
batch_shape: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
pad_im_size
|
Target (padded) size for the last dimensions. |
im_size
|
Original (unpadded) size for the last dimensions. |
in_shape
|
Named shape for the input spatial dimensions.
TYPE:
|
out_shape
|
Named shape for the output spatial dimensions.
TYPE:
|
batch_shape
|
Named shape for batch dimensions.
TYPE:
|
Source code in src/torchlinops/linops/pad_last.py
adj_fn
staticmethod
Crop the last n dimensions of y
Source code in src/torchlinops/linops/pad_last.py
PadDim
Bases: NamedLinop
Zero-padding operator along a specified dimension.
Pads the input with zeros. The adjoint truncates (slices) back to the original size.
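The pad/truncate adjoint pair satisfies the inner-product identity ⟨Pad x, y⟩ = ⟨x, Crop y⟩. A quick numpy check of this identity (illustrative helpers, not the torchlinops implementation):

```python
import numpy as np

def pad_last(x, n_pad):
    """Zero-pad the last axis on the right up to length n_pad."""
    pad = [(0, 0)] * (x.ndim - 1) + [(0, n_pad - x.shape[-1])]
    return np.pad(x, pad)

def crop_last(y, n):
    """Adjoint of pad_last: slice the last axis back to length n."""
    return y[..., :n]

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))
y = rng.standard_normal((2, 5))
lhs = np.vdot(pad_last(x, 5), y)   # <Pad x, y>
rhs = np.vdot(x, crop_last(y, 3))  # <x, Crop y>
print(np.allclose(lhs, rhs))
```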
Source code in src/torchlinops/linops/trunc_pad.py
normal
Diagonal in all dims except the last one
Source code in src/torchlinops/linops/trunc_pad.py
Rearrange
Bases: NamedLinop
Dimension rearrangement via einops.rearrange.
Wraps einops.rearrange as a named linear operator. The adjoint
performs the inverse rearrangement.
Source code in src/torchlinops/linops/einops.py
__init__
__init__(
ipattern,
opattern,
ishape: Shape,
oshape: Shape,
axes_lengths: Optional[Mapping] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
ipattern
|
Input pattern string for
TYPE:
|
opattern
|
Output pattern string for
TYPE:
|
ishape
|
Named input shape specification.
TYPE:
|
oshape
|
Named output shape specification.
TYPE:
|
axes_lengths
|
Mapping from axis names to their sizes, passed as keyword
arguments to
TYPE:
|
Source code in src/torchlinops/linops/einops.py
size
split_forward
TODO: Add compound shapes so splitting through rearrange can work.
Source code in src/torchlinops/linops/einops.py
Repeat
Bases: NamedLinop
Repeat (expand) operator along specified dimensions (adjoint of SumReduce).
Wraps einops.repeat as a named linear operator.
Source code in src/torchlinops/linops/einops.py
__init__
__init__(
n_repeats: Mapping,
ishape: Shape,
oshape: Shape,
broadcast_dims: Optional[list] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
n_repeats
|
Mapping from dimension names to the number of repetitions along each new dimension.
TYPE:
|
ishape
|
Named input shape specification.
TYPE:
|
oshape
|
Named output shape specification. Must have more dimensions
than
TYPE:
|
broadcast_dims
|
Dimensions that are broadcast (size unknown until runtime) rather than having a fixed repeat count.
TYPE:
|
Source code in src/torchlinops/linops/einops.py
split_forward
Repeat fewer times, depending on the size of obatch
Source code in src/torchlinops/linops/einops.py
RepeatedEvent
Manage a FIFO queue of CUDA events for stream synchronization.
Deprecated: This class is deprecated and will be removed in version 0.7.0. The functionality is no longer used internally.
Keeps only the most recent event, dropping old references to free resources. The wrapper itself can be passed directly to wait_event().
Source code in src/torchlinops/utils/_event.py
record
Create a new CUDA event and record it on the given stream. Old events are dropped immediately to free resources.
Source code in src/torchlinops/utils/_event.py
Sampling
Bases: NamedLinop
Sample a tensor at some specified integer locations.
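The adjoint of integer-location sampling is scatter-add: samples that land on the same grid location accumulate rather than overwrite. A numpy sketch with hypothetical helper names:

```python
import numpy as np

def sample(x, idx):
    """Forward: gather x at integer locations. x: (N,), idx: (M,) -> (M,)."""
    return x[idx]

def sample_adjoint(y, idx, n):
    """Adjoint: scatter-add y back onto a length-n grid."""
    out = np.zeros(n, dtype=y.dtype)
    np.add.at(out, idx, y)  # accumulates duplicate indices
    return out

idx = np.array([0, 2, 2, 3])  # location 2 is sampled twice
x = np.arange(5.0)
y = np.ones(4)
print(sample(x, idx))
print(sample_adjoint(y, idx, 5))
# Adjoint identity: <A x, y> == <x, A^H y>
print(np.vdot(sample(x, idx), y) == np.vdot(x, sample_adjoint(y, idx, 5)))
```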
Source code in src/torchlinops/linops/sampling.py
__init__
__init__(
idx: tuple[Integer[Tensor, ...], ...],
input_size: tuple[int, ...],
output_shape: Optional[Shape] = None,
input_shape: Optional[Shape] = None,
batch_shape: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
idx
|
tuple of D tensors, each of shape [M...].
One index for each "sampled" axis of the input tensor.
Use |
input_size
|
Actual shape of the input interpolated tensor, without the batch dimensions. |
output_shape
|
Named dimensions for the output.
TYPE:
|
input_shape
|
Named dimensions for the input.
TYPE:
|
batch_shape
|
Named batch dimensions.
TYPE:
|
Notes
Sampling: (B..., N...) -> (B..., M...)
Source code in src/torchlinops/linops/sampling.py
from_mask
classmethod
from_stacked_idx
classmethod
Alternative constructor for index in [M... D] form
Scalar
Bases: Diagonal
Scalar multiplication operator \(S(x) = \alpha x\).
A special case of Diagonal where the weight is a scalar, making it
trivially splittable (the same scalar applies to every tile).
Source code in src/torchlinops/linops/scalar.py
__init__
| PARAMETER | DESCRIPTION |
|---|---|
weight
|
The scalar multiplier \(\alpha\).
TYPE:
|
ioshape
|
Named dimensions (same for input and output).
TYPE:
|
Source code in src/torchlinops/linops/scalar.py
ShapeSpec
Bases: Identity
Identity operator that renames dimensions.
Functionally identical to Identity but maps from one set of named
dimensions to another, acting as a shape adapter between linops.
Source code in src/torchlinops/linops/identity.py
Stack
Bases: Threadable, NamedLinop
Concatenate some linops along a new dimension.
Linops need not output tensors of the same size, but they should output tensors of the same number of dimensions.
Stacking type depends on dimensions provided:
Horizontal stacking (stacking along an input dimension)::
A B C
Vertical stacking (stacking along an output dimension)::
A
B
C
Diagonal stacking (stacking along separate input and output dimensions)::
A . .
. B .
. . C
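In matrix terms, the three stacking modes correspond to a block row, a block column, and a block diagonal. A numpy illustration with dense matrices standing in for the linops:

```python
import numpy as np

A = np.eye(2)
B = 2 * np.eye(2)

# Horizontal: [A B] maps a stacked input to a single output
horizontal = np.hstack([A, B])
# Vertical: [A; B] maps a single input to a stacked output
vertical = np.vstack([A, B])
# Diagonal: independent input and output blocks
diagonal = np.block([
    [A, np.zeros((2, 2))],
    [np.zeros((2, 2)), B],
])

x = np.ones(2)
print(horizontal @ np.concatenate([x, x]))  # A x + B x
print(vertical @ x)                         # stacked [A x, B x]
```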
Inherits from Threadable to support parallel execution of sub-linops.
When threaded=True (default), each sub-linop is executed in parallel
using a ThreadPoolExecutor.
Note that shared linops (e.g., Stack(A, A, odim_and_idx=("L", 0))) are
automatically shallow-copied to ensure independent identity for threading,
while still sharing tensor data. See Threadable for details.
| ATTRIBUTE | DESCRIPTION |
|---|---|
linops |
The list of linops being stacked.
TYPE:
|
threaded |
Whether to run sub-linops in parallel. Default is True.
TYPE:
|
num_workers |
Number of worker threads. If None, defaults to the number of sub-linops.
TYPE:
|
idim |
Input stacking dimension name.
TYPE:
|
idim_idx |
Index position of the input stacking dimension.
TYPE:
|
odim |
Output stacking dimension name.
TYPE:
|
odim_idx |
Index position of the output stacking dimension.
TYPE:
|
Source code in src/torchlinops/linops/stack.py
__init__
__init__(
*linops: NamedLinop,
idim_and_idx: tuple[
Optional[NamedDimension | str], Optional[int]
] = (None, None),
odim_and_idx: tuple[
Optional[NamedDimension | str], Optional[int]
] = (None, None),
**kwargs,
)
| PARAMETER | DESCRIPTION |
|---|---|
*linops
|
The linops to stack.
TYPE:
|
idim_and_idx
|
Tuple of
TYPE:
|
odim_and_idx
|
Tuple of
TYPE:
|
Source code in src/torchlinops/linops/stack.py
spinoff
Helper function for creating a new linop using the provided inputs.
Preserves settings from the original linop.
| PARAMETER | DESCRIPTION |
|---|---|
linops
|
The linops for the new instance. Defaults to self.linops.
TYPE:
|
shape
|
The shape for the new instance. If None, computed from linops and idim/odim.
TYPE:
|
idim_and_idx
|
Tuple of (dim_name, index) for input stacking dimension. Defaults to (self.idim, self.idim_idx).
TYPE:
|
odim_and_idx
|
Tuple of (dim_name, index) for output stacking dimension. Defaults to (self.odim, self.odim_idx).
TYPE:
|
Source code in src/torchlinops/linops/stack.py
split_data
Split stack linop, making a new stack linop if necessary
| PARAMETER | DESCRIPTION |
|---|---|
data_list
|
List of data for each linop in this stack linop
TYPE:
|
Source code in src/torchlinops/linops/stack.py
split_forward
Split stack linop
Source code in src/torchlinops/linops/stack.py
SumReduce
Bases: NamedLinop
Sum-reduction operator (adjoint of Repeat).
Wraps einops.reduce with 'sum' reduction. Reduces (sums over)
specified dimensions.
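Repeat and SumReduce form an adjoint pair: ⟨Repeat x, y⟩ = ⟨x, SumReduce y⟩. A numpy check of this identity without einops (illustrative axis conventions):

```python
import numpy as np

def repeat(x, r):
    """Forward of Repeat: tile x along a new leading axis of size r."""
    return np.broadcast_to(x, (r,) + x.shape)

def sum_reduce(y):
    """Adjoint of repeat: sum over the leading axis."""
    return y.sum(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
y = rng.standard_normal((3, 4))
lhs = np.vdot(repeat(x, 3), y)
rhs = np.vdot(x, sum_reduce(y))
print(np.allclose(lhs, rhs))
```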
Source code in src/torchlinops/linops/einops.py
__init__
| PARAMETER | DESCRIPTION |
|---|---|
ishape
|
Input shape spec, einops style.
TYPE:
|
oshape
|
Output shape spec, einops style.
TYPE:
|
Source code in src/torchlinops/linops/einops.py
Threadable
Mixin to enable parallel execution of sub-linops using Python threads.
When threaded=True, the linop's fn and adj_fn methods will run
each sub-linop in parallel using a ThreadPoolExecutor. This is useful when
sub-linops are I/O bound or release the GIL (e.g., PyTorch tensor operations).
The mixin manages sub-linops through the linops property, which automatically
creates shallow copies of each linop when assigned. This ensures that shared
linops (e.g., Add(A, A)) have independent identities for threading while
still sharing tensor data.
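The threading strategy amounts to mapping sub-operators over a ThreadPoolExecutor and combining the results. A simplified stdlib sketch of the Add-style (sum-reduce) case; the real Threadable also handles events, streams, and shallow copying:

```python
from concurrent.futures import ThreadPoolExecutor

def threaded_apply_sum(fns, x, num_workers=None):
    """Run each fn(x) on its own thread and sum the results.

    Mirrors the idea behind Threadable: worthwhile when fns are I/O-bound
    or release the GIL (e.g. large tensor ops).
    """
    num_workers = num_workers or len(fns)  # default: one worker per sub-linop
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(lambda f: f(x), fns))
    out = results[0]
    for r in results[1:]:
        out = out + r
    return out

# Two "linops" acting on the same input, summed like Add(A, B)
fns = [lambda x: 2 * x, lambda x: 3 * x]
print(threaded_apply_sum(fns, 10))
```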
| ATTRIBUTE | DESCRIPTION |
|---|---|
linops |
The list of linops to run in parallel. Setting this property triggers automatic shallow copying and input listener setup.
TYPE:
|
threaded |
Whether to run sub-linops in parallel. Default is True.
TYPE:
|
num_workers |
Number of worker threads. If None, defaults to the number of sub-linops.
TYPE:
|
settings |
Dictionary with
TYPE:
|
Source code in src/torchlinops/linops/threadable.py
linops
property
writable
The list of sub-linops managed by this Threadable.
This is a property rather than a direct attribute to intercept assignment and perform automatic housekeeping whenever linops are set. The setter creates shallow copies of each linop (preserving tensor data sharing) and sets up input listeners for event coordination.
| RETURNS | DESCRIPTION |
|---|---|
ModuleList
|
The list of sub-linops. |
settings
property
writable
Get threading settings as a dictionary.
| RETURNS | DESCRIPTION |
|---|---|
dict
|
Dictionary with |
__init__
| PARAMETER | DESCRIPTION |
|---|---|
threaded
|
Whether to run sub-linops in parallel. Default is True.
TYPE:
|
num_workers
|
Number of worker threads. If None, defaults to the number of
sub-linops when
TYPE:
|
linops
|
The list of linops to run in parallel. If assigned via the
TYPE:
|
Source code in src/torchlinops/linops/threadable.py
__setattr__
Set attribute, with special handling for linops.
PyTorch's nn.Module.__setattr__ intercepts attribute assignment and
performs special handling for modules, parameters, and buffers. This
override ensures that linops assignment goes through the property
descriptor rather than being intercepted by PyTorch's logic.
| PARAMETER | DESCRIPTION |
|---|---|
name
|
Attribute name.
TYPE:
|
value
|
Attribute value.
TYPE:
|
Source code in src/torchlinops/linops/threadable.py
threaded_apply
Wrapper around _threaded_apply
Source code in src/torchlinops/linops/threadable.py
threaded_apply_sum_reduce
Wrapper around _threaded_apply_sum_reduce.
Source code in src/torchlinops/linops/threadable.py
ToDevice
Bases: NamedLinop
Transfer tensors between devices as a named linear operator.
The forward operation moves a tensor from idevice to odevice.
The adjoint reverses the direction. The normal \(T^H T\) is the identity
(device round-trip is lossless).
For CUDA-to-CUDA transfers, streams and events are used for asynchronous pipelined execution.
| ATTRIBUTE | DESCRIPTION |
|---|---|
ispec |
Source (input) device specification containing device and stream info.
TYPE:
|
ospec |
Target (output) device specification containing device and stream info.
TYPE:
|
is_gpu2gpu |
True if both source and target devices are CUDA devices.
TYPE:
|
Source code in src/torchlinops/linops/device.py
__init__
__init__(
idevice: DeviceSpec | device | None,
odevice: DeviceSpec | device | None,
ioshape: Optional[Shape] = None,
)
| PARAMETER | DESCRIPTION |
|---|---|
idevice
|
Source (input) device specification.
TYPE:
|
odevice
|
Target (output) device specification.
TYPE:
|
ioshape
|
Named dimensions (same for input and output since this is diagonal).
TYPE:
|
Source code in src/torchlinops/linops/device.py
__repr__
Helps prevent recursion error caused by .H and .N
Source code in src/torchlinops/linops/device.py
Truncate
Bases: NamedLinop
Truncation (slicing) operator along the last dimension.
Extracts a contiguous slice from the input. The adjoint zero-pads back to the original size.
Source code in src/torchlinops/linops/trunc_pad.py
normal
Diagonal in all dims except the last one
Source code in src/torchlinops/linops/trunc_pad.py
Zero
Bases: NamedLinop
Zero operator \(0(x) = 0\).
Always returns a zero tensor with the same shape as the input.
Source code in src/torchlinops/linops/identity.py
Crop
clear_transfer_streams_registry
Clear the transfer streams registry.
This is useful for testing to ensure a clean state between tests. The registry caches CUDA streams to enable reuse across transfers.
Source code in src/torchlinops/linops/device.py
create_batched_linop
create_batched_linop(
linop,
batch_specs: BatchSpec | list[BatchSpec],
default_device: device = None,
_mmap=None,
)
Split and distribute a linop across devices according to batch specs.
Recursively processes a list of BatchSpec objects: the first spec
splits the linop into tiles, optionally places each tile on a target
device, then passes remaining specs to each tile recursively. Tiles are
reassembled via Concat (for partitioned dimensions) or Add (for
reduced dimensions).
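The Concat-vs-Add reassembly rule mirrors block matrix multiplication: tiling an output (partitioned) dimension concatenates tile outputs, while tiling an input (reduced) dimension sums them. A numpy sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x = rng.standard_normal(6)

# Partitioned output dimension: apply row-blocks, then Concat
row_tiles = [A[:2] @ x, A[2:] @ x]
concat = np.concatenate(row_tiles)

# Reduced input dimension: apply column-blocks to input chunks, then Add
col_tiles = [A[:, :3] @ x[:3], A[:, 3:] @ x[3:]]
added = col_tiles[0] + col_tiles[1]

print(np.allclose(concat, A @ x), np.allclose(added, A @ x))
```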
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The operator to split and distribute.
TYPE:
|
batch_specs
|
One or more batch specifications to apply (processed in order). |
_mmap
|
Internal memory map for efficient device transfers. Created automatically on the first call. Probably don't set this manually.
TYPE:
|
default_device
|
The default device to use if no device info is provided in the batch spec.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
NamedLinop
|
A composite linop (tree of |
Source code in src/torchlinops/linops/split.py
default_to
Get the first non-None value, scanning in right-to-left order.
The most "default" value goes first (leftmost); later, more specific values override it.
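A minimal sketch of the described behavior (illustrative reimplementation, not the torchlinops source):

```python
def default_to(*values):
    """Return the first non-None value, scanning right to left.

    Callers list the most "default" value first, so later (more
    specific) values override earlier ones when present.
    """
    for v in reversed(values):
        if v is not None:
            return v
    return None

print(default_to(1, None, 3))
print(default_to(1, 2, None))
```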
Source code in src/torchlinops/utils/_defaults.py
get_nd_shape
Return spatial dimension names for a given image size.
Maps a 1-D, 2-D, or 3-D image size tuple to the corresponding named
dimension tuple (e.g. ('Nx', 'Ny') or ('Kx', 'Ky')).
| PARAMETER | DESCRIPTION |
|---|---|
im_size
|
Image size tuple whose length (1, 2, or 3) determines the spatial dimensionality.
TYPE:
|
kspace
|
If
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
tuple of str
|
Named dimension strings for each spatial axis. |
| RAISES | DESCRIPTION |
|---|---|
ValueError
|
If |
Source code in src/torchlinops/nameddim/_shapes.py
isequal
isequal(
shape1: Sequence,
shape2: Sequence,
return_assignments: bool = False,
) -> bool | tuple[bool, Optional[dict[int, list]]]
Test if two sequences with ellipses are length-compatible and value-compatible.
Implemented with bottom-up dynamic programming.
| PARAMETER | DESCRIPTION |
|---|---|
shape1
|
The sequences of tokens to compare.
TYPE:
|
shape2
|
The sequences of tokens to compare.
TYPE:
|
ELLIPSES
|
The wildcard that can match any number of tokens.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
bool
|
Whether shape1 and shape2 are compatible. |
Examples:
>>> isequal(("A", "B"), ("A", "B"))
True
>>> isequal(("A", "C"), ("A",))
False
>>> isequal(("A", "C"), tuple())
False
>>> isequal(("A", "C"), ("...",))
True
>>> isequal(("A", "C", "..."), ("...",))
True
>>> isequal(("A", "B", "C"), ("A", "...", "C"))
True
>>> isequal(("...", "A", "C", "..."), ("...",))
True
>>> isequal(("...", "A", "C"), ("B", "C"))
False
Wildcards
Think about this one...
Source code in src/torchlinops/nameddim/_matching.py
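The ellipsis matching above behaves like wildcard sequence matching. A simplified bottom-up DP sketch that reproduces the doctests, treating "..." as matching zero or more tokens on the other side (the real implementation also tracks wildcard assignments):

```python
def matches(s1, s2, ellipsis="..."):
    """True if token sequences s1 and s2 are compatible, where `ellipsis`
    matches any number (including zero) of tokens on the other side."""
    n, m = len(s1), len(s2)
    # dp[i][j]: do the suffixes s1[i:] and s2[j:] match?
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[n][m] = True  # two empty suffixes match
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n and j == m:
                continue
            ok = False
            if i < n and s1[i] == ellipsis:
                # "..." eats zero tokens of s2, or one token and stays active
                ok = dp[i + 1][j] or (j < m and dp[i][j + 1])
            if not ok and j < m and s2[j] == ellipsis:
                ok = dp[i][j + 1] or (i < n and dp[i + 1][j])
            if not ok and i < n and j < m and s1[i] == s2[j]:
                ok = dp[i + 1][j + 1]
            dp[i][j] = ok
    return dp[0][0]

print(matches(("A", "B", "C"), ("A", "...", "C")))
print(matches(("...", "A", "C"), ("B", "C")))
```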
split_linop
Split a linop into an nd-array of sub-linops according to batch sizes.
| PARAMETER | DESCRIPTION |
|---|---|
linop
|
The linop to be split.
TYPE:
|
batch_sizes
|
Dictionary mapping dimension names to chunk sizes.
TYPE:
|
| RETURNS | DESCRIPTION |
|---|---|
linops
|
Array of sub-linops with shape determined by the number of tiles per dimension.
TYPE:
|
input_batches
|
Corresponding input slices for each tile.
TYPE:
|
output_batches
|
Corresponding output slices for each tile.
TYPE:
|