Matrix calculus is the notation and rule system used to differentiate functions whose inputs, outputs, or intermediate values are vectors, matrices, or tensors. Automatic...
| Section | Title |
|---|---|
| 1 | Chapter 10. Matrix and Tensor Differentiation |
| 2 | Tensor Operations |
| 3 | Broadcasting Semantics |
| 4 | Linear Algebra Primitives |
| 5 | Differentiating Factorizations |
| 6 | Eigenvalue Problems |
| 7 | Singular Value Decomposition |
| 8 | Sparse Tensor Derivatives |
| 9 | GPU Tensor Kernels |
# Chapter 10. Matrix and Tensor Differentiation
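As a concrete warm-up, here is a minimal sketch (not from the chapter text) of one standard matrix-calculus identity, the gradient of the trace of a product, d/dX tr(AX) = Aᵀ, checked against central finite differences with NumPy:

```python
import numpy as np

# Illustrative identity (assumed standard background, not the chapter's code):
# for f(X) = tr(A X), the gradient with respect to X is A^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

def f(X):
    return np.trace(A @ X)

grad_analytic = A.T

# Central finite differences over every entry of X.
eps = 1e-6
grad_numeric = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad_numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(grad_analytic, grad_numeric, atol=1e-6)
```

Finite-difference checks like this are a common way to validate hand-derived matrix gradients before trusting them inside a larger program.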
## Tensor Operations

Tensor operations generalize scalar, vector, and matrix operations to arrays of arbitrary rank. In automatic differentiation, a tensor is usually treated as a typed array...
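A small hedged sketch of what "generalizing to arbitrary rank" looks like in practice: `np.einsum` expresses contractions between tensors of any rank with a single index notation (the shapes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3, 4))   # a rank-3 tensor
M = rng.standard_normal((4, 5))      # a matrix

# Contract the last axis of T with the first axis of M.
out = np.einsum('ijk,kl->ijl', T, M)
assert out.shape == (2, 3, 5)

# For this pattern, einsum coincides with a batched matrix product.
assert np.allclose(out, T @ M)
```

The same index notation covers traces, transposes, outer products, and batched contractions, which is why einsum-style operations are a natural primitive for rank-general autodiff.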
## Broadcasting Semantics

Broadcasting is the rule system that allows tensor operations between arrays of different shapes without explicitly materializing expanded copies. It is one of the most...
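A minimal NumPy sketch of the rule (illustrative values, not from the chapter): axes of size 1 are stretched virtually to match, so no expanded copy is ever materialized:

```python
import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, 4)   # shape (1, 4)

# The size-1 axes broadcast against each other to produce shape (3, 4).
grid = col + row
assert grid.shape == (3, 4)
assert grid[2, 3] == 5   # 2 + 3

# Differentiation note: the backward pass of a broadcast is a sum-reduction
# over the broadcast axes, since one stored value fed many outputs.
```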
## Linear Algebra Primitives

Linear algebra primitives are tensor operations with algebraic structure: matrix multiplication, triangular solves, factorizations, inverses, determinants, norms, and spectral...
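A brief sketch of two such primitives in NumPy (the matrix here is an assumed example, constructed to be symmetric positive definite): solving a linear system without forming an explicit inverse, and computing a log-determinant stably:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = A @ A.T + 4 * np.eye(4)        # symmetric positive definite by construction
b = rng.standard_normal(4)

x = np.linalg.solve(A, b)          # preferred over inv(A) @ b for stability
assert np.allclose(A @ x, b)

sign, logdet = np.linalg.slogdet(A)   # log-determinant without overflow
assert sign == 1.0

# For SPD matrices, the same quantity via the Cholesky factor L (A = L L^T):
L = np.linalg.cholesky(A)
assert np.allclose(logdet, 2 * np.log(np.diag(L)).sum())
```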
## Differentiating Factorizations

Matrix factorizations rewrite a matrix into structured factors. They are used because the factors make later computations cheaper, more stable, or easier to interpret. In...
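As a hedged example of differentiating through a factorization-backed primitive: `np.linalg.slogdet` is computed from an LU factorization internally, yet its gradient has the closed form d/dX log det X = X⁻ᵀ, which we can verify numerically (the test matrix is an assumed SPD example):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
X = B @ B.T + 3 * np.eye(3)        # SPD, so det(X) > 0

def logdet(X):
    sign, ld = np.linalg.slogdet(X)   # factorization-based log-determinant
    return ld

grad_analytic = np.linalg.inv(X).T    # identity: d/dX log det X = X^{-T}

eps = 1e-6
grad_numeric = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad_numeric[i, j] = (logdet(X + E) - logdet(X - E)) / (2 * eps)

assert np.allclose(grad_analytic, grad_numeric, atol=1e-5)
```

The pattern generalizes: rather than differentiating the factorization algorithm line by line, one derives a closed-form rule expressed through the factors and checks it against finite differences.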
## Eigenvalue Problems

Eigenvalue problems are fundamental in numerical analysis, optimization, physics, graph methods, control theory, and machine learning. They are also among the most subtle...
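One place the subtlety shows up is in derivatives. A hedged sketch of standard first-order perturbation theory (with an assumed random symmetric matrix): for a simple eigenvalue λ of a symmetric A with unit eigenvector v, the directional derivative along a symmetric perturbation S is vᵀSv:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                    # symmetric test matrix
S = rng.standard_normal((5, 5))
S = (S + S.T) / 2                    # symmetric perturbation direction

w, V = np.linalg.eigh(A)             # eigenvalues in ascending order
lam, v = w[-1], V[:, -1]             # largest eigenvalue and its eigenvector

deriv_analytic = v @ S @ v           # perturbation theory: d(lambda) = v^T S v

eps = 1e-6
lam_plus = np.linalg.eigvalsh(A + eps * S)[-1]
lam_minus = np.linalg.eigvalsh(A - eps * S)[-1]
deriv_numeric = (lam_plus - lam_minus) / (2 * eps)

assert np.isclose(deriv_analytic, deriv_numeric, atol=1e-5)
```

Note the "simple eigenvalue" caveat: when eigenvalues are repeated or nearly repeated, this formula and the eigenvectors themselves become ill-conditioned, which is one source of the subtlety mentioned above.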
## Singular Value Decomposition

The singular value decomposition (SVD) is one of the most important matrix factorizations in numerical linear algebra. It appears in dimensionality reduction, least squares,...
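A hedged sketch of the analogous derivative rule for singular values (random test matrices assumed): for a simple singular value σ with singular vectors u and v, the directional derivative along a perturbation E is uᵀEv:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
E = rng.standard_normal((4, 3))      # arbitrary perturbation direction

U, s, Vt = np.linalg.svd(A, full_matrices=False)
u, v = U[:, 0], Vt[0, :]             # vectors for the largest singular value

deriv_analytic = u @ E @ v           # d(sigma_1) = u^T dA v

eps = 1e-6
s_plus = np.linalg.svd(A + eps * E, compute_uv=False)[0]
s_minus = np.linalg.svd(A - eps * E, compute_uv=False)[0]
deriv_numeric = (s_plus - s_minus) / (2 * eps)

assert np.isclose(deriv_analytic, deriv_numeric, atol=1e-5)
```

As with eigenvalues, the rule assumes the singular value is simple; gradients of the singular *vectors* additionally involve terms with 1/(σᵢ² − σⱼ²) and blow up when singular values cluster.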
## Sparse Tensor Derivatives

Most real computational problems are sparse. Large matrices and tensors often contain mostly zeros, structured blocks, or local interactions. Sparse representations reduce...
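A minimal sketch of the key consequence for derivatives, using a hand-rolled COO-style representation (the matrix entries and vector below are illustrative assumptions): only the *stored* nonzeros carry gradients, so the gradient is itself sparse with the same pattern:

```python
import numpy as np

# COO-style sparse 3x3 matrix: entry k lives at (rows[k], cols[k]) with
# value vals[k]. Entries outside this pattern are structurally zero.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 2, 0, 2])
vals = np.array([3.0, -1.0, 2.0, 5.0])
x = np.array([1.0, 2.0, -1.0])

def quad(vals):
    # f(A, x) = x^T A x, evaluated without ever densifying A
    return np.sum(vals * x[rows] * x[cols])

# Gradient w.r.t. each stored nonzero A[i, j] is x[i] * x[j]:
grad_analytic = x[rows] * x[cols]

# Finite-difference check, one stored entry at a time.
eps = 1e-6
grad_numeric = np.array([
    (quad(vals + eps * np.eye(len(vals))[k])
     - quad(vals - eps * np.eye(len(vals))[k])) / (2 * eps)
    for k in range(len(vals))
])
assert np.allclose(grad_analytic, grad_numeric)
```

Structural zeros never receive a gradient, which is exactly what lets sparse autodiff keep both the forward and backward passes proportional to the number of nonzeros.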
## GPU Tensor Kernels

Modern automatic differentiation systems are fundamentally tensor compiler systems. Their performance depends less on mathematical differentiation rules than on how...
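To make the memory-movement point concrete, here is a hedged CPU sketch (in NumPy, not actual GPU code) of the tiling idea behind GPU matrix-multiply kernels: the output is built tile by tile so each tile of the inputs is loaded once into fast memory (shared memory on a GPU) and reused many times; the tile size 2 is purely illustrative:

```python
import numpy as np

def blocked_matmul(A, B, tile=2):
    """Tiled matrix multiply; numerically identical to A @ B."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                # Accumulate one tile-sized partial product. On a GPU this
                # inner product of tiles is what stays in shared memory.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, p0:p0+tile] @ B[p0:p0+tile, j0:j0+tile]
                )
    return C

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))
assert np.allclose(blocked_matmul(A, B), A @ B)
```

The arithmetic is unchanged; only the access order differs. That reordering, plus fusing adjacent elementwise operations into the same kernel, is where tensor compilers earn most of their speedups.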