
Chapter 10. Matrix and Tensor Differentiation

Matrix calculus is the notation and rule system used to differentiate functions whose inputs, outputs, or intermediate values are vectors, matrices, or tensors. Automatic...

Contents

1. Chapter 10. Matrix and Tensor Differentiation
2. Tensor Operations
3. Broadcasting Semantics
4. Linear Algebra Primitives
5. Differentiating Factorizations
6. Eigenvalue Problems
7. Singular Value Decomposition
8. Sparse Tensor Derivatives
9. GPU Tensor Kernels
Chapter 10. Matrix and Tensor Differentiation (9 min)
Matrix calculus is the notation and rule system used to differentiate functions whose inputs, outputs, or intermediate values are vectors, matrices, or tensors. Automatic...
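As a small illustration of the kind of rule matrix calculus provides (this example is not from the chapter), the identity that the gradient of f(X) = tr(AX) with respect to X is Aᵀ can be checked numerically with NumPy via a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

def f(X):
    # scalar-valued function of a matrix argument
    return np.trace(A @ X)

grad = A.T  # closed-form gradient from matrix calculus

# finite-difference check of one gradient entry (i, j)
i, j, eps = 1, 2, 1e-6
E = np.zeros_like(X)
E[i, j] = eps
fd = (f(X + E) - f(X - E)) / (2 * eps)
assert np.isclose(fd, grad[i, j])
```

The same pattern (closed-form matrix derivative checked against finite differences) is a standard sanity test when implementing new derivative rules.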
Tensor Operations (8 min)
Tensor operations generalize scalar, vector, and matrix operations to arrays with arbitrary rank. In automatic differentiation, a tensor is usually treated as a typed array...
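A minimal sketch of the "typed array" view of a tensor, using NumPy (illustrative, not taken from the chapter): the rank is the number of axes, the shape is the extent along each axis, and elementwise operations preserve both.

```python
import numpy as np

# A rank-3 tensor as a typed array: dtype + shape fully describe it.
t = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

assert t.ndim == 3            # rank: number of axes
assert t.shape == (2, 3, 4)   # extent along each axis
assert t.dtype == np.float32  # element type

# Elementwise operations act independently on every entry,
# so the result keeps the input's rank, shape, and dtype.
u = 2.0 * t + 1.0
assert u.shape == t.shape
```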
Broadcasting Semantics (8 min)
Broadcasting is the rule system that allows tensor operations between arrays of different shapes without explicitly materializing expanded copies. It is one of the most...
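A quick NumPy sketch of the broadcasting rule (illustrative, not from the chapter): shapes are aligned from the trailing axes, and a size-1 dimension is virtually repeated to match, without materializing an expanded copy.

```python
import numpy as np

a = np.ones((3, 1, 4))
b = np.ones((5, 1))     # padded on the left to shape (1, 5, 1)

# Align trailing axes: (3, 1, 4) vs (1, 5, 1) -> result (3, 5, 4).
# Size-1 axes are stretched virtually; no expanded copy is made.
c = a + b
assert c.shape == (3, 5, 4)
```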
Linear Algebra Primitives (9 min)
Linear algebra primitives are tensor operations with algebraic structure: matrix multiplication, triangular solves, factorizations, inverses, determinants, norms, and spectral...
Differentiating Factorizations (9 min)
Matrix factorizations rewrite a matrix into structured factors. They are used because the factors make later computations cheaper, more stable, or easier to interpret. In...
Eigenvalue Problems (7 min)
Eigenvalue problems are fundamental in numerical analysis, optimization, physics, graph methods, control theory, and machine learning. They are also among the most subtle...
Singular Value Decomposition (7 min)
The singular value decomposition (SVD) is one of the most important matrix factorizations in numerical linear algebra. It appears in dimensionality reduction, least squares,...
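A short NumPy sketch of what the SVD delivers (illustrative, not from the chapter): M = U diag(s) Vᵀ with orthonormal factors and singular values sorted in decreasing order.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))

# Thin SVD: U is 4x3, s has 3 entries, Vt is 3x3.
U, s, Vt = np.linalg.svd(M, full_matrices=False)

assert np.allclose(U @ np.diag(s) @ Vt, M)  # exact reconstruction
assert np.all(s[:-1] >= s[1:])              # singular values sorted
assert np.all(s >= 0)                       # and nonnegative
```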
Sparse Tensor Derivatives (8 min)
Most real computational problems are sparse. Large matrices and tensors often contain mostly zeros, structured blocks, or local interactions. Sparse representations reduce...
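A minimal sketch of one common sparse representation, COO (coordinate) format, in plain NumPy (illustrative, not from the chapter): only the nonzero entries are stored as (row, col, value) triples, rather than the full dense array.

```python
import numpy as np

# Dense array with just two nonzeros among one million entries.
dense = np.zeros((1000, 1000))
dense[3, 7] = 2.0
dense[500, 1] = -1.5

# COO representation: parallel arrays of row indices, column
# indices, and values, in row-major order.
rows, cols = np.nonzero(dense)
vals = dense[rows, cols]

assert list(zip(rows.tolist(), cols.tolist())) == [(3, 7), (500, 1)]
assert vals.tolist() == [2.0, -1.5]  # 2 stored values vs 1,000,000 slots
```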
GPU Tensor Kernels (8 min)
Modern automatic differentiation systems are fundamentally tensor compiler systems. Their performance depends less on mathematical differentiation rules than on how...