This appendix collects frequently used identities from linear algebra, matrix algebra, vector calculus, and numerical computation. The goal is reference rather than proof. Most formulas are proved earlier in the text.
J.1 Algebraic Identities
Difference of Squares
$$a^2 - b^2 = (a - b)(a + b).$$
Binomial Expansion
$$(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k.$$
Geometric Series
For $x \neq 1$,
$$1 + x + x^2 + \cdots + x^n = \frac{x^{n+1} - 1}{x - 1}.$$
If $|x| < 1$, the infinite series converges:
$$\sum_{k=0}^{\infty} x^k = \frac{1}{1 - x}.$$
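As a quick numerical sanity check of both geometric series formulas, here is a plain-Python sketch (the function name and test values are illustrative):

```python
# Numerical check of the finite and infinite geometric series formulas.

def geometric_sum(x, n):
    """Return 1 + x + x^2 + ... + x^n by direct summation."""
    return sum(x**k for k in range(n + 1))

x, n = 0.5, 20

# Finite formula, valid for x != 1: (x^(n+1) - 1) / (x - 1)
closed_form = (x**(n + 1) - 1) / (x - 1)
assert abs(geometric_sum(x, n) - closed_form) < 1e-12

# For |x| < 1 the partial sums approach 1 / (1 - x), which is 2 here.
limit = 1 / (1 - x)
assert abs(geometric_sum(x, 50) - limit) < 1e-9
```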
Quadratic Formula
For $ax^2 + bx + c = 0$ with $a \neq 0$, the solutions are
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
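The formula turns directly into a small solver; this sketch uses Python's `cmath` so complex roots are handled uniformly (the function name is illustrative):

```python
# Quadratic formula via cmath, so complex roots need no special casing.
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0, assuming a != 0."""
    if a == 0:
        raise ValueError("a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 3x + 2 = (x - 1)(x - 2): roots 2 and 1.
r1, r2 = solve_quadratic(1, -3, 2)
assert {r1, r2} == {1, 2}

# x^2 + 1 = 0: roots +i and -i.
i1, i2 = solve_quadratic(1, 0, 1)
assert {i1, i2} == {1j, -1j}
```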
J.2 Complex Number Identities
For $z = a + bi$, the conjugate is
$$\bar{z} = a - bi.$$
Modulus
$$|z| = \sqrt{a^2 + b^2}.$$
Product with Conjugate
$$z\bar{z} = |z|^2.$$
Reciprocal
If $z \neq 0$,
$$z^{-1} = \frac{\bar{z}}{|z|^2}.$$
Euler's Formula
$$e^{i\theta} = \cos\theta + i\sin\theta.$$
Polar Multiplication
$$(re^{i\theta})(se^{i\phi}) = rs\,e^{i(\theta + \phi)}.$$
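Both identities can be verified numerically with Python's `cmath` (the angle and modulus values here are arbitrary test inputs):

```python
# Checking Euler's formula and polar multiplication with cmath.
import cmath, math

theta, phi = 0.7, 1.3
r, s = 2.0, 3.0

# e^{i theta} = cos(theta) + i sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# (r e^{i theta})(s e^{i phi}) = rs e^{i(theta + phi)}:
# moduli multiply, arguments add.
z = r * cmath.exp(1j * theta)
w = s * cmath.exp(1j * phi)
assert abs(z * w - r * s * cmath.exp(1j * (theta + phi))) < 1e-12
```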
J.3 Vector Identities
Dot Product
For $x, y \in \mathbb{R}^n$,
$$x \cdot y = x^T y = \sum_{i=1}^{n} x_i y_i.$$
Euclidean Norm
$$\|x\|_2 = \sqrt{x^T x}.$$
Distance Formula
$$d(x, y) = \|x - y\|.$$
Cauchy-Schwarz Inequality
$$|\langle x, y \rangle| \le \|x\|\,\|y\|.$$
Triangle Inequality
$$\|x + y\| \le \|x\| + \|y\|.$$
Parallelogram Identity
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2.$$
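These vector identities are easy to spot-check on concrete vectors; in this plain-Python sketch, lists stand in for vectors and the helper names are illustrative:

```python
# Spot-checking the vector identities on concrete vectors in R^3.
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

x = [1.0, 2.0, 3.0]
y = [-2.0, 0.5, 4.0]

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
assert abs(dot(x, y)) <= norm(x) * norm(y)

# Triangle inequality: ||x + y|| <= ||x|| + ||y||
s = [a + b for a, b in zip(x, y)]
assert norm(s) <= norm(x) + norm(y) + 1e-12

# Parallelogram identity
d = [a - b for a, b in zip(x, y)]
assert abs(norm(s)**2 + norm(d)**2 - 2 * norm(x)**2 - 2 * norm(y)**2) < 1e-9
```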
J.4 Matrix Addition and Multiplication
Matrix Addition
If $A, B \in \mathbb{F}^{m \times n}$,
$$(A + B)_{ij} = a_{ij} + b_{ij}.$$
Matrix Multiplication
If $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{n \times p}$, then
$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$
Associativity
$$A(BC) = (AB)C.$$
Distributivity
$$A(B + C) = AB + AC, \qquad (A + B)C = AC + BC.$$
Scalar Compatibility
$$(cA)B = A(cB) = c(AB).$$
Noncommutativity
In general,
$$AB \neq BA.$$
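The entrywise formula for the product translates directly into code. This sketch (matrices as lists of rows; the helper name is illustrative) also exhibits a concrete pair with $AB \neq BA$:

```python
# Matrix multiplication from the entrywise formula
# (AB)_ij = sum_k a_ik * b_kj.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

# Multiplying by B on the right swaps columns; on the left, rows.
assert matmul(A, B) == [[2, 1], [4, 3]]
assert matmul(B, A) == [[3, 4], [1, 2]]

# A concrete witness of noncommutativity.
assert matmul(A, B) != matmul(B, A)
```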
J.5 Transpose Identities
Transpose of Sum
$$(A + B)^T = A^T + B^T.$$
Transpose of Product
$$(AB)^T = B^T A^T.$$
Double Transpose
$$(A^T)^T = A.$$
Inverse of Transpose
When $A$ is invertible,
$$(A^T)^{-1} = (A^{-1})^T.$$
J.6 Conjugate Transpose Identities
Conjugate Transpose of Product
$$(AB)^* = B^* A^*.$$
Double Conjugate Transpose
$$(A^*)^* = A.$$
Inverse Relation
For invertible $A$,
$$(A^*)^{-1} = (A^{-1})^*.$$
J.7 Determinant Identities
Determinant of Product
$$\det(AB) = \det(A)\det(B).$$
Determinant of Transpose
$$\det(A^T) = \det(A).$$
Determinant of Inverse
$$\det(A^{-1}) = \frac{1}{\det(A)}.$$
Determinant of Triangular Matrix
For triangular $A$,
$$\det(A) = \prod_{i=1}^{n} a_{ii}.$$
Invertibility Criterion
$$A \text{ is invertible} \iff \det(A) \neq 0.$$
J.8 Trace Identities
Definition
$$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}.$$
Linearity
$$\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B).$$
Scalar Multiplication
$$\operatorname{tr}(cA) = c\operatorname{tr}(A).$$
Cyclic Property
$$\operatorname{tr}(AB) = \operatorname{tr}(BA).$$
More generally,
$$\operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB).$$
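The cyclic property is easy to confirm on small integer matrices, where all arithmetic is exact (the matrices and helper names here are illustrative):

```python
# Verifying tr(AB) = tr(BA) and the cyclic property on 2x2 matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
C = [[2, 0], [1, 3]]

assert trace(matmul(A, B)) == trace(matmul(B, A))

# Cyclic shifts of a triple product all share the same trace.
ABC = matmul(matmul(A, B), C)
BCA = matmul(matmul(B, C), A)
CAB = matmul(matmul(C, A), B)
assert trace(ABC) == trace(BCA) == trace(CAB)
```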
J.9 Inverse Identities
Inverse of Product
$$(AB)^{-1} = B^{-1} A^{-1}.$$
Identity Inverse
$$I^{-1} = I.$$
Inverse of Diagonal Matrix
If $D = \operatorname{diag}(d_1, \ldots, d_n)$ with all $d_i \neq 0$, then
$$D^{-1} = \operatorname{diag}\left(\frac{1}{d_1}, \ldots, \frac{1}{d_n}\right).$$
J.10 Rank Identities
Rank Bound
If $A \in \mathbb{F}^{m \times n}$, then
$$\operatorname{rank}(A) \le \min(m, n).$$
Rank-Nullity Theorem
For a linear map $T : V \to W$,
$$\dim(V) = \operatorname{rank}(T) + \operatorname{nullity}(T).$$
Rank of Product
$$\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)).$$
J.11 Orthogonality Identities
Orthogonal Matrix
$$Q^T Q = I.$$
Unitary Matrix
$$U^* U = I.$$
Norm Preservation
If $Q$ is orthogonal,
$$\|Qx\|_2 = \|x\|_2.$$
Orthogonal Projection
If $P$ is an orthogonal projection,
$$P^2 = P, \qquad P^T = P.$$
J.12 Eigenvalue Identities
Eigenvalue Equation
$$Av = \lambda v.$$
Characteristic Polynomial
$$p_A(\lambda) = \det(\lambda I - A).$$
Sum of Eigenvalues
The sum of the eigenvalues equals the trace:
$$\sum_i \lambda_i = \operatorname{tr}(A).$$
Product of Eigenvalues
The product of the eigenvalues equals the determinant:
$$\prod_i \lambda_i = \det(A).$$
Similarity Invariance
If $B = P^{-1} A P$, then $A$ and $B$ have the same eigenvalues.
J.13 Diagonalization Identities
If $A = P D P^{-1}$, then
$$A^k = P D^k P^{-1}.$$
If $D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$, then
$$D^k = \operatorname{diag}(\lambda_1^k, \ldots, \lambda_n^k).$$
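The point of diagonalization is that only the diagonal must be powered. This sketch uses a $2 \times 2$ matrix diagonalized by hand (eigenvalues $2, 3$; the matrices and names are illustrative) and cross-checks against repeated multiplication:

```python
# Computing A^k via A = P D P^{-1}: only the diagonal entries are powered.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# A = [[2, 1], [0, 3]] has eigenvalues 2, 3 with eigenvectors (1,0), (1,1).
P     = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]

def A_power(k):
    Dk = [[2**k, 0], [0, 3**k]]          # D^k = diag(2^k, 3^k)
    return matmul(matmul(P, Dk), P_inv)

# Cross-check against k-fold multiplication.
A = [[2, 1], [0, 3]]
Ak = [[1, 0], [0, 1]]
for _ in range(5):
    Ak = matmul(Ak, A)
assert A_power(5) == Ak
```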
J.14 Singular Value Decomposition
If $A = U \Sigma V^*$, then:

| Property | Formula |
| --- | --- |
| $U$ unitary | $U^* U = I$ |
| $V$ unitary | $V^* V = I$ |
| Singular values | diagonal entries of $\Sigma$ |
| Eigenvalues of $A^* A$ | $\sigma_i^2$ |
Frobenius Norm from Singular Values
$$\|A\|_F^2 = \sum_i \sigma_i^2.$$
Spectral Norm
$$\|A\|_2 = \sigma_{\max}(A).$$
J.15 Least Squares Formulas
For the least squares problem
$$\min_x \|Ax - b\|_2^2,$$
the normal equations are
$$A^T A x = A^T b.$$
If the columns of $A$ are linearly independent, then
$$x = (A^T A)^{-1} A^T b.$$
Projection Matrix
The orthogonal projection onto the column space of $A$ is
$$P = A (A^T A)^{-1} A^T.$$
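As a worked instance of the normal equations, this sketch fits a line $y = c_0 + c_1 t$ to four points, forming $A^T A$ and $A^T b$ directly and solving the $2 \times 2$ system by Cramer's rule; the data and names are illustrative. The defining property of the projection is visible at the end: the residual is orthogonal to both columns of $A$.

```python
# Least squares line fit via the normal equations A^T A x = A^T b.

t = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 1.9, 3.2, 3.8]

# Columns of A are (1, ..., 1) and t, so A^T A and A^T b reduce to sums.
n    = len(t)
s_t  = sum(t)
s_tt = sum(ti * ti for ti in t)
s_y  = sum(y)
s_ty = sum(ti * yi for ti, yi in zip(t, y))

# Normal equations: [[n, s_t], [s_t, s_tt]] [c0, c1]^T = [s_y, s_ty]^T,
# solved here by Cramer's rule.
det = n * s_tt - s_t * s_t
c0 = (s_y * s_tt - s_t * s_ty) / det
c1 = (n * s_ty - s_t * s_y) / det

# The residual r = y - A x is orthogonal to the column space of A.
r = [yi - (c0 + c1 * ti) for ti, yi in zip(t, y)]
assert abs(sum(r)) < 1e-9                                  # ones column
assert abs(sum(ri * ti for ri, ti in zip(r, t))) < 1e-9    # t column
```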
J.16 Calculus Identities
Derivative of Power
$$\frac{d}{dx} x^n = n x^{n-1}.$$
Product Rule
$$(fg)' = f'g + fg'.$$
Chain Rule
$$(f \circ g)' = (f' \circ g)\,g'.$$
Gradient of Quadratic Form
If $f(x) = x^T A x$, then
$$\nabla f(x) = (A + A^T)x.$$
If $A$ is symmetric,
$$\nabla f(x) = 2Ax.$$
Hessian of Quadratic Form
If $A$ is symmetric,
$$\nabla^2 (x^T A x) = 2A.$$
J.17 Matrix Calculus Identities
Derivative of Linear Form
$$\nabla_x (c^T x) = c.$$
Derivative of Norm Squared
$$\nabla_x \|x\|_2^2 = 2x.$$
Derivative of Least Squares Objective
If $f(x) = \|Ax - b\|_2^2$, then
$$\nabla f(x) = 2 A^T (Ax - b).$$
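A standard way to validate such a gradient formula is a finite-difference check: the analytic gradient should match central differences of $f$ to high accuracy. A plain-Python sketch with small fixed data (all values illustrative):

```python
# Finite-difference check of grad f = 2 A^T (A x - b) for f(x) = ||Ax - b||^2.

A = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]   # 3x2
b = [1.0, 2.0, 0.0]
x = [0.5, -0.5]

def residual(x):
    return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(3)]

def f(x):
    return sum(ri * ri for ri in residual(x))

# Analytic gradient: 2 A^T (A x - b)
r = residual(x)
grad = [2 * sum(A[i][j] * r[i] for i in range(3)) for j in range(2)]

# Central differences agree to O(h^2).
h = 1e-6
for j in range(2):
    xp = list(x); xp[j] += h
    xm = list(x); xm[j] -= h
    fd = (f(xp) - f(xm)) / (2 * h)
    assert abs(fd - grad[j]) < 1e-4
```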
J.18 Numerical Computation Identities
Residual
For an approximate solution $\hat{x}$,
$$r = b - A\hat{x}.$$
Relative Error
$$\frac{\|x - \hat{x}\|}{\|x\|}.$$
Condition Number
$$\kappa(A) = \|A\|\,\|A^{-1}\|.$$
Floating-Point Model
$$\mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta), \qquad |\delta| \le u.$$
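The constant $u$ can be exhibited directly: this sketch estimates machine epsilon for Python floats (IEEE 754 doubles), of which the unit roundoff $u$ is half under round-to-nearest.

```python
# Estimating machine epsilon for Python floats (IEEE 754 doubles).

eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

assert eps == 2.0 ** -52     # machine epsilon for doubles

u = eps / 2                  # unit roundoff: 2^-53 under round-to-nearest
assert 1.0 + u == 1.0        # below the rounding threshold at 1.0
assert 1.0 + eps > 1.0
```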
J.19 Probability and Statistics Identities
Mean
For data points $x_1, \ldots, x_n$,
$$\mu = \frac{1}{n} \sum_{i=1}^{n} x_i.$$
Variance
$$\operatorname{Var}(x) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2.$$
Covariance Matrix
For centered vectors $x_i$,
$$C = \frac{1}{n} \sum_{i=1}^{n} x_i x_i^T.$$
Covariance matrices are symmetric and positive semidefinite.
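Both claims can be checked on tiny hand-made data. This sketch (data and names illustrative) centers the data, builds $C$ entrywise, and spot-checks symmetry and $v^T C v \ge 0$ for a few directions $v$:

```python
# Covariance matrix C = (1/n) sum x_i x_i^T for small 2-D data,
# with checks of symmetry and positive semidefiniteness.

data = [[2.0, 1.0], [-1.0, 0.5], [0.0, -2.0], [-1.0, 0.5]]
n = len(data)

# Center the data: subtract the mean of each coordinate.
mean = [sum(x[j] for x in data) / n for j in range(2)]
centered = [[x[j] - mean[j] for j in range(2)] for x in data]

# C[j][k] = (1/n) sum_i x_i[j] * x_i[k]
C = [[sum(x[j] * x[k] for x in centered) / n for k in range(2)]
     for j in range(2)]

assert C[0][1] == C[1][0]          # symmetric

# v^T C v = (1/n) sum_i (x_i . v)^2 >= 0 for every v.
for v in ([1.0, 0.0], [0.0, 1.0], [1.0, -2.0], [-3.0, 0.7]):
    q = sum(v[j] * C[j][k] * v[k] for j in range(2) for k in range(2))
    assert q >= -1e-12
```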
J.20 Fourier and Orthogonality Identities
Fourier Coefficient
$$c_k = \langle f, \phi_k \rangle.$$
Orthogonality Relation
$$\langle \phi_i, \phi_j \rangle = 0, \qquad i \neq j.$$
Parseval Identity
$$\|f\|^2 = \sum_k |c_k|^2.$$
J.21 Common Matrix Factorizations
| Factorization | Form |
| --- | --- |
| LU decomposition | $A = LU$ |
| QR decomposition | $A = QR$ |
| Cholesky decomposition | $A = LL^T$ |
| Eigenvalue decomposition | $A = PDP^{-1}$ |
| Singular value decomposition | $A = U\Sigma V^*$ |
| Schur decomposition | $A = QTQ^*$ |
J.22 Summary
The identities in this appendix appear repeatedly throughout linear algebra, numerical computation, optimization, statistics, and applied mathematics.
Several themes recur:
| Theme | Representative identity |
| --- | --- |
| Structure preservation | $(AB)^T = B^T A^T$ |
| Geometry | $\langle x, y \rangle = x^T y$ |
| Invertibility | $\det(A) \neq 0 \iff A^{-1}$ exists |
| Orthogonality | $Q^T Q = I$ |
| Spectral structure | $Av = \lambda v$ |
| Optimization | $A^T A x = A^T b$ |
| Numerical analysis | $\kappa(A) = \lVert A \rVert \, \lVert A^{-1} \rVert$ |
These formulas form the computational and theoretical vocabulary of linear algebra.