# Chapter 65. Symmetric Matrices

A symmetric matrix is a square matrix equal to its transpose.

If \(A\) is symmetric, then

$$
A^T=A.
$$

Equivalently, the entries of \(A\) satisfy

$$
a_{ij}=a_{ji}
$$

for every pair of indices \(i\) and \(j\). Thus the matrix is mirrored across its main diagonal.

Symmetric matrices are central because they behave like real self-adjoint operators. Their eigenvalues are real, eigenvectors from distinct eigenvalues are orthogonal, and they can be diagonalized by an orthogonal matrix. These properties make them one of the best-behaved classes of matrices in linear algebra.

## 65.1 Definition

Let \(A\) be an \(n\times n\) real matrix. The matrix \(A\) is symmetric if

$$
A^T=A.
$$

In entries, this means

$$
a_{ij}=a_{ji}.
$$

For example,

$$
A=
\begin{bmatrix}
2 & -1 & 4 \\
-1 & 3 & 0 \\
4 & 0 & 5
\end{bmatrix}
$$

is symmetric, since each entry above the diagonal matches the corresponding entry below the diagonal.

The matrix

$$
B=
\begin{bmatrix}
2 & -1 & 4 \\
7 & 3 & 0 \\
4 & 6 & 5
\end{bmatrix}
$$

is not symmetric, because

$$
b_{12}=-1
$$

but

$$
b_{21}=7.
$$

Only square matrices can be symmetric, since the equation \(A^T=A\) requires \(A\) and \(A^T\) to have the same size.
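
As a computational aside, the definition translates directly into a check. The sketch below assumes NumPy, and the helper name `is_symmetric` is ours for illustration; it tests the two matrices above.

```python
import numpy as np

def is_symmetric(M, tol=1e-12):
    """Return True if the square matrix M equals its transpose (within tol)."""
    M = np.asarray(M, dtype=float)
    return M.shape[0] == M.shape[1] and np.allclose(M, M.T, atol=tol)

A = np.array([[2, -1, 4],
              [-1, 3, 0],
              [4, 0, 5]])
B = np.array([[2, -1, 4],
              [7, 3, 0],
              [4, 6, 5]])

print(is_symmetric(A))  # True
print(is_symmetric(B))  # False: b_12 = -1 but b_21 = 7
```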

## 65.2 Structure of a Symmetric Matrix

In a symmetric matrix, the entries on and above the diagonal can be chosen freely; the entries below the diagonal are then determined by symmetry.

For a \(3\times 3\) matrix,

$$
A=
\begin{bmatrix}
a & b & c \\
b & d & e \\
c & e & f
\end{bmatrix}.
$$

There are six independent entries, not nine.

In general, an \(n\times n\) symmetric matrix has

$$
\frac{n(n+1)}{2}
$$

independent entries. These consist of \(n\) diagonal entries and

$$
\frac{n(n-1)}{2}
$$

entries above the diagonal.

## 65.3 Basic Examples

Every diagonal matrix is symmetric. If

$$
D=
\begin{bmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n
\end{bmatrix},
$$

then

$$
D^T=D.
$$

The identity matrix is symmetric:

$$
I^T=I.
$$

The zero matrix is symmetric:

$$
0^T=0.
$$

If \(u\in\mathbb{R}^n\), then the outer product

$$
uu^T
$$

is symmetric, because

$$
(uu^T)^T=uu^T.
$$

For example, if

$$
u=
\begin{bmatrix}
1 \\
2 \\
3
\end{bmatrix},
$$

then

$$
uu^T=
\begin{bmatrix}
1 & 2 & 3 \\
2 & 4 & 6 \\
3 & 6 & 9
\end{bmatrix}.
$$
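
The outer-product example can be confirmed with a few lines of NumPy (a numerical aside, not part of the formal development):

```python
import numpy as np

u = np.array([[1.0], [2.0], [3.0]])   # column vector
A = u @ u.T                           # outer product uu^T

print(A)
# [[1. 2. 3.]
#  [2. 4. 6.]
#  [3. 6. 9.]]
print(np.allclose(A, A.T))            # True: (uu^T)^T = uu^T
```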

## 65.4 Symmetric and Skew-Symmetric Parts

Every square matrix can be decomposed into a symmetric part and a skew-symmetric part.

Let \(A\) be any square matrix. Define

$$
S=\frac{1}{2}(A+A^T)
$$

and

$$
K=\frac{1}{2}(A-A^T).
$$

Then

$$
S^T=S
$$

and

$$
K^T=-K.
$$

Also,

$$
A=S+K.
$$

Thus

$$
A=\frac{1}{2}(A+A^T)+\frac{1}{2}(A-A^T).
$$

The symmetric part controls quadratic expressions such as \(x^TAx\). The skew-symmetric part contributes nothing to such expressions over the real numbers, because

$$
x^TKx=0
$$

whenever

$$
K^T=-K.
$$
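
A minimal sketch of the decomposition, assuming NumPy; the matrix `M` below is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [4.0, 3.0, -1.0],
              [5.0, 7.0, 2.0]])

S = (M + M.T) / 2   # symmetric part
K = (M - M.T) / 2   # skew-symmetric part

print(np.allclose(S, S.T))       # True
print(np.allclose(K, -K.T))      # True
print(np.allclose(M, S + K))     # True: M = S + K

x = np.array([1.0, -2.0, 3.0])
print(np.isclose(x @ K @ x, 0))  # True: x^T K x = 0 for skew-symmetric K
```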

## 65.5 Symmetric Matrices and Inner Products

A symmetric matrix satisfies a compatibility identity with the Euclidean inner product:

$$
(Ax)\cdot y=x\cdot(Ay)
$$

for all vectors \(x,y\in\mathbb{R}^n\).

Proof:

$$
(Ax)\cdot y =
(Ax)^Ty.
$$

Since

$$
(Ax)^T=x^TA^T,
$$

we have

$$
(Ax)\cdot y=x^TA^Ty.
$$

If \(A\) is symmetric, then \(A^T=A\). Hence

$$
(Ax)\cdot y=x^TAy=x\cdot(Ay).
$$

This identity is the matrix form of self-adjointness. It is the reason symmetric matrices have real eigenvalues and orthogonal eigenspaces.
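
The identity is easy to spot-check numerically. The sketch below uses the symmetric matrix from Section 65.1 and random vectors; it is a NumPy illustration, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, -1.0, 4.0],
              [-1.0, 3.0, 0.0],
              [4.0, 0.0, 5.0]])

x = rng.standard_normal(3)
y = rng.standard_normal(3)

print(np.isclose((A @ x) @ y, x @ (A @ y)))  # True: (Ax).y = x.(Ay)
```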

## 65.6 Real Eigenvalues

Every real symmetric matrix has real eigenvalues.

To see the idea, suppose

$$
Av=\lambda v
$$

for a nonzero complex vector \(v\). Write \(v^*\) for the conjugate transpose of \(v\), so that \(v^*v>0\). Since \(A\) is real and symmetric, it is Hermitian when viewed as a complex matrix: \(A^*=A\).

Then

$$
v^*Av=\lambda v^*v.
$$

But \(v^*Av\) is real for a Hermitian matrix, because \((v^*Av)^*=v^*A^*v=v^*Av\). Since \(v^*v\) is real and positive, \(\lambda\) must be real.

Thus symmetric matrices do not produce nonreal eigenvalues. This sharply contrasts with general real matrices, such as rotation matrices.

For example,

$$
R=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
$$

has eigenvalues \(i\) and \(-i\). The matrix \(R\) is not symmetric.

## 65.7 Orthogonality of Eigenvectors

Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.

Let

$$
Av=\lambda v
$$

and

$$
Aw=\mu w,
$$

where

$$
\lambda\neq\mu.
$$

Using the identity from the previous section,

$$
(Av)\cdot w=v\cdot(Aw).
$$

Substitute the eigenvalue equations:

$$
(\lambda v)\cdot w=v\cdot(\mu w).
$$

Therefore

$$
\lambda(v\cdot w)=\mu(v\cdot w).
$$

So

$$
(\lambda-\mu)(v\cdot w)=0.
$$

Since

$$
\lambda\neq\mu,
$$

we get

$$
v\cdot w=0.
$$

Thus \(v\) and \(w\) are orthogonal.
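
Both facts, real eigenvalues and orthogonal eigenvectors, can be observed numerically. The sketch below uses NumPy's `eigh`, the eigensolver specialized for symmetric matrices (an assumed tool choice, not one made in the text).

```python
import numpy as np

A = np.array([[2.0, -1.0, 4.0],
              [-1.0, 3.0, 0.0],
              [4.0, 0.0, 5.0]])

eigvals, V = np.linalg.eigh(A)   # eigh exploits symmetry

print(eigvals)                   # three real eigenvalues
# The columns of V are orthonormal eigenvectors, so V^T V = I.
print(np.allclose(V.T @ V, np.eye(3)))  # True
```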

## 65.8 Orthogonal Diagonalization

A real symmetric matrix can be orthogonally diagonalized.

This means that if \(A=A^T\), then there exists an orthogonal matrix \(Q\) and a real diagonal matrix \(D\) such that

$$
A=QDQ^T.
$$

Here

$$
Q^TQ=I.
$$

The columns of \(Q\) are orthonormal eigenvectors of \(A\). The diagonal entries of \(D\) are the corresponding eigenvalues.

This is the finite-dimensional spectral theorem for real symmetric matrices. It is stronger than ordinary diagonalization because the change-of-basis matrix is orthogonal.

## 65.9 Example of Orthogonal Diagonalization

Let

$$
A=
\begin{bmatrix}
2 & 1 \\
1 & 2
\end{bmatrix}.
$$

The characteristic polynomial is

$$
\det(A-\lambda I) =
\det
\begin{bmatrix}
2-\lambda & 1 \\
1 & 2-\lambda
\end{bmatrix}.
$$

Thus

$$
\det(A-\lambda I)=(2-\lambda)^2-1.
$$

Expanding,

$$
(2-\lambda)^2-1 =
\lambda^2-4\lambda+3.
$$

Hence

$$
\lambda^2-4\lambda+3=0.
$$

The eigenvalues are

$$
\lambda_1=3,
\qquad
\lambda_2=1.
$$

For \(\lambda_1=3\), one eigenvector is

$$
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
$$

Normalize it:

$$
q_1=
\frac{1}{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
$$

For \(\lambda_2=1\), one eigenvector is

$$
\begin{bmatrix}
1 \\
-1
\end{bmatrix}.
$$

Normalize it:

$$
q_2=
\frac{1}{\sqrt{2}}
\begin{bmatrix}
1 \\
-1
\end{bmatrix}.
$$

Set

$$
Q=
\frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
$$

and

$$
D=
\begin{bmatrix}
3 & 0 \\
0 & 1
\end{bmatrix}.
$$

Then

$$
A=QDQ^T.
$$
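
The factorization in this example can be verified directly; the following NumPy check is an illustration, not part of the derivation.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
D = np.diag([3.0, 1.0])

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q is orthogonal
print(np.allclose(Q @ D @ Q.T, A))      # True: A = Q D Q^T
```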

## 65.10 Quadratic Forms

A symmetric matrix naturally defines a quadratic form:

$$
q(x)=x^TAx.
$$

For example, if

$$
A=
\begin{bmatrix}
a & b \\
b & c
\end{bmatrix},
$$

then

$$
x^TAx =
\begin{bmatrix}
x & y
\end{bmatrix}
\begin{bmatrix}
a & b \\
b & c
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}.
$$

Computing gives

$$
q(x,y)=ax^2+2bxy+cy^2.
$$

The off-diagonal entries appear twice, once from each symmetric position.

Every real quadratic form can be represented by a symmetric matrix. If a non-symmetric matrix appears in \(x^TAx\), only its symmetric part matters:

$$
x^TAx=x^T\left(\frac{A+A^T}{2}\right)x.
$$
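
A brief numerical illustration of this last point, assuming NumPy; the non-symmetric matrix below is an arbitrary example.

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [-3.0, 2.0]])   # not symmetric
S = (A + A.T) / 2             # its symmetric part

x = np.array([2.0, -1.0])
print(x @ A @ x)              # quadratic form with A
print(x @ S @ x)              # same value: only the symmetric part matters
```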

## 65.11 Positive Definite Symmetric Matrices

A real symmetric matrix \(A\) is positive definite if

$$
x^TAx>0
$$

for every nonzero vector \(x\).

It is positive semidefinite if

$$
x^TAx\geq 0
$$

for every vector \(x\).

By the spectral theorem, if

$$
A=QDQ^T,
$$

then with \(y=Q^Tx\),

$$
x^TAx=y^TDy.
$$

If

$$
D=\operatorname{diag}(\lambda_1,\ldots,\lambda_n),
$$

then

$$
y^TDy=\lambda_1y_1^2+\cdots+\lambda_ny_n^2.
$$

Therefore:

| Type | Eigenvalue condition |
|---|---|
| Positive definite | \(\lambda_i>0\) for all \(i\) |
| Positive semidefinite | \(\lambda_i\geq 0\) for all \(i\) |
| Negative definite | \(\lambda_i<0\) for all \(i\) |
| Negative semidefinite | \(\lambda_i\leq 0\) for all \(i\) |
| Indefinite | Eigenvalues of both signs |

This criterion is one of the main reasons symmetric matrices are important in optimization.
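
The table translates into a short classifier. The sketch below assumes NumPy, and the helper name `classify` is hypothetical; it checks the sign pattern of the eigenvalues.

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    lam = np.linalg.eigvalsh(A)   # real eigenvalues of a symmetric matrix
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.array([[2.0, 1.0], [1.0, 2.0]])))   # positive definite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```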

## 65.12 Symmetric Matrices in Least Squares

Symmetric matrices arise in least squares problems.

For a matrix \(A\), the normal equations are

$$
A^TAx=A^Tb.
$$

The matrix

$$
A^TA
$$

is always symmetric, because

$$
(A^TA)^T=A^T(A^T)^T=A^TA.
$$

It is also positive semidefinite, since

$$
x^TA^TAx=(Ax)^T(Ax)=\|Ax\|^2\geq 0.
$$

If the columns of \(A\) are linearly independent, then \(A^TA\) is positive definite.

Thus least squares problems naturally lead to symmetric positive definite matrices.
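
A minimal least-squares sketch, assuming NumPy; the data matrix and right-hand side are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))   # tall matrix; columns independent almost surely
b = rng.standard_normal(10)

G = A.T @ A                        # symmetric Gram matrix
print(np.allclose(G, G.T))                 # True
print(np.all(np.linalg.eigvalsh(G) > 0))   # True: positive definite here

x = np.linalg.solve(G, A.T @ b)    # solve the normal equations A^T A x = A^T b
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches lstsq
```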

## 65.13 Symmetric Matrices in Optimization

Second derivatives of scalar functions are organized into Hessian matrices.

If

$$
f:\mathbb{R}^n\to\mathbb{R}
$$

has continuous second partial derivatives, its Hessian is

$$
H_f=
\begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} &
\frac{\partial^2 f}{\partial x_1\partial x_2} &
\cdots \\
\frac{\partial^2 f}{\partial x_2\partial x_1} &
\frac{\partial^2 f}{\partial x_2^2} &
\cdots \\
\vdots & \vdots & \ddots
\end{bmatrix}.
$$

By equality of mixed partial derivatives under standard smoothness assumptions,

$$
\frac{\partial^2 f}{\partial x_i\partial x_j} =
\frac{\partial^2 f}{\partial x_j\partial x_i}.
$$

Thus the Hessian is symmetric.

The eigenvalues of the Hessian determine local curvature. Positive definite Hessians describe strict local minima. Negative definite Hessians describe strict local maxima. Indefinite Hessians describe saddle behavior.
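
As a concrete illustration (not an example from the text), the Hessian of \(f(x,y)=x^2+3xy+2y^2\) is the constant symmetric matrix below, and its eigenvalues reveal a saddle at the origin.

```python
import numpy as np

# Hessian of f(x, y) = x^2 + 3xy + 2y^2 (constant, since f is quadratic)
H = np.array([[2.0, 3.0],
              [3.0, 4.0]])

print(np.allclose(H, H.T))    # True: mixed partials agree
print(np.linalg.eigvalsh(H))  # one negative, one positive eigenvalue
# Indefinite Hessian: the critical point at the origin is a saddle point of f.
```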

## 65.14 Symmetric Matrices and Graphs

Undirected graphs often produce symmetric matrices.

If \(G\) is an undirected graph, its adjacency matrix \(A\) satisfies

$$
a_{ij}=a_{ji}.
$$

This is because an edge from vertex \(i\) to vertex \(j\) is also an edge from vertex \(j\) to vertex \(i\).

Thus the adjacency matrix of an undirected graph is symmetric.

The graph Laplacian

$$
L=D-A
$$

is also symmetric when the graph is undirected. Here \(D\) is the degree matrix and \(A\) is the adjacency matrix.

The eigenvalues and eigenvectors of these symmetric matrices encode connectivity, clustering, expansion, random walks, and vibration modes.
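
A small sketch of these constructions, assuming NumPy; the graph is a path on three vertices, chosen only for illustration.

```python
import numpy as np

# Undirected path graph 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))    # degree matrix
L = D - A                     # graph Laplacian

print(np.allclose(A, A.T))    # True: adjacency matrix is symmetric
print(np.allclose(L, L.T))    # True: Laplacian is symmetric
print(np.linalg.eigvalsh(L))  # nonnegative eigenvalues; smallest is 0
```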

## 65.15 Symmetric Rank-One Matrices

The basic example of a rank-one symmetric matrix is the outer product

$$
A=uu^T.
$$

For any vector \(x\),

$$
Ax=uu^Tx.
$$

Since

$$
u^Tx
$$

is a scalar,

$$
Ax=(u^Tx)u.
$$

Thus \(A\) maps every vector onto the direction of \(u\).

The matrix \(uu^T\) is positive semidefinite because

$$
x^Tuu^Tx=(u^Tx)^2\geq 0.
$$

If \(u\neq 0\), then \(uu^T\) has rank \(1\). Its nonzero eigenvalue is

$$
\|u\|^2,
$$

with eigenvector \(u\).
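
A quick check of the rank and of the nonzero eigenvalue \(\|u\|^2\), again a NumPy illustration:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
A = np.outer(u, u)               # uu^T

print(np.linalg.matrix_rank(A))  # 1
print(np.linalg.eigvalsh(A))     # approximately [0, 0, 14]
print(u @ u)                     # 14.0 = ||u||^2, the nonzero eigenvalue
```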

## 65.16 Projection Matrices

An orthogonal projection matrix is symmetric and idempotent.

A matrix \(P\) is idempotent if

$$
P^2=P.
$$

If \(P\) is also symmetric, then it represents orthogonal projection onto a subspace.

For example, projection onto the line spanned by a unit vector \(u\) is

$$
P=uu^T.
$$

Then

$$
P^T=P
$$

and

$$
P^2=uu^Tuu^T=u(u^Tu)u^T=uu^T=P.
$$

The eigenvalues of an orthogonal projection are only \(0\) and \(1\). Vectors in the projected subspace have eigenvalue \(1\). Vectors orthogonal to it have eigenvalue \(0\).
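
A sketch of projection onto the line spanned by a unit vector, assuming NumPy; the vector is an arbitrary choice.

```python
import numpy as np

u = np.array([3.0, 4.0])
u = u / np.linalg.norm(u)       # unit vector

P = np.outer(u, u)              # projection onto the line spanned by u

print(np.allclose(P, P.T))      # True: symmetric
print(np.allclose(P @ P, P))    # True: idempotent
print(np.round(np.linalg.eigvalsh(P), 12))  # eigenvalues 0 and 1
```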

## 65.17 Symmetric Matrices and Singular Value Decomposition

For any real matrix \(B\), the matrices

$$
B^TB
$$

and

$$
BB^T
$$

are symmetric positive semidefinite.

Indeed,

$$
(B^TB)^T=B^TB
$$

and

$$
(BB^T)^T=BB^T.
$$

The spectral theorem applies to both. The eigenvalues of \(B^TB\) are nonnegative, and their square roots are the singular values of \(B\).

Thus the singular value decomposition is built from symmetric positive semidefinite matrices.
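
The relationship between the singular values of \(B\) and the eigenvalues of \(B^TB\) can be confirmed numerically; a NumPy sketch with an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 3))

sing_vals = np.linalg.svd(B, compute_uv=False)   # singular values, descending
eigs = np.linalg.eigvalsh(B.T @ B)[::-1]         # eigenvalues of B^T B, descending

print(np.allclose(sing_vals, np.sqrt(eigs)))     # True
```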

## 65.18 Numerical Importance

Symmetric matrices are easier and safer to handle numerically than general matrices.

Eigenvalue algorithms can exploit symmetry to reduce work and improve stability. Symmetric matrices have real eigenvalues, orthogonal eigenspaces, and orthogonal diagonalizations. These properties avoid many complications of general nonsymmetric eigenvalue problems.

In numerical linear algebra, symmetric positive definite systems are especially important. They can often be solved efficiently by Cholesky decomposition or conjugate gradient methods.

The Cholesky factorization writes a symmetric positive definite matrix as

$$
A=LL^T,
$$

where \(L\) is lower triangular. This factorization is a standard tool for solving linear systems, optimization problems, and covariance computations.
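
A minimal sketch of the factorization, assuming NumPy; note that `np.linalg.cholesky` returns the lower-triangular factor.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])      # symmetric positive definite

L = np.linalg.cholesky(A)       # lower triangular factor with A = L L^T
print(L)
print(np.allclose(L @ L.T, A))  # True
```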

## 65.19 Common Errors

The first common error is to confuse symmetric with diagonal. Every diagonal matrix is symmetric, but many symmetric matrices have nonzero off-diagonal entries.

The second common error is to assume that \(A^TA\) has the same eigenvalues as \(A\). In general, it does not.

The third common error is to use ordinary diagonalization when orthogonal diagonalization is available. For symmetric matrices, the stronger form

$$
A=QDQ^T
$$

should be used.

The fourth common error is to forget that symmetry is field-dependent. Over complex vector spaces, the correct analogue of real symmetry is usually Hermitian symmetry:

$$
A^*=A,
$$

not merely

$$
A^T=A.
$$

The fifth common error is to ignore ordering. If the first column of \(Q\) is an eigenvector for \(\lambda_1\), then the first diagonal entry of \(D\) must be \(\lambda_1\). The ordering of eigenvectors and eigenvalues must match.

## 65.20 Summary

A symmetric matrix satisfies

$$
A^T=A.
$$

Its entries mirror across the main diagonal.

Real symmetric matrices have real eigenvalues, orthogonal eigenspaces for distinct eigenvalues, and orthogonal diagonalizations of the form

$$
A=QDQ^T.
$$

They are the natural matrices for quadratic forms, least squares, optimization, graph theory, projections, and the singular value decomposition.

Symmetry is a strong structural condition. It turns many difficult matrix questions into problems about orthogonal coordinates and real eigenvalues.
