# 6.2 Symmetry and Invariance

Symmetry is the presence of transformations that preserve structure. An object has symmetry when it can be changed in some way while remaining the same for the purpose being studied.

A square can be rotated or reflected and still remain a square. A graph can have its vertices renamed while preserving adjacency. A vector space can be described using different bases while preserving its linear structure.

The transformation changes representation. The invariant structure remains.

Invariance is the property of staying unchanged under a chosen class of transformations. A quantity, relation, or property is invariant when it survives the allowed changes.

For example, the area of a plane figure is invariant under rigid motion. The dimension of a vector space is invariant under linear isomorphism. The number of connected components of a graph is invariant under graph isomorphism.

Symmetry and invariance are paired ideas. Symmetry describes the transformations. Invariance describes what those transformations preserve.

A simple algebraic example is an even function. A function $f$ is even when

$$
f(-x) = f(x).
$$

The value of the function is unchanged when the input is reflected across $0$. The symmetry is $x \mapsto -x$. The invariant is the function value.
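This reflection symmetry can be checked numerically. Below is a minimal sketch; `f` is a hypothetical example function chosen to be even, not anything from the text.

```python
def f(x):
    # A hypothetical even function: f(-x) = f(x) for all x.
    return x * x + 1

# The reflection x -> -x changes the input, but the value is invariant.
for x in [0.0, 1.5, -2.0, 7.0]:
    assert f(-x) == f(x)
```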

A geometric example is distance. If a transformation $T$ preserves distances, then

$$
d(Tx, Ty) = d(x,y).
$$

Such a transformation is an isometry. It may move points, but it does not change their mutual distances.
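A rotation of the plane is a concrete isometry. The sketch below (with hypothetical helper names `rotate` and `dist`) verifies that $d(Tx, Ty) = d(x, y)$ on sample points, up to floating-point rounding.

```python
import math

def rotate(theta, p):
    # Rotation about the origin by angle theta: an isometry of the plane.
    x, y = p
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

def dist(p, q):
    # Euclidean distance between two points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

T = lambda p: rotate(math.pi / 3, p)
p, q = (1.0, 2.0), (-3.0, 0.5)

# The points move, but their mutual distance does not.
assert abs(dist(T(p), T(q)) - dist(p, q)) < 1e-12
```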

In linear algebra, change of basis is a central symmetry. A linear operator may have many matrix representations. If $A$ and $B$ represent the same operator in different bases, then

$$
B = P^{-1}AP
$$

for some invertible matrix $P$.

The entries of the matrix may change, but structural quantities such as trace, determinant, rank, and eigenvalues remain invariant under this transformation.
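This invariance can be observed directly for $2 \times 2$ matrices. The sketch below uses hypothetical helpers (`matmul`, `inv2`, `trace`, `det`) and an arbitrary invertible $P$; the entries of $B = P^{-1}AP$ differ from those of $A$, yet trace and determinant agree.

```python
def matmul(A, B):
    # Product of two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    # Inverse of an invertible 2x2 matrix.
    a, b = P[0]
    c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0], [0.0, 3.0]]
P = [[1.0, 1.0], [1.0, 2.0]]          # an invertible change-of-basis matrix
B = matmul(inv2(P), matmul(A, P))     # B = P^{-1} A P

# Different entries, same structural invariants.
assert B != A
assert abs(trace(B) - trace(A)) < 1e-9
assert abs(det(B) - det(A)) < 1e-9
```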

In graph theory, a graph automorphism is a symmetry of a graph. It is a permutation of vertices that preserves adjacency. The graph may be relabeled, but its connectivity pattern remains the same.
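Checking whether a relabeling preserves adjacency is a small computation. The sketch below tests a vertex permutation of a 4-cycle; `is_automorphism` is a hypothetical helper, and edges are stored as frozensets so that $\{u,v\}$ is unordered.

```python
# Adjacency of a 4-cycle on vertices 0..3; each edge is an unordered pair.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def is_automorphism(perm, edges):
    # perm maps each vertex to its new label; adjacency must be preserved.
    relabeled = {frozenset({perm[u], perm[v]}) for u, v in edges}
    return relabeled == edges

rotation = {0: 1, 1: 2, 2: 3, 3: 0}   # rotating the cycle preserves adjacency
swap = {0: 1, 1: 0, 2: 2, 3: 3}       # swapping only two vertices does not
assert is_automorphism(rotation, edges)
assert not is_automorphism(swap, edges)
```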

Symmetries often form groups. Any two symmetries of an object can be composed, each symmetry has an inverse, and the identity transformation is always among them. This turns symmetry itself into an algebraic object.

For example, the symmetries of a square form a group under composition. Rotating by $90^\circ$, reflecting across an axis, and doing nothing are all elements of this group.
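The group structure can be generated by computation. A minimal sketch, representing each symmetry as a permutation of the four corner labels: starting from a $90^\circ$ rotation and one reflection, closing the set under composition recovers all eight symmetries of the square.

```python
# Symmetries of the square as permutations of corner labels 0..3,
# corners listed counterclockwise. p[i] is the image of corner i.
e = (0, 1, 2, 3)     # identity (doing nothing)
r = (1, 2, 3, 0)     # rotation by 90 degrees
s = (1, 0, 3, 2)     # a reflection (swaps corners 0<->1 and 2<->3)

def compose(p, q):
    # (p o q)(i) = p(q(i)): apply q first, then p.
    return tuple(p[q[i]] for i in range(4))

# Close {e, r, s} under composition to generate the whole group.
group = {e, r, s}
changed = True
while changed:
    changed = False
    for p in list(group):
        for q in list(group):
            c = compose(p, q)
            if c not in group:
                group.add(c)
                changed = True

# The symmetry group of the square has exactly eight elements.
assert len(group) == 8
```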

This is powerful because the study of an object can be reduced partly to the study of its symmetry group. The more symmetry an object has, the more constrained its behavior may be.

Invariants help classify objects. If two objects have different invariants, they cannot be equivalent under the transformations being considered.

For example, two finite graphs with different numbers of connected components cannot be isomorphic. Two vector spaces over the same field with different dimensions cannot be linearly isomorphic.

However, invariants may be incomplete. Two objects can share many invariants and still differ structurally. The degree sequence of a graph is invariant under graph isomorphism, but it does not fully determine the graph.
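Both points can be seen in one computation. A minimal sketch with hypothetical helpers `degree_sequence` and `component_count`: a 6-cycle and a disjoint union of two triangles share the same degree sequence, yet different component counts show they cannot be isomorphic.

```python
# Two graphs on six vertices, both 2-regular.
cycle6 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]}
triangles = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]}

def degree_sequence(edges, n=6):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg)

def component_count(edges, n=6):
    # Count connected components by flood fill over the adjacency lists.
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, count = set(), 0
    for start in range(n):
        if start not in seen:
            count += 1
            stack = [start]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
    return count

# Same degree sequence: this invariant cannot separate the two graphs.
assert degree_sequence(cycle6) == degree_sequence(triangles)
# Different component counts: this invariant proves they are not isomorphic.
assert component_count(cycle6) == 1
assert component_count(triangles) == 2
```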

This means invariance is often a tool for separation, not full classification.

Symmetry also simplifies proofs. If a problem is symmetric under exchanging two variables, one case may represent several others. This is the logic behind phrases such as “without loss of generality.” Such phrases are valid only when the symmetry is real.

For example, if a statement is symmetric in $a$ and $b$, then proving the case $a \leq b$ may be enough, because the case $b \leq a$ follows by exchange.

But if the hypotheses treat $a$ and $b$ differently, the symmetry is absent, and the reduction is invalid.

In analysis and geometry, invariance often identifies the natural form of a theorem. A meaningful statement should not depend on arbitrary coordinates when the underlying object is coordinate-free.

For instance, the length of a vector depends on the inner product, not on the names of the coordinates. A theorem about length should remain true after an orthonormal change of basis.

In physics and applied mathematics, invariance principles are even more explicit. Conservation laws often correspond to symmetries, a correspondence made precise by Noether's theorem: translation symmetry corresponds to conservation of momentum, and time-translation symmetry to conservation of energy. The mathematical pattern is the same in each case: transformations that preserve the laws force quantities that stay constant.

In computation, invariants appear as correctness conditions. A loop invariant is a property that holds before and after each iteration of a loop. It explains why the algorithm works.

For example, in Euclid's algorithm, replacing $(a,b)$ with $(b, a \bmod b)$ preserves the greatest common divisor:

$$
\gcd(a,b) = \gcd(b, a \bmod b).
$$

This invariant is the reason the algorithm returns the correct result.
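The invariant can be asserted explicitly inside the loop. A minimal sketch, using the standard library's `math.gcd` only as an independent reference value; `euclid` is a hypothetical name for the implementation.

```python
import math

def euclid(a, b):
    # The quantity the loop preserves: the gcd of the original inputs.
    invariant = math.gcd(a, b)
    while b != 0:
        a, b = b, a % b
        # The pair (a, b) changes at every step, but gcd(a, b) does not.
        assert math.gcd(a, b) == invariant
    return a

assert euclid(252, 105) == 21
```

When the loop ends, $b = 0$ and $\gcd(a, 0) = a$, so the preserved invariant is exactly the returned value.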

Symmetry and invariance also help choose representation. If a property changes when coordinates, labels, or encodings change, it may be an artifact of representation. If it remains unchanged, it is more likely to be structural.

A practical habit is to ask: what transformations are allowed, and what survives them?

The answer identifies both the symmetry and the invariant content of the problem.

Symmetry describes freedom of representation. Invariance describes stable meaning. Together they explain how mathematics separates what changes from what matters.

