# Chapter 81. Polar Decomposition

Polar decomposition factors a matrix into a direction-preserving part and a positive semidefinite stretching part. It is the matrix analogue of writing a complex number in polar form.

For a complex square matrix \(A\), a polar decomposition has the form

$$
A = UP,
$$

where \(U\) is unitary and \(P\) is Hermitian positive semidefinite. For a real square matrix, \(U\) is orthogonal and \(P\) is symmetric positive semidefinite. The factor \(P\) is unique and is given by \((A^*A)^{1/2}\). If \(A\) is invertible, then \(U\) is also unique.

## 81.1 Analogy with Complex Numbers

Every nonzero complex number \(z\) can be written as

$$
z = u r,
$$

where

$$
|u| = 1,
\qquad
r = |z| > 0.
$$

The number \(u\) records direction or phase. The number \(r\) records magnitude.

Polar decomposition gives the same idea for matrices:

$$
A = UP.
$$

The factor \(U\) plays the role of the unit complex number. It preserves length. The factor \(P\) plays the role of the nonnegative magnitude. It stretches space along orthogonal directions.

This analogy is exact in spirit but richer in structure. A complex number has one magnitude. A matrix may stretch different directions by different amounts.

## 81.2 Positive Semidefinite Factor

The positive factor in the right polar decomposition is

$$
P = (A^*A)^{1/2}.
$$

The matrix \(A^*A\) is Hermitian positive semidefinite, since \(x^*A^*Ax = \|Ax\|^2 \ge 0\) for every vector \(x\). Therefore it has a unique Hermitian positive semidefinite square root. This square root is \(P\).

For real matrices, the formula becomes

$$
P = (A^TA)^{1/2}.
$$

Being positive semidefinite, \(P\) satisfies

$$
x^*Px \ge 0
$$

for every vector \(x\). If \(A\) is invertible, then \(A^*A\) has strictly positive eigenvalues, so \(P\) is positive definite.

## 81.3 Unitary or Orthogonal Factor

If \(A\) is invertible, then \(P\) is invertible, and the unitary factor is

$$
U = AP^{-1}.
$$

To see why \(U\) is unitary, use \((P^{-1})^* = P^{-1}\) (since \(P\) is Hermitian) and compute

$$
U^*U =
P^{-1}A^*AP^{-1}.
$$

Since

$$
P^2 = A^*A,
$$

we get

$$
U^*U =
P^{-1}P^2P^{-1} =
I.
$$

Thus \(U\) preserves inner products and lengths.

For real matrices, the same calculation gives

$$
U^TU = I,
$$

so \(U\) is orthogonal.

## 81.4 Geometric Meaning

For a real square matrix, polar decomposition separates a linear transformation into two parts:

$$
A = UP.
$$

The factor \(P\) stretches or compresses space along orthogonal axes. Since \(P\) is symmetric positive semidefinite, it has an orthonormal eigenbasis and nonnegative eigenvalues.

The factor \(U\) then rotates or reflects the result. Since \(U\) is orthogonal, it preserves distances and angles.

Thus \(A\) acts as:

| Stage | Operation |
|---|---|
| \(P\) | Stretch along orthogonal directions |
| \(U\) | Rotate or reflect |

This gives a clean geometric interpretation. A general linear transformation may shear, stretch, rotate, and reflect. Polar decomposition expresses the same transformation as pure symmetric stretching followed by a rigid motion.

## 81.5 Right and Left Polar Decomposition

There are two common forms.

The right polar decomposition is

$$
A = UP,
\qquad
P = (A^*A)^{1/2}.
$$

The left polar decomposition is

$$
A = P' U,
\qquad
P' = (AA^*)^{1/2}.
$$

Both use the same unitary factor \(U\) when \(A\) is invertible. The difference is whether the positive semidefinite factor acts before or after the unitary factor.

The two positive factors are related by

$$
P' = UPU^*.
$$

Thus \(P\) and \(P'\) have the same eigenvalues, but their eigenvectors live naturally in different coordinate systems.

## 81.6 Relation to the Singular Value Decomposition

Suppose

$$
A = W\Sigma V^*
$$

is a singular value decomposition. Then the right polar decomposition is obtained by grouping the factors as

$$
A = (WV^*)(V\Sigma V^*).
$$

Thus

$$
U = WV^*
$$

and

$$
P = V\Sigma V^*.
$$

The left polar decomposition is

$$
A = (W\Sigma W^*)(WV^*),
$$

so

$$
P' = W\Sigma W^*.
$$

This shows that polar decomposition is closely connected to the SVD. The singular values become the eigenvalues of the positive semidefinite factor. The singular vectors determine the axes of stretching.
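
As a check, the grouping above can be carried out numerically. The following sketch (NumPy; the matrix is an assumed example) extracts both polar decompositions from a single SVD:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # assumed example matrix

W, s, Vh = np.linalg.svd(A)          # A = W @ diag(s) @ Vh
U  = W @ Vh                          # unitary (here orthogonal) factor W V^T
P  = Vh.T @ np.diag(s) @ Vh          # right positive factor V Sigma V^T
Pp = W @ np.diag(s) @ W.T            # left positive factor W Sigma W^T

assert np.allclose(A, U @ P)             # right polar decomposition
assert np.allclose(A, Pp @ U)            # left polar decomposition, same U
assert np.allclose(U.T @ U, np.eye(2))   # U is orthogonal
```

Both positive factors are built from the same diagonal \(\Sigma\), which is why they share eigenvalues.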

## 81.7 Example: Diagonal Matrix

Let

$$
A =
\begin{bmatrix}
3 & 0 \\
0 & 2
\end{bmatrix}.
$$

This matrix is already symmetric positive definite. Therefore

$$
P = A,
\qquad
U = I.
$$

Thus

$$
A = IP.
$$

In this case, the transformation is pure stretching. There is no rotation or reflection.

Now let

$$
A =
\begin{bmatrix}
-3 & 0 \\
0 & 2
\end{bmatrix}.
$$

Then

$$
P =
\begin{bmatrix}
3 & 0 \\
0 & 2
\end{bmatrix},
\qquad
U =
\begin{bmatrix}
-1 & 0 \\
0 & 1
\end{bmatrix}.
$$

The factor \(P\) stretches by \(3\) and \(2\). The factor \(U\) reflects across the second coordinate axis.
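
These factors can be recovered numerically. A small NumPy check (sketch):

```python
import numpy as np

A = np.diag([-3.0, 2.0])

W, s, Vh = np.linalg.svd(A)
U = W @ Vh
P = Vh.T @ np.diag(s) @ Vh

assert np.allclose(P, np.diag([3.0, 2.0]))   # stretch by 3 and 2
assert np.allclose(U, np.diag([-1.0, 1.0]))  # reflection across the second axis
```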

## 81.8 Example: Rotation Followed by Stretching

Let

$$
R =
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix},
\qquad
P =
\begin{bmatrix}
4 & 0 \\
0 & 1
\end{bmatrix}.
$$

Define

$$
A = RP.
$$

Then

$$
A =
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
4 & 0 \\
0 & 1
\end{bmatrix} =
\begin{bmatrix}
0 & -1 \\
4 & 0
\end{bmatrix}.
$$

Since \(R\) is orthogonal and \(P\) is symmetric positive definite,

$$
A = RP
$$

is already a polar decomposition.

The matrix first stretches the first coordinate by \(4\), leaves the second coordinate unchanged, and then rotates the result by \(90^\circ\) counterclockwise.
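
Because \(A\) is invertible, its polar factors are unique, so any numerical method must recover exactly \(R\) and \(P\). A NumPy check (sketch):

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
P = np.array([[4.0, 0.0],
              [0.0, 1.0]])
A = R @ P

W, s, Vh = np.linalg.svd(A)
U  = W @ Vh                     # orthogonal polar factor
P2 = Vh.T @ np.diag(s) @ Vh     # positive polar factor

assert np.allclose(U, R)        # the orthogonal factor is the rotation
assert np.allclose(P2, P)       # the positive factor is the stretch
```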

## 81.9 Computing \(P\) Directly

For an invertible real matrix \(A\), the positive factor is

$$
P = (A^TA)^{1/2}.
$$

This means that \(P\) is the unique symmetric positive definite matrix satisfying

$$
P^2 = A^TA.
$$

If \(A^TA\) has spectral decomposition

$$
A^TA = V\Lambda V^T,
$$

where

$$
\Lambda =
\operatorname{diag}(\lambda_1,\ldots,\lambda_n),
\qquad
\lambda_i > 0,
$$

then

$$
P =
V\Lambda^{1/2}V^T.
$$

Here

$$
\Lambda^{1/2} =
\operatorname{diag}
(\sqrt{\lambda_1},\ldots,\sqrt{\lambda_n}).
$$

Once \(P\) is computed, the orthogonal factor is

$$
U = AP^{-1}.
$$
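
The recipe above translates directly into code. A minimal sketch for invertible real matrices (the function name and example matrix are illustrative):

```python
import numpy as np

def polar_via_spectral(A):
    """Right polar factors of an invertible real matrix via A^T A = V Lambda V^T."""
    lam, V = np.linalg.eigh(A.T @ A)         # eigenvalues lam > 0 when A is invertible
    P = V @ np.diag(np.sqrt(lam)) @ V.T      # P = V Lambda^{1/2} V^T
    U = A @ np.linalg.inv(P)                 # U = A P^{-1}
    return U, P

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])                   # assumed example matrix
U, P = polar_via_spectral(A)

assert np.allclose(A, U @ P)
assert np.allclose(U.T @ U, np.eye(2))       # U is orthogonal
assert np.allclose(P, P.T)                   # P is symmetric
```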

## 81.10 A Complete Two by Two Example

Let

$$
A =
\begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix}.
$$

First compute

$$
A^TA =
\begin{bmatrix}
1 & 0 \\
1 & 2
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix} =
\begin{bmatrix}
1 & 1 \\
1 & 5
\end{bmatrix}.
$$

The positive factor is

$$
P = (A^TA)^{1/2}.
$$

To compute it explicitly, diagonalize \(A^TA\). Its characteristic polynomial is

$$
\det
\begin{bmatrix}
1-\lambda & 1 \\
1 & 5-\lambda
\end{bmatrix} =
(1-\lambda)(5-\lambda)-1.
$$

Expanding,

$$
(1-\lambda)(5-\lambda)-1 =
\lambda^2 - 6\lambda + 4.
$$

Thus

$$
\lambda = 3 \pm \sqrt{5}.
$$

The singular values of \(A\) are

$$
\sigma_1 = \sqrt{3+\sqrt{5}},
\qquad
\sigma_2 = \sqrt{3-\sqrt{5}}.
$$

The factor \(P\) has these eigenvalues. The orthogonal factor is then obtained from

$$
U = AP^{-1}.
$$

This example shows the general pattern. The positive factor is obtained from \(A^TA\). The orthogonal factor is what remains after removing the stretching.
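
The arithmetic above can be verified numerically; this sketch checks the eigenvalues \(3 \pm \sqrt{5}\) and the resulting singular values:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

lam = np.linalg.eigvalsh(A.T @ A)        # eigenvalues of A^T A, ascending order
assert np.allclose(lam, [3 - np.sqrt(5), 3 + np.sqrt(5)])

s = np.linalg.svd(A, compute_uv=False)   # singular values of A
assert np.allclose(np.sort(s), np.sqrt(lam))
```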

## 81.11 Singular Matrices

If \(A\) is singular, the positive factor

$$
P = (A^*A)^{1/2}
$$

still exists and is unique. However, \(P\) is not invertible, so the formula

$$
U = AP^{-1}
$$

cannot be used.

In this case, the role of \(U\) is replaced by a partial isometry in the most general formulation. For square matrices, one may still choose a unitary \(U\) so that

$$
A = UP,
$$

but this \(U\) may not be unique.

The nonuniqueness occurs because directions in the null space of \(A\) are collapsed to zero. On those directions, the positive factor \(P\) loses information, so the unitary factor has freedom that does not affect the product.
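
To make the nonuniqueness concrete, here is a minimal sketch (the matrices are assumed examples): two different orthogonal factors complete the same polar decomposition of a singular matrix, because they differ only on its null space.

```python
import numpy as np

A = np.diag([1.0, 0.0])          # singular matrix
P = np.diag([1.0, 0.0])          # (A^T A)^{1/2}, still unique

U1 = np.eye(2)
U2 = np.diag([1.0, -1.0])        # differs from U1 only on the null space of A

for U in (U1, U2):
    assert np.allclose(A, U @ P)             # both give A = U P
    assert np.allclose(U.T @ U, np.eye(2))   # both are orthogonal
```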

## 81.12 Rectangular Matrices

Polar decomposition also extends to rectangular matrices.

For

$$
A \in \mathbb{C}^{m \times n},
$$

one may write

$$
A = UP,
$$

where \(P\) is an \(n \times n\) Hermitian positive semidefinite matrix and \(U\) is \(m \times n\). When \(m \ge n\), \(U\) can be chosen with orthonormal columns, so that \(U^*U = I\); in general, \(U\) is a partial isometry. The positive factor is still

$$
P = (A^*A)^{1/2}.
$$

This rectangular form is useful when a linear map goes between spaces of different dimensions. The positive factor acts on the input space, while the partial isometry maps the stretched input directions into the output space.
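
A thin (reduced) SVD produces this rectangular form directly. A sketch with an assumed \(3 \times 2\) example:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [2.0, 0.0]])        # 3 x 2, full column rank (assumed example)

W, s, Vh = np.linalg.svd(A, full_matrices=False)  # thin SVD: W is 3 x 2
U = W @ Vh                        # 3 x 2 factor with orthonormal columns
P = Vh.T @ np.diag(s) @ Vh        # 2 x 2 symmetric positive semidefinite

assert np.allclose(A, U @ P)
assert np.allclose(U.T @ U, np.eye(2))   # semi-unitary: U^T U = I
```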

## 81.13 Polar Decomposition and Normal Matrices

A matrix \(A\) is normal if

$$
A^*A = AA^*.
$$

Suppose

$$
A = UP
$$

is its polar decomposition. A useful characterization is that \(A\) is normal exactly when \(U\) and \(P\) commute:

$$
UP = PU.
$$

When this happens, the unitary part and the positive part share a compatible spectral structure. The action of \(A\) can then be understood as phase rotation and magnitude scaling along common orthogonal directions.

For nonnormal matrices, the unitary and positive factors do not generally commute. Their order matters.
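
A small numerical check of this characterization (the helper and example matrices are illustrative):

```python
import numpy as np

def polar_factors(A):
    """Right polar factors via the SVD."""
    W, s, Vh = np.linalg.svd(A)
    return W @ Vh, Vh.T @ np.diag(s) @ Vh

N = np.array([[1.0, -2.0],
              [2.0,  1.0]])       # normal: N^T N = N N^T
U, P = polar_factors(N)
assert np.allclose(N.T @ N, N @ N.T)
assert np.allclose(U @ P, P @ U)          # factors commute

M = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # not normal
U, P = polar_factors(M)
assert not np.allclose(U @ P, P @ U)      # order matters
```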

## 81.14 Determinant

For a square matrix with polar decomposition

$$
A = UP,
$$

the determinant satisfies

$$
\det(A)=\det(U)\det(P).
$$

If \(U\) is unitary, then

$$
|\det(U)| = 1.
$$

If \(P\) is positive semidefinite, then

$$
\det(P) \ge 0.
$$

When \(A\) is invertible,

$$
\det(P) = |\det(A)|.
$$

Thus the polar decomposition separates the determinant into phase and magnitude:

$$
\det(A) =
\det(U)\,|\det(A)|.
$$

This mirrors the polar form of a complex number.
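
Using the reflection example from Section 81.7, these determinant identities can be checked directly:

```python
import numpy as np

A = np.diag([-3.0, 2.0])
U = np.diag([-1.0, 1.0])    # orthogonal polar factor of A
P = np.diag([3.0, 2.0])     # positive polar factor of A

assert np.isclose(np.linalg.det(A), np.linalg.det(U) * np.linalg.det(P))
assert np.isclose(abs(np.linalg.det(U)), 1.0)
assert np.isclose(np.linalg.det(P), abs(np.linalg.det(A)))
```

Here \(\det(A) = -6\) splits into the sign \(\det(U) = -1\) and the magnitude \(\det(P) = 6\).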

## 81.15 Best Orthogonal Approximation

Polar decomposition also has an approximation property. For a nonsingular real matrix \(A\), the orthogonal polar factor \(U\) is the closest orthogonal matrix to \(A\) in the Frobenius norm.

That is, among orthogonal matrices \(Q\), the polar factor solves

$$
\min_{Q^TQ=I} \|A-Q\|_F.
$$

This is the orthogonal Procrustes problem in one of its standard forms. It appears in shape matching, rigid registration, computer graphics, continuum mechanics, and numerical analysis.

The reason is again the SVD. If

$$
A = W\Sigma V^T,
$$

then

$$
U = WV^T.
$$

The factor \(U\) keeps the rotational part of \(A\) and discards the nonuniform stretching.
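
The minimization property can be probed empirically by comparing \(U\) against randomly generated orthogonal matrices (the example matrix and random seed are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # assumed example matrix

W, s, Vh = np.linalg.svd(A)
U = W @ Vh                        # candidate nearest orthogonal matrix

best = np.linalg.norm(A - U)      # Frobenius norm of the residual
for _ in range(1000):
    # QR of a Gaussian matrix yields a random orthogonal Q
    Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
    assert np.linalg.norm(A - Q) >= best - 1e-9
```

No random orthogonal matrix beats the polar factor, consistent with the Procrustes result.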

## 81.16 Relation to Cholesky and Square Roots

Cholesky decomposition factors a symmetric positive definite matrix as

$$
B = LL^T.
$$

Polar decomposition uses the symmetric positive semidefinite square root

$$
P = (A^TA)^{1/2}.
$$

These square roots serve different purposes.

| Object | Role |
|---|---|
| Cholesky factor \(L\) | Triangular square root of a positive definite matrix |
| Polar factor \(P\) | Symmetric positive semidefinite square root of \(A^TA\) |
| SVD singular values | Eigenvalues of \(P\) |
| Orthogonal factor \(U\) | Rigid part of \(A\) |

The Cholesky factor is triangular and efficient for solving systems. The polar positive factor is symmetric and geometrically canonical.

## 81.17 Numerical Computation

A common way to compute the polar decomposition is through the SVD:

$$
A = W\Sigma V^*.
$$

Then

$$
U = WV^*,
\qquad
P = V\Sigma V^*.
$$

This method is reliable but may be more expensive than necessary if only the polar factor is needed.

There are also iterative methods for approximating \(U\). One classical method, the Newton iteration, starts with

$$
U_0 = A
$$

and repeats

$$
U_{k+1} =
\frac{1}{2}
\left(
U_k + (U_k^*)^{-1}
\right).
$$

For suitable nonsingular matrices, this iteration drives the singular values toward \(1\), while preserving the singular vector structure. The limit is the unitary polar factor. More advanced scaled and higher-order variants are used in practical numerical algorithms.
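
A minimal implementation of this iteration for real nonsingular matrices (the function name, tolerance, and iteration cap are illustrative choices):

```python
import numpy as np

def polar_unitary_newton(A, tol=1e-12, max_iter=50):
    """Newton iteration U_{k+1} = (U_k + (U_k^*)^{-1}) / 2 for the polar factor."""
    U = np.array(A, dtype=float)
    for _ in range(max_iter):
        # Real case: (U^*)^{-1} is the transpose of the inverse
        U_next = 0.5 * (U + np.linalg.inv(U).T)
        if np.linalg.norm(U_next - U) < tol:
            return U_next
        U = U_next
    return U

A = np.array([[0.0, -1.0],
              [4.0,  0.0]])       # the matrix from Section 81.8
U = polar_unitary_newton(A)

assert np.allclose(U.T @ U, np.eye(2))    # limit is orthogonal
assert np.allclose(U, [[0.0, -1.0],
                       [1.0,  0.0]])      # matches the rotation factor R
```

Each step averages the current iterate with its inverse transpose, which maps every singular value \(\sigma\) to \((\sigma + 1/\sigma)/2\) and so drives them all toward \(1\).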

## 81.18 Applications

Polar decomposition appears whenever one wants to separate rotation from strain or shape change.

| Area | Use |
|---|---|
| Continuum mechanics | Separate deformation into rotation and stretch |
| Computer graphics | Extract rotation from an affine transform |
| Robotics | Normalize near-rotation matrices |
| Numerical linear algebra | Compute matrix sign functions and orthogonal factors |
| Optimization | Project matrices onto the orthogonal group |
| Statistics | Analyze covariance-related transformations |
| Quantum mechanics | Separate unitary and positive operator parts |

The common theme is structural separation. The unitary or orthogonal factor represents orientation. The positive semidefinite factor represents magnitude and stretching.

## 81.19 Comparison with SVD

| Feature | Polar decomposition | SVD |
|---|---|---|
| Form | \(A=UP\) | \(A=W\Sigma V^*\) |
| Orthogonal or unitary factors | One | Two |
| Positive factor | Matrix \(P\) | Diagonal \(\Sigma\) |
| Always exists | Yes | Yes |
| Shows singular values directly | Through eigenvalues of \(P\) | Directly on diagonal |
| Main use | Rotation-stretch separation | Rank, conditioning, low-rank structure |

SVD gives more detailed information. Polar decomposition gives a simpler geometric split. In fact, polar decomposition can be derived directly from SVD by combining the two unitary factors.

## 81.20 Summary

Polar decomposition factors a matrix into a unitary or orthogonal part and a positive semidefinite part:

$$
A = UP.
$$

The positive factor is

$$
P = (A^*A)^{1/2}.
$$

For invertible \(A\), the unitary factor is uniquely determined by

$$
U = AP^{-1}.
$$

For real matrices, this becomes an orthogonal factor times a symmetric positive semidefinite factor.

Geometrically, polar decomposition says that a linear transformation can be viewed as a stretch along orthogonal directions followed by a rotation or reflection. Algebraically, it is closely tied to the singular value decomposition. Computationally, it is useful when one wants the nearest orthogonal factor, a rotation-stretch separation, or a canonical positive factor associated with a matrix.
