Numerical linear algebra studies both mathematical problems and computational algorithms. These are different objects.
A mathematical problem may be sensitive to small perturbations in data. An algorithm may introduce additional error during computation. Conditioning measures the sensitivity of the problem itself. Stability measures the sensitivity introduced by the algorithm.
These two ideas form the conceptual foundation of numerical analysis.
A computation may fail for two fundamentally different reasons:
| Cause | Meaning |
|---|---|
| Ill-conditioning | The problem itself amplifies perturbations |
| Instability | The algorithm amplifies rounding error |
A stable algorithm cannot completely overcome an ill-conditioned problem. Conversely, a well-conditioned problem may still produce poor results if solved using an unstable algorithm.
Understanding this distinction is essential in matrix computations, eigenvalue problems, least squares methods, optimization, and scientific computing.
86.1 Perturbation of Data
Real computations never use exact data.
Measured quantities contain experimental error. Floating point arithmetic introduces rounding error. Stored matrices contain truncation error. Communication and conversion processes introduce additional perturbations.
Suppose the exact problem is

Ax=b.

A computer instead solves

(A+\Delta A)\,\hat{x}=b+\Delta b.

The quantities \Delta A and \Delta b represent perturbations.
The central question is:
How much does small perturbation in the input affect the output?
This question defines conditioning.
86.2 Absolute and Relative Sensitivity
Suppose a function f maps data x to output y=f(x).

A perturbation changes the input from x to x+\delta x.

The resulting output perturbation is

\delta y=f(x+\delta x)-f(x).

Absolute sensitivity measures:

\frac{\|\delta y\|}{\|\delta x\|}.

Relative sensitivity measures:

\frac{\|\delta y\|/\|y\|}{\|\delta x\|/\|x\|}.
Relative measures are usually more important because they account for scale.
86.3 Condition Number
The condition number quantifies sensitivity.
Suppose f is differentiable. The relative condition number is approximately

\kappa(x)=\frac{\|f'(x)\|\,\|x\|}{\|f(x)\|}.
Large condition numbers indicate high sensitivity.
Small condition numbers indicate robustness.
A condition number near 1 means the problem is well-conditioned.
A very large condition number means small perturbations may produce large output changes.
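As a sketch, the relative condition number can be estimated numerically with a finite-difference derivative (the helper name and test functions here are illustrative assumptions, not from the text):

```python
import math

def rel_condition(f, x, h=1e-6):
    """Estimate kappa(x) = |f'(x)| * |x| / |f(x)| via a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2.0 * h)
    return abs(fprime) * abs(x) / abs(f(x))

# sqrt is well-conditioned: kappa(x) = 1/2 for every x > 0
kappa_sqrt = rel_condition(math.sqrt, 4.0)

# subtracting a nearby constant is ill-conditioned: catastrophic cancellation
kappa_sub = rel_condition(lambda t: t - 0.9999, 1.0)
```

The second call shows how an innocuous-looking operation can carry a condition number of roughly 10^4 near its cancellation point.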
86.4 Conditioning of Linear Systems
Consider the linear system

Ax=b.

Suppose A is invertible.

The solution is

x=A^{-1}b.

Perturbing b gives

A(x+\delta x)=b+\delta b.

Subtracting the original equation:

A\,\delta x=\delta b.

Thus

\delta x=A^{-1}\delta b.

Taking norms:

\|\delta x\|\le\|A^{-1}\|\,\|\delta b\|.

Dividing by \|x\| and using

\|b\|\le\|A\|\,\|x\|,

we obtain

\frac{\|\delta x\|}{\|x\|}\le\|A\|\,\|A^{-1}\|\,\frac{\|\delta b\|}{\|b\|}.
The quantity

\kappa(A)=\|A\|\,\|A^{-1}\|

is the condition number of the matrix.
This is one of the central quantities in numerical linear algebra.
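A minimal check of the definition with NumPy (the 2-norm and the particular matrix are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# kappa(A) = ||A|| * ||A^{-1}||, here in the spectral (2-)norm
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# NumPy's built-in condition number agrees
kappa_builtin = np.linalg.cond(A, 2)
```

In practice one calls `np.linalg.cond` directly rather than forming the inverse.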
86.5 Interpretation of the Condition Number
The condition number measures how strongly the matrix amplifies relative perturbations.
If

\kappa(A)\approx 1,

then perturbations remain small.

If

\kappa(A)\gg 1,

then small perturbations may become large.
For example:
| Condition number | Interpretation |
|---|---|
| \kappa \approx 1 | perfectly conditioned |
| \kappa \approx 10^3 | mild sensitivity |
| \kappa \approx 10^{10} | severe sensitivity |
| \kappa \approx 10^{16} | essentially singular in double precision |
An ill-conditioned matrix behaves numerically like a nearly singular matrix.
86.6 Singular Matrices and Infinite Conditioning
If A is singular, then A^{-1} does not exist.

The condition number is therefore infinite:

\kappa(A)=\infty.
Near-singular matrices have very large condition numbers.
Such systems are numerically dangerous because tiny perturbations may drastically change solutions.
86.7 Geometric Interpretation
A matrix transforms vectors geometrically.
The condition number measures how unevenly the transformation stretches space.
Suppose A stretches one direction strongly and another direction weakly.
Then nearby vectors may become difficult to distinguish after inversion.
Geometrically:
| Property | Effect |
|---|---|
| Uniform scaling | well-conditioned |
| Extreme distortion | ill-conditioned |
Conditioning measures geometric distortion.
86.8 Singular Values and Conditioning
For the Euclidean norm,

\kappa_2(A)=\frac{\sigma_{\max}}{\sigma_{\min}}

where:

| Symbol | Meaning |
|---|---|
| \sigma_{\max} | largest singular value |
| \sigma_{\min} | smallest singular value |
Thus conditioning depends on the ratio between largest and smallest scaling directions.
If the smallest singular value approaches zero, the matrix becomes nearly singular.
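The ratio can be verified directly from the singular values; a diagonal matrix is chosen here so the stretching factors are visible by inspection:

```python
import numpy as np

# a matrix that stretches one direction by 3 and another by 1e-6
A = np.diag([3.0, 1e-6])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa2 = sigma[0] / sigma[-1]                # sigma_max / sigma_min
```

For this matrix kappa2 is about 3 × 10^6, matching `np.linalg.cond(A, 2)`.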
86.9 Example of Ill-Conditioning
Consider a linear system whose matrix has nearly linearly dependent rows, and hence a very small determinant.

A tiny perturbation in b may therefore produce a large change in the solution.
This system is ill-conditioned.
Nearly dependent rows or columns are a common source of ill-conditioning.
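Since the particular matrix of the example is not reproduced here, the following hypothetical 2×2 system with nearly dependent rows illustrates the same effect:

```python
import numpy as np

# hypothetical ill-conditioned system (illustrative stand-in)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])          # exact solution x = (1, 1)

x = np.linalg.solve(A, b)

# perturb one entry of b by 1e-5 and re-solve
b_pert = b + np.array([0.0, 1e-5])
x_pert = np.linalg.solve(A, b_pert)

# how strongly the perturbation was amplified
amplification = np.linalg.norm(x_pert - x) / 1e-5
```

The perturbation of size 10^{-5} moves the solution by roughly 0.14, an amplification of about 10^4.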
86.10 Hilbert Matrices
A classical example is the Hilbert matrix:

H_{ij}=\frac{1}{i+j-1}.

For example,

H_3=\begin{pmatrix}1&\frac{1}{2}&\frac{1}{3}\\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4}\\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5}\end{pmatrix}.
Hilbert matrices are symmetric and positive definite, yet extremely ill-conditioned.
Their condition numbers grow rapidly with dimension.
They are often used to test numerical algorithms.
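The growth can be observed directly with a small NumPy construction (SciPy also provides the same matrix as `scipy.linalg.hilbert`):

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H[i, j] = 1 / (i + j + 1) with 0-based indices."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

# condition numbers explode as the dimension grows
conds = {n: np.linalg.cond(hilbert(n)) for n in (3, 6, 9, 12)}
```

Already at n = 12 the condition number approaches the reciprocal of double-precision rounding error, so the matrix is numerically singular.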
86.11 Forward Error and Backward Error
Suppose \hat{x} is the computed solution.

Forward error measures:

\|\hat{x}-x\|.

Backward error measures the smallest perturbation that makes \hat{x} exact.

That is, we seek perturbations \Delta A and \Delta b satisfying

(A+\Delta A)\,\hat{x}=b+\Delta b.
Forward error measures output accuracy.
Backward error measures problem perturbation.
These are different quantities.
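The two quantities can be computed side by side. The normwise backward-error expression below is the standard Rigal–Gaches formula, used here under the assumption of Euclidean norms; the matrix is an arbitrary well-conditioned example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x_exact = np.array([1.0, -2.0])
b = A @ x_exact

x_hat = np.linalg.solve(A, b)        # computed solution

# forward error: distance from the exact solution
forward_err = np.linalg.norm(x_hat - x_exact) / np.linalg.norm(x_exact)

# backward error: size of the smallest perturbation making x_hat exact
r = b - A @ x_hat
backward_err = np.linalg.norm(r) / (
    np.linalg.norm(A, 2) * np.linalg.norm(x_hat) + np.linalg.norm(b))
```

For this well-conditioned system both errors sit near machine precision; they diverge only when conditioning is poor.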
86.12 Stability
An algorithm is stable if it does not amplify rounding errors excessively.
A backward stable algorithm produces the exact solution to a nearby problem.
Formally, the computed solution satisfies

(A+\Delta A)\,\hat{x}=b+\Delta b

with small perturbations \Delta A and \Delta b.
Backward stability is the standard notion of stability in numerical linear algebra.
86.13 Stability and Conditioning Together
Conditioning and stability interact through the approximate relationship
\text{forward error} \approx \text{condition number} \times \text{backward error}
This equation explains much of numerical computation.
Even a backward stable algorithm may produce large forward error if the condition number is enormous.
Thus:
| Situation | Result |
|---|---|
| Well-conditioned + stable algorithm | accurate solution |
| Ill-conditioned + stable algorithm | limited achievable accuracy |
| Well-conditioned + unstable algorithm | avoidable numerical error |
| Ill-conditioned + unstable algorithm | catastrophic behavior |
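The rule of thumb can be checked on an ill-conditioned system: a backward stable solver keeps the backward error near rounding level, yet the forward error grows with the condition number. The Hilbert matrix serves as the stock ill-conditioned example:

```python
import numpy as np

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

A = hilbert(10)                          # kappa is roughly 1e13
x_exact = np.ones(10)
b = A @ x_exact

x_hat = np.linalg.solve(A, b)            # backward stable (pivoted LU, LAPACK)

forward_err = np.linalg.norm(x_hat - x_exact) / np.linalg.norm(x_exact)

# heuristic bound: condition number times machine epsilon
bound = np.linalg.cond(A) * np.finfo(float).eps
```

Many digits are genuinely lost, but the loss is consistent with the estimate "forward error ≈ condition number × backward error".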
86.14 Stability of Gaussian Elimination
Gaussian elimination without pivoting may be unstable.
Small pivots generate large multipliers:

m_{ik}=\frac{a_{ik}}{a_{kk}}.
Large multipliers amplify rounding errors.
Partial pivoting improves stability by choosing larger pivots.
The resulting algorithm, LU factorization with partial pivoting, usually behaves stably in practice.
This is one of the most important algorithms in numerical linear algebra.
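The instability can be seen on the textbook 2×2 example with a tiny pivot (a standard illustration, not taken from this text):

```python
import numpy as np

# tiny pivot in position (1, 1); exact solution is approximately x = (1, 1)
eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# elimination WITHOUT pivoting: multiplier m = 1/eps = 1e20
m = A[1, 0] / A[0, 0]
u22 = A[1, 1] - m * A[0, 1]     # fl(1 - 1e20) = -1e20: the "1" is lost
y2 = b[1] - m * b[0]            # fl(2 - 1e20) = -1e20
x2 = y2 / u22                   # 1.0 (still fine)
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # catastrophic: 0 instead of ~1
x_no_pivot = np.array([x1, x2])

# a pivoted solver (LAPACK via NumPy) gets both components right
x_pivot = np.linalg.solve(A, b)
```

Swapping the two rows first makes every multiplier at most 1 in magnitude, which is exactly what partial pivoting guarantees.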
86.15 Orthogonal Transformations and Stability
Orthogonal matrices preserve norms:

\|Qx\|_2=\|x\|_2.

Thus

\kappa_2(Q)=1.
Orthogonal transformations therefore do not amplify errors significantly.
For this reason:
| Method | Numerical behavior |
|---|---|
| Householder QR | highly stable |
| Givens rotations | stable |
| Classical Gram-Schmidt | less stable |
| Modified Gram-Schmidt | improved stability |
Orthogonality is one of the primary tools for designing stable algorithms.
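The table can be checked by measuring loss of orthogonality, the size of QᵀQ − I, on an ill-conditioned matrix. Classical Gram-Schmidt is written out below; `np.linalg.qr` uses Householder reflections via LAPACK. The Hilbert matrix is an illustrative test case:

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt: numerically fragile for ill-conditioned A."""
    n = A.shape[1]
    Q = np.zeros_like(A)
    for k in range(n):
        # subtract projections onto all previously computed columns at once
        v = A[:, k] - Q[:, :k] @ (Q[:, :k].T @ A[:, k])
        Q[:, k] = v / np.linalg.norm(v)
    return Q

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

A = hilbert(10)                          # ill-conditioned test matrix
Q_cgs = classical_gram_schmidt(A)
Q_house, _ = np.linalg.qr(A)             # Householder-based

I = np.eye(10)
loss_cgs = np.linalg.norm(Q_cgs.T @ Q_cgs - I)
loss_house = np.linalg.norm(Q_house.T @ Q_house - I)
```

Householder QR stays orthogonal to rounding level, while classical Gram-Schmidt loses orthogonality by many orders of magnitude on the same input.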
86.16 Residuals
The residual measures how closely the computed solution satisfies the equation.
For

Ax=b,

the residual is

r=b-A\hat{x}.
Small residual does not necessarily imply small forward error.
If the matrix is ill-conditioned, even a tiny residual may correspond to large solution error.
Residuals therefore measure backward accuracy more directly than forward accuracy.
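A concrete check, again using the Hilbert matrix as a stock ill-conditioned example: the relative residual stays at rounding level while the solution error is many orders of magnitude larger:

```python
import numpy as np

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

A = hilbert(10)
x_exact = np.ones(10)
b = A @ x_exact

x_hat = np.linalg.solve(A, b)

rel_residual = np.linalg.norm(b - A @ x_hat) / np.linalg.norm(b)
rel_forward = np.linalg.norm(x_hat - x_exact) / np.linalg.norm(x_exact)
```

A tiny residual certifies backward accuracy; it says little about forward accuracy when the matrix is ill-conditioned.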
86.17 Sensitivity of Eigenvalues
Eigenvalue problems have their own conditioning theory.
Suppose

Ax=\lambda x.

Perturbing A changes eigenvalues and eigenvectors.
Symmetric matrices are relatively well-conditioned.
Non-normal matrices may be extremely sensitive.
Small perturbations may dramatically change eigenvalues.
This sensitivity is related to pseudospectra and non-orthogonal eigenvectors.
86.18 Least Squares Conditioning
Least squares problems solve

\min_x \|Ax-b\|_2.
The conditioning depends strongly on singular values.
Normal equations:

A^TAx=A^Tb

square the condition number:

\kappa_2(A^TA)=\kappa_2(A)^2.
This may substantially worsen numerical behavior.
QR-based least squares methods are therefore preferred.
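A sketch of the difference, using an assumed tall matrix with nearly dependent columns (all problem data here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# tall matrix whose last two columns are nearly dependent
m = 50
t = np.linspace(0.0, 1.0, m)
A = np.column_stack([np.ones(m), t, t + 1e-7 * rng.standard_normal(m)])
x_exact = np.array([1.0, 2.0, 3.0])
b = A @ x_exact

# normal equations: condition number of A^T A is kappa(A)^2
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR-based solve: works with A itself, so only kappa(A) enters
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err_normal = np.linalg.norm(x_normal - x_exact)
err_qr = np.linalg.norm(x_qr - x_exact)
```

On this problem the normal equations lose roughly twice as many digits as the QR approach, exactly as the squared condition number predicts.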
86.19 Componentwise Conditioning
Normwise conditioning measures global perturbations.
Sometimes individual entries matter more.
Componentwise analysis studies perturbations entry by entry.
This is important in sparse matrices, scaling problems, and structured computations.
86.20 Scaling
Poor scaling often worsens conditioning.
For example, rows with vastly different magnitudes may produce unstable elimination steps.
Scaling attempts to balance magnitudes.
Typical approaches include:
| Method | Goal |
|---|---|
| Row scaling | normalize rows |
| Column scaling | normalize variables |
| Diagonal equilibration | balance matrix entries |
Good scaling improves practical numerical behavior.
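A sketch of row equilibration, dividing each row by its largest entry (the badly scaled matrix is an illustrative assumption):

```python
import numpy as np

A = np.array([[1.0e10, 2.0e10],
              [3.0,    4.0]])      # rows differ in magnitude by 1e10

# row equilibration: divide each row by its largest absolute entry
# (the right-hand side must be scaled the same way to keep solutions equal)
d = np.max(np.abs(A), axis=1)
A_scaled = A / d[:, None]

kappa_before = np.linalg.cond(A)
kappa_after = np.linalg.cond(A_scaled)
```

Here equilibration drops the condition number from about 10^10 to order 10, so the apparent ill-conditioning was purely a scaling artifact.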
86.21 Iterative Refinement
Iterative refinement improves computed solutions.
Suppose \hat{x} is an approximate solution.

Compute the residual:

r=b-A\hat{x}.

Solve the correction equation:

Ad=r.

Update:

\hat{x}\leftarrow\hat{x}+d.
When residuals are computed accurately, refinement may recover substantial precision.
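The steps above can be sketched in mixed precision: solve in single precision but compute the residual in double, the standard setting in which refinement provably helps. The test matrix and precisions are illustrative assumptions; in practice the single-precision factorization would be computed once and reused:

```python
import numpy as np

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

A = hilbert(5)                       # moderately ill-conditioned
x_exact = np.ones(5)
b = A @ x_exact

# working precision: float32, simulating a low-precision solver
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
err_initial = np.linalg.norm(x - x_exact)

# refinement sweeps: accurate (float64) residual, cheap (float32) correction
for _ in range(5):
    r = b - A @ x                                    # residual in double
    d = np.linalg.solve(A32, r.astype(np.float32))   # correction A d = r
    x = x + d.astype(np.float64)

err_refined = np.linalg.norm(x - x_exact)
```

Each sweep multiplies the error by roughly kappa times the single-precision unit roundoff, so the iteration converges toward double-precision accuracy.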
86.22 Stability of Matrix Multiplication
Matrix multiplication is numerically stable in the normwise sense.

The computed product satisfies

\mathrm{fl}(AB)=AB+E

with

\|E\|\le c\,n\,u\,\|A\|\,\|B\|,

where u is the unit roundoff and c is a modest constant.
Although many arithmetic operations occur, the accumulated error remains controlled.
Stable matrix multiplication is one reason dense linear algebra is reliable in practice.
86.23 Conditioning in Applications
Conditioning appears throughout applied mathematics.
| Application | Source of ill-conditioning |
|---|---|
| Data fitting | nearly dependent features |
| Inverse problems | insufficient information |
| Differential equations | stiff systems |
| Optimization | nearly singular Hessians |
| Machine learning | multicollinearity |
| Signal reconstruction | noise amplification |
Ill-conditioning is often unavoidable because it reflects intrinsic structure of the problem.
86.24 Limits of Numerical Computation
No algorithm can overcome fundamentally ill-conditioned problems completely.
Suppose perturbations of size

10^{-16}

(the level of double-precision rounding) produce solution changes of order

1,

as happens when \kappa(A)\approx 10^{16}.
Then double precision arithmetic fundamentally lacks sufficient information to determine the solution accurately.
This is not algorithmic failure. It is mathematical sensitivity.
Conditioning therefore defines intrinsic limits on achievable accuracy.
86.25 Summary
Conditioning measures sensitivity of the mathematical problem. Stability measures sensitivity introduced by the algorithm.
The condition number for a matrix is
\kappa(A)=\|A\|\,\|A^{-1}\|
and governs amplification of perturbations.
Important ideas include:
| Concept | Meaning |
|---|---|
| Condition number | sensitivity of the problem |
| Stability | resistance to rounding error |
| Forward error | difference from exact solution |
| Backward error | perturbation needed for exactness |
| Residual | equation mismatch |
| Ill-conditioning | intrinsic amplification of perturbations |
Modern numerical linear algebra seeks algorithms that are:
- backward stable,
- computationally efficient,
- robust under floating point arithmetic,
- effective even for large-scale problems.
Conditioning determines what accuracy is possible. Stability determines whether the algorithm achieves it.