129.1 Introduction
Linear algebra can be developed over any field.
The real numbers and complex numbers are the most common scalar systems, but they are not the only ones. A vector space may be defined over the rational numbers, a finite field, a field of rational functions, a number field, or any other field.
The abstract definition of a vector space only requires a field of scalars. Once the field is fixed, the usual notions of vector addition, scalar multiplication, span, basis, dimension, linear maps, matrices, rank, nullity, and determinants remain valid. A basis is a linearly independent spanning set over the chosen field, and every vector has a unique expression as a finite linear combination of basis vectors.
What changes is not the formal structure of linear algebra. What changes is the arithmetic and the behavior of polynomials, eigenvalues, inner products, and geometry.
129.2 Fields as Scalar Systems
A field is a set with addition, subtraction, multiplication, and division by nonzero elements.
Examples include the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the finite fields $\mathbb{F}_p$ for a prime $p$, and the rational function field $\mathbb{Q}(t)$.
Here $\mathbb{Q}(t)$ denotes the field of rational functions in one variable with rational coefficients.
Once a field $F$ has been chosen, the elements of $F$ are called scalars. A vector space over $F$ is also called an $F$-vector space.
The phrase "$V$ is a vector space over $F$" means that vectors in $V$ may be added to each other and multiplied by scalars from $F$.
The same set can have different dimensions over different fields. For example, the set of complex numbers $\mathbb{C}$
has dimension $1$ over $\mathbb{C}$, but dimension $2$ over $\mathbb{R}$. It has infinite dimension over $\mathbb{Q}$.
Thus the field is part of the data of a vector space.
129.3 Vector Spaces over a Field
Let $F$ be a field. An $F$-vector space is a set $V$ with two operations:
$$+ : V \times V \to V \qquad \text{(vector addition)}$$
and
$$\cdot : F \times V \to V \qquad \text{(scalar multiplication)}$$
These operations satisfy the vector space axioms:
| Axiom | Formula |
|---|---|
| Associativity of addition | $(u + v) + w = u + (v + w)$ |
| Commutativity of addition | $u + v = v + u$ |
| Additive identity | $v + 0 = v$ |
| Additive inverse | $v + (-v) = 0$ |
| Compatibility of scalar multiplication | $a(bv) = (ab)v$ |
| Scalar identity | $1 \cdot v = v$ |
| Distributivity over vector addition | $a(u + v) = au + av$ |
| Distributivity over scalar addition | $(a + b)v = av + bv$ |
No order, distance, angle, length, or topology is required.
Those structures may be added later, but they are not part of the definition of a vector space.
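These axioms can be verified exhaustively for a small concrete case. The sketch below (plain Python; the helper names `add` and `smul` are ours, not standard) checks several of the axioms over the space $\mathbb{F}_3^2$, whose nine vectors can be enumerated directly:

```python
from itertools import product

p = 3
vectors = list(product(range(p), repeat=2))  # the 9 vectors of F_3^2

def add(u, v):
    """Componentwise addition mod p."""
    return tuple((a + b) % p for a, b in zip(u, v))

def smul(c, v):
    """Scalar multiplication mod p."""
    return tuple((c * a) % p for a in v)

# Exhaustive spot-check of some axioms over this finite space.
for u in vectors:
    for v in vectors:
        assert add(u, v) == add(v, u)  # commutativity of addition
        for c in range(p):
            # distributivity over vector addition
            assert smul(c, add(u, v)) == add(smul(c, u), smul(c, v))
for v in vectors:
    assert smul(1, v) == v                   # scalar identity
    assert add(v, smul(p - 1, v)) == (0, 0)  # additive inverse: -v = (p-1)v
```

Because the space is finite, every axiom becomes a finite check; the same loops work for any prime $p$ and any small dimension.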
129.4 Examples
Example 1. Rational Vector Spaces
The space $\mathbb{Q}^n$
is a vector space over $\mathbb{Q}$.
Its vectors have rational coordinates. A vector such as
$$\left(\tfrac{1}{2},\ -3,\ \tfrac{7}{5}\right)$$
belongs to $\mathbb{Q}^3$, but a vector such as
$$\left(\sqrt{2},\ 0,\ 0\right)$$
does not.
Example 2. Real Vector Spaces
The space $\mathbb{R}^n$
is a vector space over $\mathbb{R}$.
This is the usual setting for geometry, calculus, numerical computation, and many applied problems.
Example 3. Complex Vector Spaces
The space $\mathbb{C}^n$
is a vector space over $\mathbb{C}$.
It may also be regarded as a real vector space, but then its dimension doubles.
Example 4. Finite-Field Vector Spaces
The space $\mathbb{F}_p^n$
is a vector space over the finite field $\mathbb{F}_p$ with $p$ elements, where $p$ is prime.
It contains exactly
$$p^n$$
vectors.
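The count $p^n$ can be confirmed by enumerating the vectors; a minimal sketch in plain Python, here for $\mathbb{F}_5^3$:

```python
from itertools import product

p, n = 5, 3
# Every vector in F_5^3 is a 3-tuple of residues mod 5.
vectors = list(product(range(p), repeat=n))
assert len(vectors) == p ** n  # 125 vectors
```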
Example 5. Rational Function Vector Spaces
The space $\mathbb{Q}(t)^n$
is a vector space over the field $\mathbb{Q}(t)$ of rational functions.
This setting appears in systems theory, algebraic geometry, and symbolic computation.
129.5 Linear Combinations
Let $V$ be a vector space over $F$. If
$$v_1, \dots, v_k \in V$$
and
$$a_1, \dots, a_k \in F,$$
then
$$a_1 v_1 + a_2 v_2 + \cdots + a_k v_k$$
is a linear combination of $v_1, \dots, v_k$.
The phrase “linear combination” always depends on the scalar field.
For example, the vector $\sqrt{2}$
is in the real span of $\{1\}$ inside $\mathbb{R}$, since
$$\sqrt{2} = \sqrt{2} \cdot 1.$$
But $\sqrt{2}$ is not in the rational span of $\{1\}$, since no rational number $a$ satisfies
$$a \cdot 1 = \sqrt{2}.$$
Thus the same ambient set may have different spans depending on the field.
129.6 Linear Independence
A list
$$v_1, \dots, v_k$$
in an $F$-vector space $V$ is linearly independent if
$$a_1 v_1 + \cdots + a_k v_k = 0$$
with
$$a_1, \dots, a_k \in F$$
implies
$$a_1 = a_2 = \cdots = a_k = 0.$$
The coefficients must come from the field $F$.
This point matters. The numbers
$$1 \quad \text{and} \quad \sqrt{2}$$
are linearly independent over $\mathbb{Q}$, because
$$a \cdot 1 + b \cdot \sqrt{2} = 0 \quad \text{with } a, b \in \mathbb{Q}$$
forces
$$a = b = 0.$$
But the same two elements are linearly dependent over $\mathbb{R}$, because
$$\sqrt{2} \cdot 1 + (-1) \cdot \sqrt{2} = 0$$
uses real coefficients.
129.7 Basis and Dimension
A basis of an $F$-vector space $V$ is a linearly independent subset that spans $V$.
If $B$ is a basis, then every vector can be written uniquely as a finite linear combination of elements of $B$. This is the usual coordinate representation.
The number of elements in a basis is the dimension of $V$ over $F$, written
$$\dim_F V.$$
The subscript is important. It records the scalar field.
For example,
$$\dim_{\mathbb{C}} \mathbb{C}^n = n,$$
while
$$\dim_{\mathbb{R}} \mathbb{C}^n = 2n.$$
Similarly,
$$\dim_{\mathbb{R}} \mathbb{C} = 2,$$
with basis
$$\{1,\ i\}.$$
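The coordinates of a complex number in the real basis $\{1, i\}$ are exactly its real and imaginary parts, which a short Python check makes concrete:

```python
# A complex number viewed as a vector in the 2-dimensional real
# vector space C, with coordinates taken in the basis {1, i}.
z = complex(3, -4)
a, b = z.real, z.imag          # real coordinates of z
assert a * 1 + b * 1j == z     # z = a*1 + b*i
```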
129.8 Matrices over a Field
A matrix over $F$ is a rectangular array whose entries lie in $F$.
An $m \times n$ matrix $A$ over $F$ represents a linear map
$$T_A : F^n \to F^m.$$
If
$$x \in F^n,$$
then
$$T_A(x) = A x.$$
All matrix operations are defined using addition and multiplication in $F$.
Gaussian elimination works over any field because every nonzero scalar has a multiplicative inverse. Thus row reduction, rank computation, solving linear systems, and finding inverses use the same formal algorithms over all fields.
The practical difference lies in the arithmetic. Over $\mathbb{R}$, division is ordinary real division. Over $\mathbb{F}_p$, division means multiplication by a modular inverse. Over $\mathbb{Q}(t)$, division means division of rational functions.
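As an illustration of modular division, here is a sketch of Gaussian elimination over $\mathbb{F}_p$ in Python. The function name `solve_mod_p` is ours; `pow(a, -1, p)` computes the modular inverse (Python 3.8+). It assumes a square coefficient matrix that is invertible mod $p$:

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over the field F_p by Gaussian elimination.

    Division by a pivot is multiplication by its modular inverse,
    computed with pow(a, -1, p). Assumes A is square and invertible mod p.
    """
    n = len(A)
    # Augmented matrix [A | b], with all entries reduced mod p.
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot in this column and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)            # "divide" by the pivot
        M[col] = [(x * inv) % p for x in M[col]]
        # Clear the column everywhere else.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

# Over F_7: the system x + 2y = 3, 3x + y = 2 has solution (3, 0).
assert solve_mod_p([[1, 2], [3, 1]], [3, 2], 7) == [3, 0]
```

Note that the mod-7 solution $(3, 0)$ differs from the solution of the same system over $\mathbb{Q}$: both are correct, each in its own field.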
129.9 Linear Maps
Let $V$ and $W$ be vector spaces over the same field $F$. A function
$$T : V \to W$$
is $F$-linear if
$$T(u + v) = T(u) + T(v)$$
and
$$T(a v) = a \, T(v)$$
for all
$$u, v \in V \quad \text{and} \quad a \in F.$$
The scalar field must be the same on both sides. A map may be linear over one field but not over another.
For example, complex conjugation
$$\mathbb{C} \to \mathbb{C}, \qquad z \mapsto \bar{z},$$
is linear over $\mathbb{R}$, since
$$\overline{a z} = a \, \bar{z}$$
for real $a$. But it is not linear over $\mathbb{C}$, since
$$\overline{i z} = -i \, \bar{z},$$
while $\mathbb{C}$-linearity would require
$$\overline{i z} = i \, \bar{z}.$$
Thus linearity depends on the scalar field.
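Python's built-in complex type lets us check both claims numerically; a minimal sketch (the helper name `conj` is ours):

```python
def conj(z):
    """Complex conjugation z -> z-bar."""
    return z.conjugate()

a = 2.5               # a real scalar
z = complex(1, 2)

# R-linear: conjugation commutes with real scalars.
assert conj(a * z) == a * conj(z)

# Not C-linear: it does not commute with the complex scalar i.
assert conj(1j * z) != 1j * conj(z)
assert conj(1j * z) == -1j * conj(z)   # instead it picks up -i
```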
129.10 Rank and Nullity
For an $F$-linear map
$$T : V \to W,$$
the kernel is
$$\ker T = \{\, v \in V : T(v) = 0 \,\}$$
and the image is
$$\operatorname{im} T = \{\, T(v) : v \in V \,\}.$$
The rank-nullity theorem holds over every field:
$$\dim V = \dim(\ker T) + \dim(\operatorname{im} T).$$
For an $m \times n$ matrix $A$ over $F$,
this becomes
$$n = \operatorname{rank}(A) + \operatorname{nullity}(A).$$
The theorem depends only on the vector space axioms. It does not depend on real or complex numbers.
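The theorem can be checked with exact arithmetic in $\mathbb{Q}$ using Python's `fractions.Fraction`; the helper `rank_over_Q` below is a sketch of row reduction, not a library routine:

```python
from fractions import Fraction

def rank_over_Q(rows):
    """Rank of a matrix, computed with exact rational arithmetic (the field Q)."""
    M = [[Fraction(x) for x in row] for row in rows]
    ncols = len(M[0])
    rank, col = 0, 0
    while rank < len(M) and col < ncols:
        pivot = next((r for r in range(rank, len(M)) if M[r][col] != 0), None)
        if pivot is None:           # no pivot in this column
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = 1 / M[rank][col]      # exact rational inverse of the pivot
        M[rank] = [x * inv for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] != 0:
                f = M[r][col]
                M[r] = [M[r][j] - f * M[rank][j] for j in range(ncols)]
        rank += 1
        col += 1
    return rank

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row = 2 * first row
r = rank_over_Q(A)
n = 3                                    # number of columns
assert r == 2
assert r + (n - r) == n                  # rank + nullity = n
```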
129.11 Determinants
The determinant of an $n \times n$ matrix $A = (a_{ij})$ over $F$ is defined by the usual formula
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}.$$
This formula uses only addition, multiplication, and additive inverses, so it is valid over any field.
A square matrix is invertible exactly when
$$\det A \neq 0.$$
However, the behavior of signs may change in characteristic $2$. In a field of characteristic $2$,
$$-1 = 1.$$
Thus the distinction between plus and minus disappears, and alternating formulas must be interpreted inside that field.
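One way to see the signs disappear is to compare the signed Leibniz sum with the unsigned sum (the permanent) mod 2. A Python sketch, with helper names of our choosing:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed from its inversion count."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_mod(A, p):
    """Leibniz-formula determinant of A, reduced mod p."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total % p

def perm_mod(A, p):
    """The same sum with every sign replaced by +1 (the permanent), mod p."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total % p

A = [[1, 2], [3, 4]]
# In characteristic 2 the signs sgn(sigma) are invisible: det = permanent mod 2.
assert det_mod(A, 2) == perm_mod(A, 2)
```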
129.12 Polynomials and Eigenvalues
Let $A$ be an $n \times n$ matrix over $F$.
An eigenvalue of $A$ is a scalar $\lambda \in F$ such that
$$A v = \lambda v$$
for some nonzero vector $v \in F^n$.
Equivalently,
$$\det(A - \lambda I) = 0.$$
The characteristic polynomial
$$p_A(t) = \det(t I - A)$$
has coefficients in $F$.
Over an arbitrary field, this polynomial may not split into linear factors. Therefore a matrix over $F$ may have no eigenvalues in $F$.
For example, over $\mathbb{R}$, the rotation matrix
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
has characteristic polynomial
$$p_A(t) = t^2 + 1.$$
It has no real roots, so it has no real eigenvalues.
Over $\mathbb{C}$, the same polynomial factors:
$$t^2 + 1 = (t - i)(t + i),$$
so the matrix has eigenvalues $i$ and $-i$.
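Whether $t^2 + 1$ has roots, and hence whether the rotation matrix has eigenvalues, varies even among finite fields. A brute-force check in Python (the function name is ours):

```python
def eigenvalues_of_rotation_mod_p(p):
    """Roots of the characteristic polynomial t^2 + 1 in F_p,
    i.e. eigenvalues of the rotation matrix over that field."""
    return [t for t in range(p) if (t * t + 1) % p == 0]

assert eigenvalues_of_rotation_mod_p(3) == []      # no eigenvalues in F_3
assert eigenvalues_of_rotation_mod_p(5) == [2, 3]  # eigenvalues exist in F_5
```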
This illustrates a general principle: spectral theory depends strongly on the field.
129.13 Algebraic Closure
A field $F$ is algebraically closed if every nonconstant polynomial in $F[t]$ has a root in $F$.
The complex numbers are algebraically closed. Finite fields and the real numbers are not algebraically closed.
If $F$ is algebraically closed, then every square matrix over $F$ has at least one eigenvalue. More generally, every characteristic polynomial splits into linear factors.
If $F$ is not algebraically closed, eigenvalues may appear only after extending the field.
For example, the polynomial
$$t^2 + 1$$
has no root in $\mathbb{R}$, but it has the roots $i$ and $-i$ in $\mathbb{C}$.
Field extensions therefore allow additional eigenvalues and additional decompositions.
129.14 Minimal Polynomial
The minimal polynomial of a linear operator
$$T : V \to V$$
over $F$ is the monic polynomial $m_T(t) \in F[t]$ of least degree such that
$$m_T(T) = 0.$$
The minimal polynomial divides every polynomial that annihilates $T$, including the characteristic polynomial.
Over an arbitrary field, the factorization of the minimal polynomial determines what canonical forms are available.
If the minimal polynomial splits into linear factors, then Jordan theory may be used after suitable hypotheses. If it does not split, one uses rational canonical form instead.
This is one reason rational canonical form is more field-independent than Jordan canonical form.
129.15 Rational Canonical Form
Rational canonical form works over any field.
It expresses a linear operator as a block diagonal matrix made from companion matrices of polynomials in $F[t]$.
Unlike Jordan canonical form, rational canonical form does not require the characteristic polynomial to split.
For this reason, rational canonical form is the natural canonical form for linear algebra over arbitrary fields.
It records the action of a linear operator $T$ using invariant factors: $T$ is similar to a block diagonal matrix
$$C(f_1) \oplus C(f_2) \oplus \cdots \oplus C(f_k),$$
where each $C(f_i)$ is the companion matrix of $f_i$ and
$$f_1 \mid f_2 \mid \cdots \mid f_k \quad \text{in } F[t].$$
These polynomials determine the similarity class of the operator over $F$.
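A companion matrix is easy to build. The sketch below uses one common coefficient convention (constant term first); other texts transpose the matrix or reverse the coefficient order:

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    t^n + c_{n-1} t^{n-1} + ... + c_1 t + c_0,
    given coeffs = [c_0, c_1, ..., c_{n-1}]."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1            # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -coeffs[i]   # last column holds the negated coefficients
    return C

# t^2 + 1 gives back the rotation matrix from the eigenvalue example.
assert companion([1, 0]) == [[0, -1], [1, 0]]
```

The entries are just field elements, so the same construction works verbatim over $\mathbb{F}_p$ or $\mathbb{Q}(t)$ once the arithmetic is interpreted in that field.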
129.16 Bilinear Forms
A bilinear form on an $F$-vector space $V$ is a function
$$B : V \times V \to F$$
such that $B$ is linear in each variable.
That is,
$$B(a u + v,\ w) = a B(u, w) + B(v, w)$$
and
$$B(u,\ a v + w) = a B(u, v) + B(u, w).$$
Bilinear forms generalize dot products, but they do not necessarily define lengths or angles.
Over arbitrary fields, the idea of positivity may be unavailable. For example, a finite field has no natural order compatible with field arithmetic. Therefore inner product geometry over $\mathbb{R}$ does not transfer directly to all fields.
Instead, one studies symmetric, alternating, Hermitian, and quadratic forms according to the algebraic structure of the field.
129.17 Characteristic
The characteristic of a field $F$ is the least positive integer $n$ such that
$$n \cdot 1 = \underbrace{1 + 1 + \cdots + 1}_{n \text{ terms}} = 0 \quad \text{in } F.$$
If no such positive integer exists, the field has characteristic $0$.
The characteristic affects linear algebra.
In characteristic $2$,
$$1 + 1 = 0.$$
Therefore
$$-v = v \quad \text{for every vector } v,$$
and subtraction coincides with addition.
Symmetric and alternating forms also behave differently. A bilinear form satisfying
$$B(v, v) = 0$$
for all $v$ is alternating. Over fields of characteristic not equal to $2$, alternating forms are skew-symmetric, meaning $B(u, v) = -B(v, u)$. In characteristic $2$, the relation between alternating and skew-symmetric forms changes because minus signs disappear.
Thus statements involving signs often require separate treatment in characteristic .
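A tiny numeric check of the sign collapse, using residues mod 2 to model $\mathbb{F}_2$:

```python
p = 2
# In F_2, -1 and 1 are the same element.
assert (-1) % p == 1 % p

# Hence the skew-symmetry condition B(u, v) = -B(v, u) coincides with
# the symmetry condition B(u, v) = B(v, u): this Gram matrix is both.
A = [[0, 1], [1, 0]]
for i in range(2):
    for j in range(2):
        assert A[i][j] % p == (-A[j][i]) % p  # skew-symmetric mod 2
        assert A[i][j] % p == A[j][i] % p     # symmetric mod 2
```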
129.18 Ordered Fields
Some fields have an order compatible with addition and multiplication.
The real numbers are ordered. The rational numbers are ordered. Finite fields cannot be ordered in a way compatible with field operations.
Ordered fields allow inequalities, positivity, and some forms of geometry.
For example, over an ordered field one can discuss whether
$$a^2 \geq 0$$
for every scalar $a$. This supports part of the theory of positive definite quadratic forms.
However, notions depending on completeness, limits, orthogonal projection, or analytic convergence generally require more than an ordered field. They require additional topological or analytic structure.
129.19 Field Extensions and Restriction of Scalars
Let
$$F \subseteq K$$
be a field extension.
A vector space over $K$ can be regarded as a vector space over $F$ by restricting scalars. This usually increases dimension.
If
$$\dim_K V = n$$
and
$$[K : F] = d,$$
then
$$\dim_F V = n d.$$
For example,
$$\dim_{\mathbb{R}} \mathbb{C}^n = 2n.$$
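Restriction of scalars from $\mathbb{C}$ to $\mathbb{R}$ can be made concrete by splitting each complex coordinate into its real and imaginary parts (the helper name below is ours):

```python
def restrict_scalars(v):
    """View a vector in C^n as a vector in R^(2n),
    using the real basis {1, i} of C in each coordinate."""
    out = []
    for z in v:
        out.extend([z.real, z.imag])
    return out

v = [complex(1, 2), complex(0, -1)]           # a vector in C^2
assert restrict_scalars(v) == [1.0, 2.0, 0.0, -1.0]  # a vector in R^4
```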
Conversely, one may extend scalars from $F$ to $K$. This is often written
$$V \otimes_F K.$$
Extending scalars allows matrices over $F$ to be studied over the larger field $K$. Eigenvalues that were absent over $F$ may appear over $K$.
129.20 What Remains True over Every Field
Many theorems of linear algebra are field-independent.
| Result | Valid over every field? |
|---|---|
| Gaussian elimination | Yes |
| Basis extension theorem | Yes |
| Dimension theorem | Yes |
| Rank-nullity theorem | Yes |
| Invertibility criteria | Yes |
| Determinant criterion | Yes |
| Cayley-Hamilton theorem | Yes |
| Rational canonical form | Yes |
| Jordan form | Only when polynomial splitting conditions hold |
| Spectral theorem | Requires additional structure |
| Orthogonal projection theorem | Requires inner product and suitable geometry |
The field-independent part of linear algebra is algebraic. It uses only the field axioms and vector space axioms.
The field-dependent part involves factorization, order, conjugation, topology, or positivity.
129.21 Summary
Linear algebra over arbitrary fields keeps the same formal language as ordinary linear algebra but changes the scalar arithmetic.
The main lesson is that the field matters.
| Concept | Dependence on field |
|---|---|
| Span | Coefficients must lie in the chosen field |
| Linear independence | Depends on allowed scalars |
| Dimension | Written as $\dim_F V$ |
| Matrices | Entries and arithmetic lie in $F$ |
| Eigenvalues | Must lie in $F$ unless the field is extended |
| Canonical forms | Depend on polynomial factorization over $F$ |
| Inner products | Need extra structure beyond field axioms |
| Geometry | Requires order, topology, or conjugation |
Working over an arbitrary field separates the algebraic core of linear algebra from the special features of real and complex spaces. It shows which results are purely linear and which depend on the scalar field.