# Chapter 129. Linear Algebra over Arbitrary Fields

## 129.1 Introduction

Linear algebra can be developed over any field.

The real numbers and complex numbers are the most common scalar systems, but they are not the only ones. A vector space may be defined over the rational numbers, a finite field, a field of rational functions, a number field, or any other field.

The abstract definition of a vector space only requires a field of scalars. Once the field is fixed, the usual notions of vector addition, scalar multiplication, span, basis, dimension, linear maps, matrices, rank, nullity, and determinants remain valid. A basis is a linearly independent spanning set over the chosen field, and every vector has a unique expression as a finite linear combination of basis vectors.

The formal structure of linear algebra does not change. What changes is the arithmetic of the scalars, and with it the behavior of polynomials, eigenvalues, inner products, and geometry.

## 129.2 Fields as Scalar Systems

A field \(F\) is a set with addition, subtraction, multiplication, and division by nonzero elements.

Examples include

$$
\mathbb{Q},\qquad
\mathbb{R},\qquad
\mathbb{C},\qquad
\mathbb{F}_p,\qquad
\mathbb{F}_{p^n},\qquad
\mathbb{Q}(t).
$$

Here \(\mathbb{Q}(t)\) denotes the field of rational functions in one variable with rational coefficients.

Once a field \(F\) has been chosen, the elements of \(F\) are called scalars. A vector space over \(F\) is also called an \(F\)-vector space.

The notation

$$
V \text{ is a vector space over } F
$$

means that vectors in \(V\) may be added to each other and multiplied by scalars from \(F\).

The same set can have different dimensions over different fields. For example,

$$
\mathbb{C}
$$

has dimension \(1\) over \(\mathbb{C}\), but dimension \(2\) over \(\mathbb{R}\). It has infinite dimension over \(\mathbb{Q}\).

Thus the field is part of the data of a vector space.

## 129.3 Vector Spaces over a Field

Let \(F\) be a field. An \(F\)-vector space is a set \(V\) with two operations:

$$
V \times V \to V,
\qquad
(u,v) \mapsto u+v,
$$

and

$$
F \times V \to V,
\qquad
(a,v) \mapsto av.
$$

These operations satisfy the vector space axioms:

| Axiom | Formula |
|---|---|
| Associativity of addition | \(u+(v+w)=(u+v)+w\) |
| Commutativity of addition | \(u+v=v+u\) |
| Additive identity | \(v+0=v\) |
| Additive inverse | \(v+(-v)=0\) |
| Compatibility of scalar multiplication | \(a(bv)=(ab)v\) |
| Scalar identity | \(1v=v\) |
| Distributivity over vector addition | \(a(u+v)=au+av\) |
| Distributivity over scalar addition | \((a+b)v=av+bv\) |

No order, distance, angle, length, or topology is required.

Those structures may be added later, but they are not part of the definition of a vector space.

## 129.4 Examples

### Example 1. Rational Vector Spaces

The space

$$
\mathbb{Q}^n
$$

is a vector space over \(\mathbb{Q}\).

Its vectors have rational coordinates. The vector

$$
\begin{bmatrix}
1/2\\
3/5
\end{bmatrix}
$$

belongs to \(\mathbb{Q}^2\), but

$$
\begin{bmatrix}
\sqrt{2}\\
1
\end{bmatrix}
$$

does not.
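The arithmetic of \(\mathbb{Q}^n\) can be carried out exactly on a computer. The short pure-Python sketch below (illustrative only) uses the standard-library `fractions.Fraction` type, so vector addition and scalar multiplication never leave \(\mathbb{Q}\):

```python
from fractions import Fraction

# A vector in Q^2: each coordinate is an exact rational number.
v = [Fraction(1, 2), Fraction(3, 5)]
w = [Fraction(1, 3), Fraction(2, 5)]

# Vector addition and scalar multiplication stay inside Q.
s = [a + b for a, b in zip(v, w)]      # componentwise sum: [5/6, 1]
t = [Fraction(2, 7) * a for a in v]    # scalar multiple:   [1/7, 6/35]

print(s)
print(t)
```

Because `Fraction` arithmetic is exact, no rounding occurs: the field axioms are realized literally, not approximately as with floating point.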

### Example 2. Real Vector Spaces

The space

$$
\mathbb{R}^n
$$

is a vector space over \(\mathbb{R}\).

This is the usual setting for geometry, calculus, numerical computation, and many applied problems.

### Example 3. Complex Vector Spaces

The space

$$
\mathbb{C}^n
$$

is a vector space over \(\mathbb{C}\).

It may also be regarded as a real vector space, but then its dimension doubles.

### Example 4. Finite-Field Vector Spaces

The space

$$
\mathbb{F}_q^n
$$

is a vector space over the finite field \(\mathbb{F}_q\).

It contains exactly

$$
q^n
$$

vectors.
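The count \(q^n\) can be checked by direct enumeration. A minimal sketch for \(\mathbb{F}_2^3\), representing each coordinate by an integer in \(\{0,1\}\):

```python
from itertools import product

# Enumerate every vector of F_2^3: coordinates range over {0, 1}.
q, n = 2, 3
vectors = list(product(range(q), repeat=n))

print(len(vectors))   # 8, matching q**n
```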

### Example 5. Rational Function Vector Spaces

The space

$$
F(t)^n
$$

is a vector space over the field \(F(t)\) of rational functions.

This setting appears in systems theory, algebraic geometry, and symbolic computation.

## 129.5 Linear Combinations

Let \(V\) be a vector space over \(F\). If

$$
v_1,\ldots,v_k \in V
$$

and

$$
a_1,\ldots,a_k \in F,
$$

then

$$
a_1v_1+\cdots+a_kv_k
$$

is a linear combination of \(v_1,\ldots,v_k\).

The phrase “linear combination” always depends on the scalar field.

For example, the vector

$$
\sqrt{2}
$$

is in the real span of \(1\) inside \(\mathbb{R}\), since

$$
\sqrt{2} = \sqrt{2}\cdot 1.
$$

But \(\sqrt{2}\) is not in the rational span of \(1\), since no rational number \(q\) satisfies

$$
\sqrt{2}=q\cdot 1.
$$

Thus the same ambient set may have different spans depending on the field.

## 129.6 Linear Independence

A list

$$
v_1,\ldots,v_k
$$

in an \(F\)-vector space \(V\) is linearly independent if

$$
a_1v_1+\cdots+a_kv_k=0
$$

with

$$
a_1,\ldots,a_k \in F
$$

implies

$$
a_1=\cdots=a_k=0.
$$

The coefficients must come from the field \(F\).

This point matters. The numbers

$$
1,\sqrt{2}
$$

are linearly independent over \(\mathbb{Q}\), because

$$
a+b\sqrt{2}=0,
\qquad a,b\in\mathbb{Q},
$$

forces

$$
a=b=0.
$$

But the same two elements are linearly dependent over \(\mathbb{R}\), because

$$
\sqrt{2}\cdot 1 - 1\cdot \sqrt{2}=0
$$

uses real coefficients.

## 129.7 Basis and Dimension

A basis of an \(F\)-vector space \(V\) is a linearly independent subset that spans \(V\).

If \(B\) is a basis, then every vector \(v\in V\) can be written uniquely as a finite linear combination of elements of \(B\). This is the usual coordinate representation.

Any two bases of \(V\) have the same cardinality, so the number of elements in a basis is well defined; it is the dimension of \(V\) over \(F\), written

$$
\dim_F V.
$$

The subscript is important. It records the scalar field.

For example,

$$
\dim_{\mathbb{C}} \mathbb{C}^n = n,
$$

while

$$
\dim_{\mathbb{R}} \mathbb{C}^n = 2n.
$$

Similarly,

$$
\dim_{\mathbb{Q}} \mathbb{Q}(\sqrt{2}) = 2,
$$

with basis

$$
1,\sqrt{2}.
$$
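The two-dimensionality of \(\mathbb{Q}(\sqrt{2})\) over \(\mathbb{Q}\) can be made computational: every element is \(a+b\sqrt{2}\) with \(a,b\in\mathbb{Q}\), so elements are pairs of rationals and multiplication follows \((a+b\sqrt{2})(c+d\sqrt{2})=(ac+2bd)+(ad+bc)\sqrt{2}\). A minimal sketch (the helper `mul` is just for illustration):

```python
from fractions import Fraction as Q

# Elements of Q(sqrt(2)) as pairs (a, b) meaning a + b*sqrt(2), with a, b in Q.
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    return (a * c + 2 * b * d, a * d + b * c)

# (sqrt 2)^2 = 2: the pair (0, 1) squared should be (2, 0).
sq = mul((Q(0), Q(1)), (Q(0), Q(1)))
print(sq)
```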

## 129.8 Matrices over a Field

A matrix over \(F\) is an array whose entries lie in \(F\).

An \(m\times n\) matrix over \(F\) represents a linear map

$$
T:F^n\to F^m.
$$

If

$$
A=(a_{ij}),
$$

then

$$
T(x)=Ax.
$$

All matrix operations are defined using addition and multiplication in \(F\).

Gaussian elimination works over any field because every nonzero scalar has a multiplicative inverse. Thus row reduction, rank computation, solving linear systems, and finding inverses use the same formal algorithms over all fields.

The practical difference lies in the arithmetic. Over \(\mathbb{R}\), division is ordinary real division. Over \(\mathbb{F}_p\), division means multiplication by a modular inverse. Over \(F(t)\), division means division of rational functions.
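The point about division can be made concrete. Below is a minimal pure-Python sketch of row reduction over \(\mathbb{F}_p\); the only field-specific step is the pivot inversion, done with `pow(a, -1, p)` (Python 3.8+ computes modular inverses this way). The function name `rref_mod_p` is an illustration, not a library API:

```python
def rref_mod_p(A, p):
    """Row-reduce a matrix over F_p; returns (rref, rank)."""
    A = [[x % p for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    rank = 0
    for c in range(cols):
        # Find a pivot in column c at or below row `rank`.
        pivot = next((r for r in range(rank, rows) if A[r][c] != 0), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][c], -1, p)          # division = multiply by modular inverse
        A[rank] = [(x * inv) % p for x in A[rank]]
        for r in range(rows):                  # clear the rest of the column
            if r != rank and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[rank])]
        rank += 1
    return A, rank

# Over F_5, [[1,2],[3,4]] has det = -2 = 3, which is nonzero, so full rank.
R, r = rref_mod_p([[1, 2], [3, 4]], 5)
print(R, r)   # identity matrix, rank 2
```

The same code works for any prime \(p\); only the meaning of "divide" changes, exactly as the text describes.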

## 129.9 Linear Maps

Let \(V\) and \(W\) be vector spaces over the same field \(F\). A function

$$
T:V\to W
$$

is \(F\)-linear if

$$
T(u+v)=T(u)+T(v)
$$

and

$$
T(av)=aT(v)
$$

for all

$$
u,v\in V,\qquad a\in F.
$$

The scalar field must be the same on both sides. A map may be linear over one field but not over another.

For example, complex conjugation

$$
T:\mathbb{C}\to\mathbb{C},
\qquad
T(z)=\overline{z},
$$

is linear over \(\mathbb{R}\), since

$$
T(az)=aT(z)
$$

for real \(a\). But it is not linear over \(\mathbb{C}\), since

$$
T(iz)=\overline{iz}=-i\overline{z},
$$

while

$$
iT(z)=i\overline{z}.
$$

Thus linearity depends on the scalar field.

## 129.10 Rank and Nullity

For an \(F\)-linear map

$$
T:V\to W,
$$

the kernel is

$$
\ker T=\{v\in V:T(v)=0\},
$$

and the image is

$$
\operatorname{im}T=\{T(v):v\in V\}.
$$

The rank-nullity theorem holds over every field:

$$
\dim_F V =
\dim_F \ker T
+
\dim_F \operatorname{im}T.
$$

For a matrix

$$
A\in M_{m,n}(F),
$$

this becomes

$$
n =
\operatorname{nullity}(A)
+
\operatorname{rank}(A).
$$

The theorem depends only on the vector space axioms. It does not depend on real or complex numbers.
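Over a finite field the theorem can even be verified by brute force, because the kernel is a finite set of size \(q^{\operatorname{nullity}(A)}\). A small illustrative sketch over \(\mathbb{F}_2\) (the helper name `kernel_size_mod2` is hypothetical):

```python
from itertools import product

def kernel_size_mod2(A, n):
    """Count vectors v in F_2^n with A v = 0 (all arithmetic mod 2)."""
    count = 0
    for v in product(range(2), repeat=n):
        if all(sum(a * x for a, x in zip(row, v)) % 2 == 0 for row in A):
            count += 1
    return count

# A = [[1,1,0],[0,1,1]] over F_2 has rank 2, so nullity should be 3 - 2 = 1.
A = [[1, 1, 0], [0, 1, 1]]
k = kernel_size_mod2(A, 3)         # kernel is {(0,0,0), (1,1,1)}, size 2
nullity = k.bit_length() - 1       # kernel has 2**nullity elements
print(nullity)                     # 1, and indeed rank + nullity = 3
```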

## 129.11 Determinants

The determinant of an \(n\times n\) matrix over \(F\) is defined by the usual formula

$$
\det(A) =
\sum_{\sigma\in S_n}
\operatorname{sgn}(\sigma)
a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}.
$$

This formula uses only addition, multiplication, and additive inverses, so it is valid over any field.

A square matrix \(A\in M_n(F)\) is invertible exactly when

$$
\det(A)\neq 0.
$$

However, the behavior of signs may change in characteristic \(2\). In a field of characteristic \(2\),

$$
-1=1.
$$

Thus the distinction between plus and minus disappears, and alternating formulas must be interpreted inside that field.
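Since the Leibniz formula uses only ring operations, it can be evaluated verbatim over \(\mathbb{F}_p\). The following pure-Python sketch (illustrative, not optimized; the sign is computed by counting inversions) shows that the same integer matrix can be invertible over one field and singular over another:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of indices (inversion count)."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_mod_p(A, p):
    """Leibniz determinant sum over S_n, reduced mod p."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total % p

# [[1,2],[3,4]] has det = -2: this is 3 in F_5 (invertible there) ...
d5 = det_mod_p([[1, 2], [3, 4]], 5)
# ... but 0 in F_2, so the same matrix is singular over F_2.
d2 = det_mod_p([[1, 2], [3, 4]], 2)
print(d5, d2)
```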

## 129.12 Polynomials and Eigenvalues

Let

$$
A\in M_n(F).
$$

An eigenvalue of \(A\) is a scalar \(\lambda\in F\) such that

$$
Av=\lambda v
$$

for some nonzero vector \(v\in F^n\).

Equivalently,

$$
\det(A-\lambda I)=0.
$$

The characteristic polynomial

$$
p_A(x)=\det(xI-A)
$$

has coefficients in \(F\).

Over an arbitrary field, this polynomial may not split into linear factors. Therefore a matrix over \(F\) may have no eigenvalues in \(F\).

For example, over \(\mathbb{R}\), the rotation matrix

$$
\begin{bmatrix}
0 & -1\\
1 & 0
\end{bmatrix}
$$

has characteristic polynomial

$$
x^2+1.
$$

It has no real roots, so it has no real eigenvalues.

Over \(\mathbb{C}\), the same polynomial factors:

$$
x^2+1=(x-i)(x+i),
$$

so the matrix has eigenvalues \(i\) and \(-i\).

This illustrates a general principle: spectral theory depends strongly on the field.
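The principle holds for finite fields as well. Over \(\mathbb{F}_5\) the same polynomial \(x^2+1\) splits as \((x-2)(x-3)\), since \(2^2=4=-1\) in \(\mathbb{F}_5\), so the rotation matrix acquires eigenvalues there that it lacks over \(\mathbb{R}\). A brute-force check in pure Python (illustrative only):

```python
# The rotation matrix, now read over F_5.
p = 5
A = [[0, -1], [1, 0]]

# Find all eigenvalues in F_5 by testing det(A - lam*I) = 0 mod p.
eigs = []
for lam in range(p):
    d = ((A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]) % p
    if d == 0:
        eigs.append(lam)
print(eigs)   # [2, 3]

# Verify A v = 2 v mod 5 for the eigenvector v = (2, 1).
v = (2, 1)
Av = tuple(sum(a * x for a, x in zip(row, v)) % p for row in A)
print(Av, tuple((2 * x) % p for x in v))   # both equal (4, 2)
```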

## 129.13 Algebraic Closure

A field \(F\) is algebraically closed if every nonconstant polynomial in \(F[x]\) has a root in \(F\).

The complex numbers are algebraically closed. Finite fields and the real numbers are not algebraically closed.

If \(F\) is algebraically closed, then every square matrix over \(F\) has at least one eigenvalue. More generally, every characteristic polynomial splits into linear factors.

If \(F\) is not algebraically closed, eigenvalues may appear only after extending the field.

For example, the polynomial

$$
x^2-2
$$

has no root in \(\mathbb{Q}\), but it has roots in

$$
\mathbb{Q}(\sqrt{2}).
$$

Field extensions therefore allow additional eigenvalues and additional decompositions.

## 129.14 Minimal Polynomial

The minimal polynomial of a linear operator

$$
T:V\to V
$$

over \(F\) is the monic polynomial \(m_T(x)\in F[x]\) of least degree such that

$$
m_T(T)=0.
$$

The minimal polynomial divides every polynomial that annihilates \(T\), including the characteristic polynomial.

Over an arbitrary field, the factorization of the minimal polynomial determines what canonical forms are available.

If the minimal polynomial splits into linear factors over \(F\), the operator admits a Jordan canonical form over \(F\). If it does not split, one uses the rational canonical form instead.

This is one reason rational canonical form is more field-independent than Jordan canonical form.

## 129.15 Rational Canonical Form

Rational canonical form works over any field.

It expresses a linear operator as a block diagonal matrix made from companion matrices of polynomials in \(F[x]\).

Unlike Jordan canonical form, rational canonical form does not require the characteristic polynomial to split.

For this reason, rational canonical form is the natural canonical form for linear algebra over arbitrary fields.

It records the action of a linear operator using invariant factors:

$$
f_1(x),f_2(x),\ldots,f_r(x),
$$

where

$$
f_1 \mid f_2 \mid \cdots \mid f_r.
$$

These polynomials determine the similarity class of the operator over \(F\).
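The building blocks are easy to construct explicitly. For a monic polynomial \(x^n + c_{n-1}x^{n-1} + \cdots + c_0\), the companion matrix has ones on the subdiagonal and the negated coefficients in the last column; all entries lie in \(F\). A minimal sketch (the helper name `companion` is just for illustration):

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c_{n-1} x^{n-1} + ... + c_0,
    given the list [c_0, c_1, ..., c_{n-1}] of lower coefficients."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1              # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -coeffs[i]     # last column: negated coefficients
    return C

# The companion matrix of x^2 + 1 (coefficient list [1, 0])
# is exactly the rotation matrix from the eigenvalue example.
print(companion([1, 0]))   # [[0, -1], [1, 0]]
```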

## 129.16 Bilinear Forms

A bilinear form on an \(F\)-vector space \(V\) is a function

$$
B:V\times V\to F
$$

such that \(B\) is linear in each variable.

That is,

$$
B(au+bv,w)=aB(u,w)+bB(v,w),
$$

and

$$
B(w,au+bv)=aB(w,u)+bB(w,v).
$$

Bilinear forms generalize dot products, but they do not necessarily define lengths or angles.

Over arbitrary fields, the idea of positivity may be unavailable. For example, a finite field has no natural order compatible with field arithmetic. Therefore inner product geometry over \(\mathbb{R}\) does not transfer directly to all fields.

Instead, one studies symmetric, alternating, Hermitian, and quadratic forms according to the algebraic structure of the field.

## 129.17 Characteristic

The characteristic of a field \(F\) is the least positive integer \(p\) such that

$$
p\cdot 1=0.
$$

If no such positive integer exists, the field has characteristic \(0\). When the characteristic is positive, it is necessarily a prime number.

The characteristic affects linear algebra.

In characteristic \(2\),

$$
-1=1.
$$

Therefore

$$
v-w=v+w.
$$

Symmetric and alternating forms also behave differently. A bilinear form satisfying

$$
B(v,v)=0
$$

for all \(v\) is alternating. Every alternating form is skew-symmetric, and over fields of characteristic not equal to \(2\) the two notions coincide. In characteristic \(2\), skew-symmetry is the same as symmetry, so a skew-symmetric form need not be alternating.

Thus statements involving signs often require separate treatment in characteristic \(2\).
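The identity \(v-w=v+w\) in characteristic \(2\) can be checked componentwise; in \(\mathbb{F}_2\) both operations reduce to XOR. A small illustrative sketch:

```python
# In F_2, -1 = 1, so subtraction and addition coincide.
p = 2
assert (-1) % p == 1 % p

v = [1, 0, 1]
w = [1, 1, 0]
diff = [(a - b) % p for a, b in zip(v, w)]
summ = [(a + b) % p for a, b in zip(v, w)]
print(diff == summ)   # True: v - w = v + w in characteristic 2
```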

## 129.18 Ordered Fields

Some fields have an order compatible with addition and multiplication.

The real numbers are ordered. The rational numbers are ordered. Finite fields cannot be ordered in a way compatible with field operations.

Ordered fields allow inequalities, positivity, and some forms of geometry.

For example, over an ordered field one can discuss whether

$$
x^2 \geq 0
$$

for every scalar \(x\). This supports part of the theory of positive definite quadratic forms.

However, notions depending on completeness, limits, orthogonal projection, or analytic convergence generally require more than an ordered field. They require additional topological or analytic structure.

## 129.19 Field Extensions and Restriction of Scalars

Let

$$
F\subseteq K
$$

be a field extension.

A vector space over \(K\) can be regarded as a vector space over \(F\) by restricting scalars. This multiplies the dimension by the degree of the extension.

If

$$
[K:F]=d
$$

and

$$
\dim_K V=n,
$$

then

$$
\dim_F V=dn.
$$

For example,

$$
\dim_{\mathbb{R}}\mathbb{C}^n=2n.
$$

Conversely, one may extend scalars from \(F\) to \(K\). This is often written

$$
V_K = V\otimes_F K.
$$

Extending scalars allows matrices over \(F\) to be studied over a larger field \(K\). Eigenvalues that were absent over \(F\) may appear over \(K\).

## 129.20 What Remains True over Every Field

Many theorems of linear algebra are field-independent.

| Result | Valid over every field? |
|---|---|
| Gaussian elimination | Yes |
| Basis extension theorem | Yes |
| Dimension theorem | Yes |
| Rank-nullity theorem | Yes |
| Invertibility criteria | Yes |
| Determinant criterion | Yes |
| Cayley-Hamilton theorem | Yes |
| Rational canonical form | Yes |
| Jordan form | Only when polynomial splitting conditions hold |
| Spectral theorem | Requires additional structure |
| Orthogonal projection theorem | Requires inner product and suitable geometry |

The field-independent part of linear algebra is algebraic. It uses only the field axioms and vector space axioms.

The field-dependent part involves factorization, order, conjugation, topology, or positivity.

## 129.21 Summary

Linear algebra over arbitrary fields keeps the same formal language as ordinary linear algebra but changes the scalar arithmetic.

The main lesson is that the field matters.

| Concept | Dependence on field |
|---|---|
| Span | Coefficients must lie in the chosen field |
| Linear independence | Depends on allowed scalars |
| Dimension | Written as \(\dim_F V\) |
| Matrices | Entries and arithmetic lie in \(F\) |
| Eigenvalues | Must lie in \(F\) unless the field is extended |
| Canonical forms | Depend on polynomial factorization over \(F\) |
| Inner products | Need extra structure beyond field axioms |
| Geometry | Requires order, topology, or conjugation |

Working over an arbitrary field separates the algebraic core of linear algebra from the special features of real and complex spaces. It shows which results are purely linear and which depend on the scalar field.
