# Chapter 1. What Linear Algebra Is

Linear algebra is the study of vectors, vector spaces, and linear transformations.

A vector is an object that can be added to another vector and multiplied by a scalar. In elementary geometry, a vector may be drawn as an arrow. In computation, a vector is often written as a list of numbers. In abstract mathematics, a vector may be a polynomial, a function, a matrix, or another object that obeys the same algebraic rules.

The central idea is linearity. A rule is linear when it preserves addition and scalar multiplication. If a transformation is called \(T\), then linearity means

$$
T(u + v) = T(u) + T(v)
$$

and

$$
T(cu) = cT(u).
$$

These two identities say that the transformation respects the structure of the space. It does not bend, tear, multiply coordinates together, or introduce nonlinear behavior. It sends linear combinations to linear combinations.

## 1.1 Vectors

A vector in \(\mathbb{R}^n\) is an ordered list of \(n\) real numbers:

$$
v =
\begin{bmatrix}
v_1 \\
v_2 \\
\vdots \\
v_n
\end{bmatrix}.
$$

The numbers \(v_1, v_2, \ldots, v_n\) are called the components of \(v\). The set \(\mathbb{R}^n\) contains all such vectors with \(n\) real components.

For example,

$$
\begin{bmatrix}
2 \\
-1 \\
5
\end{bmatrix}
$$

is a vector in \(\mathbb{R}^3\).

Vectors may represent many kinds of data. In geometry, they represent displacement. In physics, they represent velocity, force, or momentum. In machine learning, they represent features. In numerical computation, they represent unknowns, measurements, or coefficients.

The same algebra applies in each case.
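As a concrete computational sketch (using NumPy, which the text itself does not assume), a vector in \(\mathbb{R}^3\) can be stored as a one-dimensional array:

```python
import numpy as np

# A vector in R^3, stored as a one-dimensional NumPy array.
v = np.array([2.0, -1.0, 5.0])
```

The same array type represents a displacement, a force, or a feature vector; only the interpretation changes.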

## 1.2 Vector Addition

Two vectors of the same size can be added component by component:

$$
\begin{bmatrix}
a_1 \\
a_2 \\
\vdots \\
a_n
\end{bmatrix}
+
\begin{bmatrix}
b_1 \\
b_2 \\
\vdots \\
b_n
\end{bmatrix} =
\begin{bmatrix}
a_1 + b_1 \\
a_2 + b_2 \\
\vdots \\
a_n + b_n
\end{bmatrix}.
$$

For example,

$$
\begin{bmatrix}
1 \\
3
\end{bmatrix}
+
\begin{bmatrix}
4 \\
-2
\end{bmatrix} =
\begin{bmatrix}
5 \\
1
\end{bmatrix}.
$$

Vector addition combines displacements, quantities, or data records of the same type. The operation is simple, yet it is one of the two basic operations on which the rest of linear algebra is built.
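The worked example above can be reproduced in a few lines (a NumPy sketch; the text itself assumes no software):

```python
import numpy as np

a = np.array([1.0, 3.0])
b = np.array([4.0, -2.0])

# Component-by-component addition, matching the worked example above.
s = a + b
```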

## 1.3 Scalar Multiplication

A scalar is a single number. In real linear algebra, scalars are real numbers. In complex linear algebra, scalars are complex numbers.

A scalar multiplies a vector by multiplying each component:

$$
c
\begin{bmatrix}
v_1 \\
v_2 \\
\vdots \\
v_n
\end{bmatrix} =
\begin{bmatrix}
cv_1 \\
cv_2 \\
\vdots \\
cv_n
\end{bmatrix}.
$$

For example,

$$
3
\begin{bmatrix}
2 \\
-4
\end{bmatrix} =
\begin{bmatrix}
6 \\
-12
\end{bmatrix}.
$$

Scalar multiplication scales the length of a vector by \(|c|\). If the scalar is negative, it also reverses the vector's direction in the geometric interpretation.
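The same componentwise rule is what array libraries implement (a NumPy sketch, not part of the text's own notation):

```python
import numpy as np

v = np.array([2.0, -4.0])

# Each component is multiplied by the scalar, matching the example above.
w = 3 * v
```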

## 1.4 Linear Combinations

A linear combination is an expression built from vector addition and scalar multiplication.

If \(v_1, v_2, \ldots, v_k\) are vectors and \(c_1, c_2, \ldots, c_k\) are scalars, then

$$
c_1v_1 + c_2v_2 + \cdots + c_kv_k
$$

is a linear combination of the vectors.

Linear combinations are among the most important constructions in the subject. They describe all vectors that can be built from a given collection of vectors, and they lead to span, basis, dimension, rank, and many other central ideas.

For example, if

$$
v_1 =
\begin{bmatrix}
1 \\
0
\end{bmatrix},
\qquad
v_2 =
\begin{bmatrix}
0 \\
1
\end{bmatrix},
$$

then every vector in \(\mathbb{R}^2\) can be written as

$$
a v_1 + b v_2 =
\begin{bmatrix}
a \\
b
\end{bmatrix}.
$$

Thus \(v_1\) and \(v_2\) generate the whole plane.
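Numerically, any choice of coefficients \(a\) and \(b\) produces the corresponding point of the plane (a NumPy sketch with arbitrary illustrative values):

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Any vector in R^2 is the combination a*v1 + b*v2 of these two vectors.
a, b = 7.0, -3.0
target = a * v1 + b * v2
```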

## 1.5 Matrices

A matrix is a rectangular array of numbers:

$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}.
$$

A matrix with \(m\) rows and \(n\) columns is called an \(m \times n\) matrix.

Matrices are used to store coefficients, represent systems of equations, and describe linear transformations. A matrix can act on a vector by matrix-vector multiplication. If \(A\) is an \(m \times n\) matrix and \(x\) is a vector in \(\mathbb{R}^n\), then \(Ax\) is a vector in \(\mathbb{R}^m\).

For example,

$$
\begin{bmatrix}
2 & 1 \\
3 & -1
\end{bmatrix}
\begin{bmatrix}
4 \\
5
\end{bmatrix} =
\begin{bmatrix}
2 \cdot 4 + 1 \cdot 5 \\
3 \cdot 4 + (-1) \cdot 5
\end{bmatrix} =
\begin{bmatrix}
13 \\
7
\end{bmatrix}.
$$

This operation combines the rows of the matrix with the entries of the vector: each entry of the result is the dot product of one row of \(A\) with \(x\).
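The worked product above can be checked directly (a NumPy sketch; `@` is the matrix-multiplication operator):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, -1.0]])
x = np.array([4.0, 5.0])

# Matrix-vector product: each entry of y is a row of A dotted with x.
y = A @ x
```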

## 1.6 Linear Equations

Linear algebra begins historically with systems of linear equations. A linear equation in variables \(x_1, x_2, \ldots, x_n\) has the form

$$
a_1x_1 + a_2x_2 + \cdots + a_nx_n = b.
$$

The variables appear only to the first power. There are no products such as \(x_1x_2\), no powers such as \(x_1^2\), and no nonlinear functions such as \(\sin x_1\).

A system of linear equations is a collection of such equations:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1, \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2, \\
&\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m.
\end{aligned}
$$

This system can be written compactly as

$$
Ax = b.
$$

Here \(A\) is the coefficient matrix, \(x\) is the unknown vector, and \(b\) is the right-hand side vector.

Much of linear algebra studies when this equation has a solution, whether that solution is unique, and how to compute it.
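Whether a candidate vector solves \(Ax = b\) is itself a simple computation: multiply and compare. A NumPy sketch with an illustrative \(2 \times 2\) system (the matrix and right-hand side are chosen here, not taken from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

# A candidate x solves the system exactly when A @ x reproduces b.
x = np.array([1.0, 2.0])
solves = np.allclose(A @ x, b)
```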

## 1.7 Linear Transformations

A linear transformation is a function between vector spaces that preserves vector addition and scalar multiplication.

A function

$$
T : V \to W
$$

is linear if for all vectors \(u, v \in V\) and all scalars \(c\),

$$
T(u + v) = T(u) + T(v)
$$

and

$$
T(cv) = cT(v).
$$

Every matrix defines a linear transformation. If \(A\) is an \(m \times n\) matrix, then

$$
T(x) = Ax
$$

defines a linear transformation from \(\mathbb{R}^n\) to \(\mathbb{R}^m\).

This gives matrices their conceptual meaning. A matrix is more than a table of numbers. It is a rule for transforming vectors.

Some matrices rotate vectors. Some stretch them. Some project them onto a line or plane. Some collapse dimensions. Some change coordinates. Linear algebra studies these transformations through their algebraic and geometric properties.
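The two defining identities can be verified numerically for \(T(x) = Ax\) with a randomly chosen matrix (a NumPy sketch; the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # T(x) = A @ x maps R^2 to R^3
u = rng.standard_normal(2)
v = rng.standard_normal(2)
c = 2.5

# Both defining identities hold, up to floating-point rounding.
additive = np.allclose(A @ (u + v), A @ u + A @ v)
homogeneous = np.allclose(A @ (c * u), c * (A @ u))
```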

## 1.8 The Geometric View

In two and three dimensions, linear algebra has a direct geometric interpretation.

Vectors are points or arrows. Lines and planes are sets of vectors. Matrices transform space. A matrix may stretch, rotate, reflect, shear, or project.

For example, the matrix

$$
\begin{bmatrix}
2 & 0 \\
0 & 1
\end{bmatrix}
$$

doubles the first coordinate and leaves the second coordinate unchanged. It stretches the plane horizontally.

The matrix

$$
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
$$

rotates the plane by \(90^\circ\) counterclockwise.
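Applying this rotation matrix to the first standard basis vector shows the \(90^\circ\) turn directly (a NumPy sketch):

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0, 0.0]])

# The basis vector (1, 0) is rotated 90 degrees counterclockwise to (0, 1).
rotated = R @ np.array([1.0, 0.0])
```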

The geometric view makes many ideas easier to understand. Rank measures the dimension of the image. The null space describes the directions that are collapsed to zero. Eigenvectors describe the directions whose line is preserved by the transformation.

## 1.9 The Algebraic View

The algebraic view treats vectors and matrices as objects governed by rules.

These rules include associativity, distributivity, identities, inverses, and compatibility with scalar multiplication. They allow us to manipulate equations symbolically and prove general theorems.

For example, if \(A\) is invertible, then the equation

$$
Ax = b
$$

has the unique solution

$$
x = A^{-1}b.
$$

This statement is algebraic. It depends on the existence of an inverse matrix and the rules of matrix multiplication.

The algebraic view is especially important in higher dimensions, where geometric intuition becomes limited.
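The formula \(x = A^{-1}b\) can be evaluated literally, though in practice one solves the system without forming the inverse. A NumPy sketch with an illustrative invertible matrix (chosen here, not taken from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, -1.0]])
b = np.array([13.0, 7.0])

x_inv = np.linalg.inv(A) @ b      # literal reading of x = A^{-1} b
x_solve = np.linalg.solve(A, b)   # preferred in practice: no explicit inverse
```

Both routes agree up to rounding; `solve` is generally faster and more accurate for a single right-hand side.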

## 1.10 The Computational View

Linear algebra is also a computational subject.

Many practical problems reduce to solving equations, computing decompositions, estimating eigenvalues, or approximating high-dimensional data. Numerical linear algebra studies how to do these tasks efficiently and accurately on computers.

The equation

$$
Ax = b
$$

may involve millions or billions of unknowns. In such cases, we do not compute by hand. We use algorithms that exploit structure, sparsity, approximation, and stability.

Important computational problems include:

| Problem | Typical method |
|---|---|
| Solve \(Ax = b\) | Gaussian elimination, LU decomposition, iterative methods |
| Approximate inconsistent systems | Least squares |
| Find principal directions | Eigenvalue methods, singular value decomposition |
| Compress data | Low-rank approximation |
| Solve large sparse systems | Krylov subspace methods |
| Analyze graphs | Spectral methods |

This computational view connects linear algebra with scientific computing, statistics, optimization, machine learning, graphics, and engineering.
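One row of the table above, least squares for an inconsistent system, can be sketched in a few lines (a NumPy example with made-up data points):

```python
import numpy as np

# Fit a line y = m*t + c to three points that no single line passes through.
t = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 0.9, 2.1])
A = np.column_stack([t, np.ones_like(t)])   # columns: t and the constant 1

coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, c = coef
```

The solver returns the coefficients minimizing \(\lVert A x - y \rVert\); for these points the fitted slope is \(1.05\) and the intercept is \(-0.05\).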

## 1.11 Why Linearity Matters

Linear problems are simpler than nonlinear problems because they preserve structure.

If we understand the behavior of a linear transformation on a basis, then we understand its behavior everywhere. This is a powerful reduction. Instead of studying infinitely many vectors separately, we study a finite set of basis vectors and extend by linearity.

Suppose \(v\) can be written as

$$
v = c_1v_1 + c_2v_2 + \cdots + c_nv_n.
$$

If \(T\) is linear, then

$$
T(v) = c_1T(v_1) + c_2T(v_2) + \cdots + c_nT(v_n).
$$

Thus the transformation is completely determined by what it does to the basis vectors.

This principle explains why matrices are so effective. The columns of a matrix are the images of the standard basis vectors. Once those columns are known, the entire transformation is known.
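The claim that the columns are the images of the standard basis vectors is easy to confirm numerically (a NumPy sketch with an arbitrary \(2 \times 2\) matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, -1.0]])
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The image of each standard basis vector is the corresponding column of A.
first_col = np.allclose(A @ e1, A[:, 0])
second_col = np.allclose(A @ e2, A[:, 1])
```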

## 1.12 Main Questions of Linear Algebra

Linear algebra repeatedly asks a small number of fundamental questions.

Given a system \(Ax = b\):

| Question | Meaning |
|---|---|
| Does a solution exist? | Is \(b\) in the column space of \(A\)? |
| Is the solution unique? | Is the null space of \(A\) trivial? |
| How can we compute a solution? | Which algorithm is stable and efficient? |
| How sensitive is the solution? | How does error in \(A\) or \(b\) affect \(x\)? |

Given a matrix \(A\):

| Question | Meaning |
|---|---|
| What does \(A\) do geometrically? | Transformation view |
| What is its rank? | Dimension of its image |
| What is its null space? | Directions sent to zero |
| Is it invertible? | Can the transformation be undone? |
| What are its eigenvalues? | Fundamental scaling factors |
| Can it be decomposed? | Factorization into simpler pieces |

These questions appear throughout the book in increasingly precise forms.

## 1.13 The Scope of the Subject

Linear algebra has several layers.

At the first layer, it studies systems of equations and matrices. This layer is concrete and computational.

At the second layer, it studies vector spaces and linear transformations. This layer explains why matrix methods work.

At the third layer, it studies structure: dimension, rank, determinant, eigenvalues, inner products, orthogonality, and canonical forms.

At the fourth layer, it studies computation and applications: decompositions, numerical stability, data analysis, optimization, geometry, and differential equations.

A reference book must cover all of these layers. The elementary material gives the language. The abstract material gives the theory. The numerical material gives practical methods. The applications show why the subject appears so widely.

## 1.14 A Small Example

Consider the system

$$
\begin{aligned}
x + y &= 5, \\
2x - y &= 1.
\end{aligned}
$$

In matrix form, this is

$$
\begin{bmatrix}
1 & 1 \\
2 & -1
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} =
\begin{bmatrix}
5 \\
1
\end{bmatrix}.
$$

The first equation says that \((x,y)\) lies on the line \(x + y = 5\). The second equation says that \((x,y)\) lies on the line \(2x - y = 1\). Solving the system means finding the intersection of these two lines.

To eliminate \(y\), add the two equations; the \(y\) terms cancel:

$$
3x = 6,
$$

so

$$
x = 2.
$$

Substitute into \(x + y = 5\):

$$
2 + y = 5,
$$

so

$$
y = 3.
$$

The solution is

$$
\begin{bmatrix}
2 \\
3
\end{bmatrix}.
$$

This example contains several themes. The system has equations. The equations define geometric objects. The coefficients form a matrix. The solution is a vector. The matrix maps the unknown vector to the right-hand side. Later chapters develop these ideas systematically.
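The hand computation above can be reproduced by a standard solver (a NumPy sketch; the text itself assumes no software):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, -1.0]])
b = np.array([5.0, 1.0])

# Solves the 2x2 system from this section; reproduces x = 2, y = 3.
sol = np.linalg.solve(A, b)
```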

## 1.15 Summary

Linear algebra studies vectors, matrices, vector spaces, and linear transformations. Its basic operations are vector addition and scalar multiplication. Its central objects are linear combinations, systems of linear equations, matrices, subspaces, and transformations.

The subject has three major viewpoints:

| Viewpoint | Main idea |
|---|---|
| Geometric | Vectors and transformations describe space |
| Algebraic | Matrices and vector spaces obey precise rules |
| Computational | Algorithms solve large linear problems |

These viewpoints support one another. Geometry gives intuition. Algebra gives proofs. Computation gives methods. Together they make linear algebra one of the basic languages of modern mathematics, science, and engineering.
