
Chapter 117. Robotics and Kinematics

Robotics studies machines that sense, move, compute, and act in the physical world. A robot may be an industrial arm, a mobile platform, a drone, a humanoid, a surgical mechanism, or a small embedded device with motors and sensors.

Kinematics studies motion without considering forces. It asks where each part of a robot is, how fast it is moving, and how joint motion affects the position and orientation of the end effector.

Linear algebra is the main language of robot kinematics. Positions are vectors. Orientations are rotation matrices. Rigid motions are homogeneous transformation matrices. Velocities are represented by Jacobians. Calibration and estimation problems become matrix equations and least squares problems.

A serial robot arm is a chain of links connected by joints. The central kinematic question is:

Given the joint variables, where is the end effector?

This is the forward kinematics problem.

The inverse question is:

Given a desired end-effector pose, what joint variables produce it?

This is the inverse kinematics problem.

Both problems are expressed using vectors, matrices, coordinate frames, and transformations.

117.1 Configuration

The configuration of a robot is a complete specification of its movable degrees of freedom.

For a robot arm with n joints, the configuration is usually written as

q = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_n \end{bmatrix}.

The vector q is called the joint vector.

Each component q_i may represent a rotation angle or a displacement, depending on the joint type.

| Joint type | Variable | Motion |
| --- | --- | --- |
| Revolute | Angle θ | Rotation about an axis |
| Prismatic | Distance d | Translation along an axis |
| Fixed | None | No relative motion |

The set of all possible configurations is called the configuration space.

For a simple planar arm with two rotating joints,

q = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}.

The configuration space is the set of possible angle pairs.

117.2 Links and Joints

A robot manipulator is built from links and joints.

A link is a rigid body. A joint constrains the relative motion between two links.

In a serial manipulator, links form a chain:

\text{base} \to \text{link 1} \to \text{link 2} \to \cdots \to \text{end effector}.

The end effector is the tool or terminal frame of the robot. It may be a gripper, welding torch, camera, drill, sensor, or other device.

Kinematics assumes that each link is rigid. Therefore the position of any point on a link is determined by the pose of that link.

A pose consists of position and orientation.

117.3 Coordinate Frames

A coordinate frame consists of an origin and basis vectors.

In three dimensions, a frame may be written as

\mathcal{F} = (o; e_1, e_2, e_3),

where o is the origin and e_1, e_2, e_3 are orthonormal basis vectors.

Robotics assigns coordinate frames to the base, joints, links, sensors, and end effector.

A point may have different coordinate vectors in different frames. If p^A is the coordinate vector of a point expressed in frame A, and p^B is the same point expressed in frame B, then a transformation relates them.

Coordinate frames are essential because robot motion is relative. A camera may be mounted on the wrist. A tool may be mounted on a flange. A target may be expressed in a world frame. Kinematics keeps these representations consistent.

117.4 Rotation Matrices

A rotation matrix represents orientation.

In three dimensions, a rotation matrix R is a 3 × 3 matrix satisfying

R^T R = I,

and

\det R = 1.

The columns of R are the basis vectors of one frame expressed in another frame.

If v^B is a vector expressed in frame B, and R_{AB} maps coordinates from frame B to frame A, then

v^A = R_{AB} v^B.

Rotation matrices are orthogonal. Therefore

R^{-1} = R^T.

This identity is used constantly in robotics because changing direction between frames often requires the inverse orientation.

117.5 Planar Rotations

In two dimensions, rotation by angle θ is represented by

R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.

This matrix rotates vectors counterclockwise by θ.

For example, the unit vector along the x-axis is

e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.

After rotation,

R(\theta) e_1 = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}.

Thus the first column of the rotation matrix is the direction of the rotated x-axis.

The second column is the direction of the rotated y-axis.
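Both facts about planar rotations — the columns are the rotated axes, and the inverse equals the transpose — are easy to verify numerically. A minimal NumPy sketch (the helper name `rot2` is ours, not a standard API):

```python
import numpy as np

def rot2(theta):
    """2-D rotation matrix: rotates vectors counterclockwise by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rot2(np.pi / 3)

# The first column is the rotated x-axis, the second the rotated y-axis.
assert np.allclose(R[:, 0], [np.cos(np.pi/3), np.sin(np.pi/3)])
assert np.allclose(R[:, 1], [-np.sin(np.pi/3), np.cos(np.pi/3)])

# Orthogonality: R^T R = I, so the inverse is simply the transpose.
assert np.allclose(R.T @ R, np.eye(2))
assert np.allclose(np.linalg.inv(R), R.T)
```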

117.6 Rigid Transformations

A rigid transformation combines rotation and translation.

It maps a point by

p' = Rp + t,

where R is a rotation matrix and t is a translation vector.

A rigid transformation preserves distances and angles.

Using homogeneous coordinates, it is written as

T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}.

For three-dimensional robotics, T is a 4 × 4 matrix:

T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.

Homogeneous transformation matrices are a standard representation for robot kinematics and calibration; for example, hand-eye calibration is commonly expressed by equations involving unknown homogeneous transformation matrices.

117.7 Homogeneous Coordinates

A point in three dimensions is represented in homogeneous coordinates as

\tilde{p} = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}.

Then

\tilde{p}' = T \tilde{p}

means

\begin{bmatrix} p' \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p \\ 1 \end{bmatrix}.

The result is

p' = Rp + t.

A direction vector is represented with final coordinate 0:

\tilde{v} = \begin{bmatrix} v \\ 0 \end{bmatrix}.

Then

T \tilde{v} = \begin{bmatrix} Rv \\ 0 \end{bmatrix}.

The translation has no effect on directions.
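The point-versus-direction distinction can be checked directly in code. A short NumPy sketch, with an assumed helper `make_T` for assembling the 4 × 4 homogeneous matrix:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# A 90-degree rotation about z plus a translation.
c, s = np.cos(np.pi/2), np.sin(np.pi/2)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
T = make_T(Rz, np.array([1.0, 2.0, 3.0]))

p = np.array([1.0, 0.0, 0.0, 1.0])   # point: final coordinate 1
v = np.array([1.0, 0.0, 0.0, 0.0])   # direction: final coordinate 0

# The point is rotated and translated; the direction is only rotated.
assert np.allclose(T @ p, [1.0, 3.0, 3.0, 1.0])
assert np.allclose(T @ v, [0.0, 1.0, 0.0, 0.0])
```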

117.8 Composition of Poses

Rigid transformations compose by matrix multiplication.

If T_{AB} maps coordinates from frame B to frame A, and T_{BC} maps coordinates from frame C to frame B, then

T_{AC} = T_{AB} T_{BC}.

This means that coordinates transform as

p^A = T_{AB} p^B,

and

p^B = T_{BC} p^C.

Therefore

p^A = T_{AB} T_{BC} p^C.

For a serial robot, the end-effector pose is obtained by multiplying the transformations from one link to the next.

117.9 Forward Kinematics

Forward kinematics maps joint variables to the end-effector pose.

For a serial chain with n joints,

T_{0n}(q) = T_{01}(q_1) T_{12}(q_2) \cdots T_{n-1,n}(q_n).

Here T_{0n} is the pose of the end effector relative to the base.

The function

q \mapsto T_{0n}(q)

is usually nonlinear, even though each pose is represented by matrices.

The nonlinearity comes from sine and cosine terms in rotations and from products of transformations.

Forward kinematics is deterministic: if the geometry and joint values are known, the pose can be computed.

117.10 Example: Planar Two-Link Arm

Consider a planar arm with two revolute joints.

Let the link lengths be l_1 and l_2, and let the joint angles be θ_1 and θ_2.

The end-effector position is

x = l_1\cos\theta_1 + l_2\cos(\theta_1+\theta_2), \qquad y = l_1\sin\theta_1 + l_2\sin(\theta_1+\theta_2).

This is the forward kinematics map

f(\theta_1, \theta_2) = \begin{bmatrix} x \\ y \end{bmatrix}.

The first link rotates by θ_1. The second link rotates relative to the first by θ_2. Therefore its absolute orientation is θ_1 + θ_2.

Even in this simple example, the map from joint space to workspace is nonlinear.
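The forward kinematics map above is only a few lines of code. An illustrative NumPy sketch with assumed link lengths l_1 = 1.0 and l_2 = 0.7 (the function name `fk_two_link` is ours):

```python
import numpy as np

def fk_two_link(theta1, theta2, l1=1.0, l2=0.7):
    """Forward kinematics of a planar two-link arm: joint angles -> (x, y)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

# Fully stretched along the x-axis: both angles zero, tip at l1 + l2.
assert np.allclose(fk_two_link(0.0, 0.0), [1.7, 0.0])

# Folded back on itself: the tip sits at l1 - l2 from the base.
assert np.allclose(fk_two_link(0.0, np.pi), [0.3, 0.0])
```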

117.11 Workspace

The workspace of a robot is the set of poses or positions the end effector can reach.

For the planar two-link arm, the reachable positions form an annular region when both joints can rotate fully.

The maximum distance from the base is

l_1 + l_2.

The minimum distance is

|l_1 - l_2|.

Thus the reachable radius r satisfies

|l_1 - l_2| \leq r \leq l_1 + l_2.

Joint limits, obstacles, and mechanical constraints reduce the workspace.

Workspace analysis is important in robot design because it determines where the robot can operate.
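The annulus bound can be verified empirically by sampling joint angles and checking the resulting radii. A quick NumPy sketch with assumed link lengths l_1 = 1.0 and l_2 = 0.7:

```python
import numpy as np

l1, l2 = 1.0, 0.7

# Sample random joint angle pairs and compute the end-effector radius for each.
rng = np.random.default_rng(2)
thetas = rng.uniform(-np.pi, np.pi, size=(1000, 2))
x = l1*np.cos(thetas[:, 0]) + l2*np.cos(thetas[:, 0] + thetas[:, 1])
y = l1*np.sin(thetas[:, 0]) + l2*np.sin(thetas[:, 0] + thetas[:, 1])
r = np.hypot(x, y)

# Every reachable point lies inside the annulus |l1 - l2| <= r <= l1 + l2.
assert np.all(r <= l1 + l2 + 1e-12)
assert np.all(r >= abs(l1 - l2) - 1e-12)
```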

117.12 Inverse Kinematics

Inverse kinematics asks for joint variables that produce a desired end-effector pose.

Given a target pose T_d, find q such that

T_{0n}(q) = T_d.

This problem can be difficult.

There may be no solution, one solution, finitely many solutions, or infinitely many solutions.

For a redundant robot, there are more joint variables than task constraints. Such a robot may have infinitely many configurations that produce the same end-effector pose.

For many serial robots, inverse kinematics is divided into position and orientation subproblems when the geometry permits it. This split is common in manipulators with spherical wrists.

117.13 Inverse Kinematics for the Two-Link Arm

For the planar two-link arm, suppose the target point is

(x, y).

Let

r^2 = x^2 + y^2.

By the law of cosines,

\cos\theta_2 = \frac{r^2 - l_1^2 - l_2^2}{2 l_1 l_2}.

If the right-hand side lies outside [-1, 1], the target is unreachable.

Otherwise,

\theta_2 = \pm \arccos\left( \frac{r^2 - l_1^2 - l_2^2}{2 l_1 l_2} \right).

The two signs correspond to elbow-up and elbow-down configurations.

Then θ_1 can be found by subtracting the interior angle of the arm triangle from the direction of the target: \theta_1 = \operatorname{atan2}(y, x) - \operatorname{atan2}(l_2 \sin\theta_2,\, l_1 + l_2 \cos\theta_2).

This example shows that inverse kinematics may have multiple valid solutions.
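The derivation above translates directly into code. A hedged NumPy sketch (function names are ours) that returns both elbow branches and verifies each against forward kinematics:

```python
import numpy as np

def ik_two_link(x, y, l1=1.0, l2=0.7):
    """Both inverse-kinematics solutions (elbow-down, elbow-up) for a planar two-link arm."""
    r2 = x**2 + y**2
    c2 = (r2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return []                       # target outside the reachable annulus
    solutions = []
    for theta2 in (np.arccos(c2), -np.arccos(c2)):
        theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                               l1 + l2 * np.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions

def fk(theta1, theta2, l1=1.0, l2=0.7):
    return np.array([l1*np.cos(theta1) + l2*np.cos(theta1 + theta2),
                     l1*np.sin(theta1) + l2*np.sin(theta1 + theta2)])

target = (1.2, 0.5)
sols = ik_two_link(*target)
assert len(sols) == 2                   # two distinct elbow configurations
for th1, th2 in sols:                   # both branches reach the target
    assert np.allclose(fk(th1, th2), target)
```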

117.14 Differential Kinematics

Differential kinematics studies velocities.

Let

x = f(q)

be the end-effector position or task vector.

For small changes,

\Delta x \approx J(q)\,\Delta q.

Here J(q) is the Jacobian matrix:

J(q) = \frac{\partial f}{\partial q}.

For velocities,

\dot{x} = J(q)\dot{q}.

The Jacobian maps joint velocities to end-effector velocities.

Differential kinematics supports numerical inverse kinematics, resolved-rate motion control, and manipulability analysis. Modern robotics literature often emphasizes systematic computation of manipulator Jacobians for these purposes.

117.15 Jacobian of the Two-Link Arm

For the planar two-link arm,

x = l_1\cos\theta_1 + l_2\cos(\theta_1+\theta_2), \qquad y = l_1\sin\theta_1 + l_2\sin(\theta_1+\theta_2).

The Jacobian is

J(\theta_1, \theta_2) = \begin{bmatrix} \frac{\partial x}{\partial \theta_1} & \frac{\partial x}{\partial \theta_2} \\ \frac{\partial y}{\partial \theta_1} & \frac{\partial y}{\partial \theta_2} \end{bmatrix}.

Therefore

J = \begin{bmatrix} -l_1\sin\theta_1 - l_2\sin(\theta_1+\theta_2) & -l_2\sin(\theta_1+\theta_2) \\ l_1\cos\theta_1 + l_2\cos(\theta_1+\theta_2) & l_2\cos(\theta_1+\theta_2) \end{bmatrix}.

This matrix tells how small changes in the two joint angles move the end effector in the plane.
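A useful habit is to check an analytic Jacobian against finite differences of the forward kinematics. A minimal NumPy sketch with assumed link lengths (helper names are ours):

```python
import numpy as np

def fk(q, l1=1.0, l2=0.7):
    """Forward kinematics of the planar two-link arm, q = (theta1, theta2)."""
    t1, t2 = q
    return np.array([l1*np.cos(t1) + l2*np.cos(t1 + t2),
                     l1*np.sin(t1) + l2*np.sin(t1 + t2)])

def jacobian(q, l1=1.0, l2=0.7):
    """Analytic Jacobian from the formula above."""
    t1, t2 = q
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

# Sanity check against a central finite-difference Jacobian.
q = np.array([0.4, 1.1])
h = 1e-6
J_num = np.column_stack([(fk(q + h*e) - fk(q - h*e)) / (2*h) for e in np.eye(2)])
assert np.allclose(jacobian(q), J_num, atol=1e-6)
```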

117.16 Singular Configurations

A singular configuration occurs when the Jacobian loses rank.

At such a configuration, some end-effector velocities cannot be produced, or some joint motions do not move the end effector in the desired task directions.

For a square Jacobian, singularity means

\det J(q) = 0.

For the two-link planar arm, the Jacobian determinant is

\det J = l_1 l_2 \sin\theta_2.

Thus singularities occur when

\theta_2 = 0

or

\theta_2 = \pi.

These are configurations where the two links are collinear.

At a singularity, inverse velocity kinematics becomes ill-conditioned.

Small desired end-effector motions may require very large joint velocities.
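The determinant formula and the rank loss at collinear configurations can be confirmed numerically. A NumPy sketch with assumed link lengths l_1 = 1.0 and l_2 = 0.7:

```python
import numpy as np

def jacobian(t1, t2, l1=1.0, l2=0.7):
    """Analytic Jacobian of the planar two-link arm."""
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

# det J = l1 * l2 * sin(theta2), independent of theta1.
for t1, t2 in [(0.3, 0.8), (1.2, 0.8), (0.5, -2.0)]:
    assert np.isclose(np.linalg.det(jacobian(t1, t2)), 1.0 * 0.7 * np.sin(t2))

# At theta2 = 0 (links collinear) the Jacobian loses rank.
assert np.isclose(np.linalg.det(jacobian(0.3, 0.0)), 0.0)
assert np.linalg.matrix_rank(jacobian(0.3, 0.0)) == 1
```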

117.17 Velocity Inverse Kinematics

Suppose we want the end effector to move with a desired velocity \dot{x}.

The differential relation is

\dot{x} = J(q)\dot{q}.

If J is square and invertible, then

\dot{q} = J(q)^{-1}\dot{x}.

If J is not square or is rank deficient, the Moore-Penrose pseudoinverse is commonly used:

\dot{q} = J(q)^{+}\dot{x}.

For redundant robots, this gives the minimum-norm joint velocity among all solutions.

If exact tracking is impossible, the pseudoinverse gives a least squares solution.

Thus velocity inverse kinematics is a direct application of linear systems, rank, and least squares.
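In the square, non-singular case the pseudoinverse reduces to the ordinary inverse, which is easy to confirm. A NumPy sketch using the two-link Jacobian at an assumed non-singular configuration:

```python
import numpy as np

def jacobian(t1, t2, l1=1.0, l2=0.7):
    """Analytic Jacobian of the planar two-link arm."""
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

J = jacobian(0.4, 1.1)                 # sin(theta2) != 0, so J is invertible
xdot = np.array([0.1, -0.2])           # desired end-effector velocity

# Square invertible case: pseudoinverse and plain solve agree.
qdot = np.linalg.pinv(J) @ xdot
assert np.allclose(qdot, np.linalg.solve(J, xdot))

# The commanded end-effector velocity is achieved exactly.
assert np.allclose(J @ qdot, xdot)
```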

117.18 Redundancy

A robot is redundant for a task if it has more joint degrees of freedom than required task dimensions.

For example, a 7-joint arm controlling a 6-dimensional end-effector pose has one extra degree of freedom.

Redundancy allows secondary objectives:

| Objective | Example |
| --- | --- |
| Avoid joint limits | Stay near the middle of joint ranges |
| Avoid obstacles | Move the elbow away from collision |
| Improve manipulability | Stay away from singularities |
| Reduce energy | Prefer small joint motion |
| Maintain posture | Keep a comfortable configuration |

In linear algebra terms, redundancy appears as a nontrivial null space of the Jacobian.

If

Jz = 0,

then z is a joint velocity that produces no first-order end-effector motion.

Such motion can be used for secondary tasks.

117.19 Null-Space Motion

For a redundant robot, the general velocity solution has the form

\dot{q} = J^{+}\dot{x} + (I - J^{+}J)z.

Here z is an arbitrary joint velocity vector.

The matrix

I - J^{+}J

projects onto the null space of J.

The first term performs the desired task motion. The second term changes the robot posture without changing the end-effector velocity to first order.

This formula is a standard example of projection methods in robotics.
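The two properties of this decomposition — the null-space term leaves the task untouched, and the pseudoinverse term is the minimum-norm solution — can be checked on any full-row-rank Jacobian. A NumPy sketch using a random 2 × 3 matrix as a stand-in for a redundant robot's Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((2, 3))        # redundant: 3 joints, 2 task dimensions
Jp = np.linalg.pinv(J)
xdot = np.array([0.3, -0.1])           # desired task velocity
z = rng.standard_normal(3)             # arbitrary secondary joint velocity

qdot = Jp @ xdot + (np.eye(3) - Jp @ J) @ z

# The null-space term does not disturb the task velocity...
assert np.allclose(J @ qdot, xdot)

# ...and Jp @ xdot alone is the minimum-norm joint velocity.
assert np.linalg.norm(Jp @ xdot) <= np.linalg.norm(qdot) + 1e-12
```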

117.20 Manipulability

Manipulability measures how easily a robot can move in different task directions.

Given

\dot{x} = J\dot{q},

suppose joint velocities satisfy

\|\dot{q}\| \leq 1.

Then possible end-effector velocities form an ellipsoid:

\dot{x}^T (J J^T)^{-1} \dot{x} \leq 1,

assuming J has full row rank.

The singular values of J determine the axes of this ellipsoid.

Large singular values indicate directions where the end effector can move easily. Small singular values indicate directions where motion is difficult.

Near singularity, one or more singular values approach zero.
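The collapse of a singular value near a singularity is visible directly from an SVD. A NumPy sketch comparing the two-link Jacobian at a well-conditioned configuration and one close to θ_2 = 0:

```python
import numpy as np

def jacobian(t1, t2, l1=1.0, l2=0.7):
    """Analytic Jacobian of the planar two-link arm."""
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

# Singular values of J are the semi-axis lengths of the velocity ellipsoid.
s_good = np.linalg.svd(jacobian(0.3, np.pi/2), compute_uv=False)
s_near = np.linalg.svd(jacobian(0.3, 0.05), compute_uv=False)

assert np.all(s_good > 0)

# Near theta2 = 0 the smallest singular value collapses toward zero:
# the ellipsoid flattens and some directions become hard to reach.
assert s_near[-1] < s_good[-1]
```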

117.21 Kinematic Calibration

Kinematic calibration estimates unknown geometric parameters from measurements.

A robot may have small errors in link lengths, joint offsets, sensor placement, or tool position.

Calibration problems often lead to equations involving transformations.

For example, robot-world and hand-eye calibration can be written in forms such as

AX = ZB,

where X and Z are unknown homogeneous transformations. Such calibration is important for vision-guided manipulation because the robot, camera, and world coordinate frames must be related accurately.

These equations are commonly solved using least squares, nonlinear optimization, or iterative methods.

117.22 Mobile Robot Kinematics

A mobile robot moves through a plane or space.

For a planar mobile robot, a pose may be written as

q = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}.

Here x, y give position and θ gives heading.

A differential-drive robot has two wheels. If the left and right wheel speeds are v_L and v_R, and the wheel separation is b, then

v = \frac{v_R + v_L}{2}, \qquad \omega = \frac{v_R - v_L}{b}.

The planar motion is

\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega.

This model is nonlinear because the heading angle appears inside sine and cosine.
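The model can be simulated with simple Euler integration. A NumPy sketch with an assumed wheel separation b = 0.5 and time step dt = 0.01 (the helper name `step` is ours):

```python
import numpy as np

def step(pose, v_l, v_r, b=0.5, dt=0.01):
    """One Euler step of the differential-drive model."""
    x, y, theta = pose
    v = (v_r + v_l) / 2                # forward speed
    w = (v_r - v_l) / b                # turn rate
    return np.array([x + v*np.cos(theta)*dt,
                     y + v*np.sin(theta)*dt,
                     theta + w*dt])

# Equal wheel speeds: straight-line motion, heading unchanged.
pose = np.zeros(3)
for _ in range(100):
    pose = step(pose, 1.0, 1.0)
assert np.allclose(pose, [1.0, 0.0, 0.0])

# Opposite wheel speeds: pure rotation in place.
pose = np.zeros(3)
for _ in range(100):
    pose = step(pose, -0.5, 0.5)
assert np.allclose(pose[:2], [0.0, 0.0])   # no translation
assert np.isclose(pose[2], 2.0)            # omega = 1/0.5 = 2 rad/s over 1 s
```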

117.23 Nonholonomic Constraints

Many mobile robots have constraints on their instantaneous motion.

A wheeled robot may move forward and turn, but it cannot move sideways.

For a unicycle model, the sideways velocity constraint is

-\sin\theta \,\dot{x} + \cos\theta \,\dot{y} = 0.

This is a nonholonomic constraint.

It restricts instantaneous velocities rather than reachable positions.

Nonholonomic systems require special planning and control methods because not every instantaneous direction is available, even if the robot can eventually reach many positions through sequences of motions.

117.24 State Estimation

Robots use sensors to estimate their state.

The state may include position, velocity, orientation, map features, or object poses.

A linear measurement model has the form

y = Hx + \varepsilon.

Here x is the state, y is the measurement, H is the measurement matrix, and ε is measurement noise.

Estimating x from y is often a least squares or probabilistic linear algebra problem.
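For the static measurement model this is ordinary least squares. A NumPy sketch with an assumed 2-dimensional state observed through 20 noisy scalar measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

x_true = np.array([2.0, -1.0])          # unknown state to recover
H = rng.standard_normal((20, 2))        # 20 scalar measurements of a 2-D state
y = H @ x_true + 0.01 * rng.standard_normal(20)

# Least-squares estimate: minimize ||y - Hx||.
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)

# With 20 measurements and small noise, the estimate is close to the truth.
assert np.allclose(x_hat, x_true, atol=0.05)
```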

When the system is dynamic, the state evolves:

x_{k+1} = A x_k + B u_k + w_k.

This leads to Kalman filtering and related recursive estimation methods.

117.25 Control

Control chooses inputs so that a robot follows a desired behavior.

For a linear system,

x_{k+1} = A x_k + B u_k,

the matrix A describes natural state evolution, and B describes how controls affect the state.

A state-feedback controller has the form

u_k = -K x_k.

Then the closed-loop system is

x_{k+1} = (A - BK) x_k.

The eigenvalues of A - BK determine stability and convergence.

Thus control design is closely connected to eigenvalue placement, observability, controllability, and optimization.
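The eigenvalue condition is easy to check numerically. A NumPy sketch for an assumed discrete double-integrator-like system with a hand-picked gain (the matrices A, B, K here are illustrative, not from the text):

```python
import numpy as np

# Discrete double-integrator-like system, assumed time step dt = 0.1.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

# Open loop: eigenvalues of A sit on the unit circle (not asymptotically stable).
assert np.allclose(np.abs(np.linalg.eigvals(A)), 1.0)

# A hand-picked state-feedback gain K for u_k = -K x_k.
K = np.array([[10.0, 5.0]])
eig_cl = np.linalg.eigvals(A - B @ K)

# Closed loop: all eigenvalues strictly inside the unit circle -> stable.
assert np.all(np.abs(eig_cl) < 1.0)
```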

117.26 Robotics and Linear Algebra

The main objects of robotics are linear algebraic objects.

| Robotics object | Linear algebra object |
| --- | --- |
| Position | Vector |
| Orientation | Rotation matrix |
| Pose | Homogeneous transformation matrix |
| Joint configuration | Vector |
| Forward kinematics | Product of transformations |
| Velocity kinematics | Jacobian |
| Singular configuration | Rank loss |
| Redundancy | Null space |
| Calibration | Matrix equations |
| State estimation | Least squares and filtering |
| Control | State-space matrices |
| Workspace analysis | Geometry of maps |
| Manipulability | Singular values |

This table shows why robotics is inseparable from linear algebra.

117.27 Summary

Robot kinematics describes motion through coordinate frames, transformations, and joint variables.

Forward kinematics computes end-effector pose from joint configuration. Inverse kinematics seeks joint configuration from desired pose. Differential kinematics uses the Jacobian to relate joint velocities to end-effector velocities.

Singularities are rank failures of the Jacobian. Redundancy is expressed through its null space. Manipulability is measured through singular values. Calibration uses matrix equations and least squares. Mobile robots add pose dynamics and nonholonomic constraints.

The central principle is that robotic motion is geometry made computational. Linear algebra supplies the representation, and kinematics supplies the map from robot mechanisms to motion.