Chapter 1. Foundations

This chapter defines how you think about algorithms before you write any code. It establishes a small set of rules for describing problems, reasoning about correctness, and estimating cost. Every later technique relies on these rules, so the focus here stays on precision and repeatability.

An algorithm begins with a contract. You specify inputs, outputs, and constraints in a way that removes ambiguity. This includes data shape, value ranges, ordering guarantees, and failure modes. A clear contract lets you reason locally. Without it, later proofs and optimizations become fragile.
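As a minimal sketch of such a contract (the function and its behavior are illustrative, not taken from a later chapter), the data shape, ordering guarantee, and failure mode can be stated directly where the code lives:

```python
def index_of(sorted_values, target):
    """Return the leftmost index of target in sorted_values.

    Contract (illustrative):
      inputs:  sorted_values is a list of integers in non-decreasing order;
               target is an integer.
      output:  an index i such that sorted_values[i] == target.
      failure: raises ValueError if target is absent.
    """
    lo, hi = 0, len(sorted_values)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sorted_values) and sorted_values[lo] == target:
        return lo
    raise ValueError("target not found")
```

Because the ordering guarantee is part of the contract, the binary-search step can be justified locally; nothing outside the function needs to be consulted.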

Correctness is treated as a construction, not a claim. You describe an invariant that holds at every step, and you show how each operation preserves it. For iterative algorithms, this takes the form of loop invariants. For recursive algorithms, it becomes a structural argument that each call reduces the problem while maintaining validity. These arguments should be short, mechanical, and checkable.
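A loop invariant argument of this kind can be recorded as comments at the points where the invariant is assumed and restored. The sketch below uses insertion sort purely as a familiar vehicle:

```python
def insertion_sort(values):
    """Sort values in place and return it."""
    for i in range(1, len(values)):
        # Invariant (assumed): values[0:i] is sorted.
        key = values[i]
        j = i - 1
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]   # shift larger elements right
            j -= 1
        values[j + 1] = key
        # Invariant (restored): values[0:i+1] is sorted.
    return values
```

The argument is short and mechanical, exactly as described: the invariant holds trivially when `i == 1`, each iteration preserves it, and at loop exit it implies the whole list is sorted.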

Cost is measured with simple models. Time complexity captures how running time grows with input size. Space complexity tracks memory usage. You use asymptotic notation to compare algorithms while ignoring constant factors, but you still record constants when they matter in practice. Lower bounds appear early because they tell you when further optimization is impossible.
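One simple cost model is to count a dominant operation directly. The sketch below (illustrative instrumentation, not a benchmarking tool) counts comparisons for a linear scan and a binary search, making the growth difference concrete:

```python
def linear_count(values, target):
    """Linear scan; returns (index or -1, comparisons made)."""
    comparisons = 0
    for i, v in enumerate(values):
        comparisons += 1
        if v == target:
            return i, comparisons
    return -1, comparisons

def binary_count(sorted_values, target):
    """Binary search; returns (index or -1, comparisons made)."""
    comparisons = 0
    lo, hi = 0, len(sorted_values)
    while lo < hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sorted_values) and sorted_values[lo] == target:
        return lo, comparisons
    return -1, comparisons
```

On a sorted list of 1024 elements with an absent target, the linear scan makes 1024 comparisons while the binary search makes about log2(1024) of them: linear versus logarithmic growth, visible without any asymptotic notation.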

The chapter also introduces baseline strategies. Brute force provides a reference implementation that is often correct and easy to test. Greedy methods make locally optimal choices and require a proof that local choices compose into a global solution. Divide and conquer splits problems into independent parts and recombines results. Dynamic programming stores intermediate results to avoid recomputation. These are not full techniques yet, but you begin to recognize when each pattern applies.
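Two of these patterns can be contrasted on the same problem. The sketch below uses Fibonacci numbers as a stand-in: the brute-force version recomputes overlapping subproblems and runs in exponential time, while memoization (a form of dynamic programming) stores each intermediate result once:

```python
from functools import lru_cache

def fib_brute(n):
    """Brute force: correct, easy to test, exponentially slow."""
    if n < 2:
        return n
    return fib_brute(n - 1) + fib_brute(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Dynamic programming via memoization: each subproblem solved once."""
    if n < 2:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)
```

The brute-force version is still valuable: for small inputs it serves as the reference against which the faster version is checked.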

Edge cases receive explicit attention. Empty inputs, minimal sizes, duplicate values, and extreme ranges often break naive implementations. A disciplined approach treats edge cases as part of the specification, not as afterthoughts.
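Treating edge cases as part of the specification can look like this in practice (a minimal sketch with an illustrative function name): the empty input is handled by an explicit, documented decision rather than by whatever the loop happens to do:

```python
def safe_max(values):
    """Maximum of a non-empty list.

    The empty input is part of the contract: it is rejected
    explicitly rather than left to fail by accident.
    """
    if not values:
        raise ValueError("max of empty input is undefined")
    best = values[0]          # single-element input: loop body never runs
    for v in values[1:]:
        if v > best:          # duplicates: strict comparison keeps the first
            best = v
    return best
```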

Testing complements reasoning. You design small cases that exercise invariants, boundary conditions, and failure paths. Random testing and adversarial inputs help reveal incorrect assumptions. A simple baseline algorithm often acts as an oracle for verification.
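The oracle idea can be sketched concretely: a fast implementation under test is compared against a trivially correct brute-force baseline on many small random inputs (all names here are illustrative). Small sizes, including the empty list, are deliberately overrepresented because that is where assumptions break:

```python
import random

def contains(sorted_values, target):
    """Implementation under test: membership via binary search."""
    lo, hi = 0, len(sorted_values)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sorted_values) and sorted_values[lo] == target

def contains_oracle(values, target):
    """Brute-force baseline: obviously correct, used as the oracle."""
    return target in values

def random_check(trials=1000, seed=0):
    rng = random.Random(seed)   # fixed seed keeps failures reproducible
    for _ in range(trials):
        values = sorted(rng.randrange(20) for _ in range(rng.randrange(8)))
        target = rng.randrange(25)
        assert contains(values, target) == contains_oracle(values, target)
    return True
```

A fixed seed is a deliberate choice: when a random test fails, you want to be able to replay it exactly.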

Finally, the chapter sets expectations for implementation. Pseudocode should reflect the invariant structure of the algorithm. Data representation choices should support the operations you need, not the other way around. Numerical limits, overflow, and precision are considered early to avoid hidden errors.
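The point about numerical limits can be made in a few lines. Python integers are arbitrary precision, so the overflow hazard lives mainly in fixed-width languages, but floating-point precision loss appears in every language:

```python
# Integer arithmetic in Python is exact at any magnitude.
big = 2 ** 63
assert big + 1 - big == 1

# Float arithmetic is not exact: compare with a tolerance, never with ==.
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-9

# In fixed-width languages, (lo + hi) // 2 can overflow for large bounds;
# lo + (hi - lo) // 2 is the conventional safe form (harmless in Python too).
lo, hi = 0, 2 ** 62
mid = lo + (hi - lo) // 2
assert mid == 2 ** 61
```

Considering these limits while writing the first version, as the paragraph above suggests, is cheaper than diagnosing them later.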

After this chapter, you should be able to take a problem, define it precisely, select a basic strategy, argue correctness with invariants, and estimate cost with a simple model. These steps form the standard workflow used in all subsequent chapters.

1.1 Problem Statements (3 min): An algorithm begins with a precise statement of the problem.

1.2 Input and Output Models (5 min): An algorithm does not operate on an abstract idea of data.

1.3 Correctness Arguments (5 min): A correctness argument explains why an algorithm returns an acceptable output for every valid input.

1.4 Loop Invariants (4 min): Loop invariants are the primary tool for reasoning about iterative algorithms.

1.5 Recursion Invariants (3 min): Recursive algorithms replace loop structure with self-reference.

1.6 Time Complexity (4 min): Time complexity describes how the running time of an algorithm grows as the input size grows.

1.7 Space Complexity (3 min): Space complexity measures how much memory an algorithm uses as a function of input size.

1.8 Big O Notation (3 min): Big O notation provides a formal way to describe how a function grows.

1.9 Lower Bounds (5 min): A lower bound states that every algorithm for a problem must perform at least a certain amount of work in some model of computation.

1.10 Edge Cases (5 min): Edge cases are valid inputs that sit at the boundary of the specification.

1.11 Testing Algorithms (5 min): Testing does not prove an algorithm correct, but it exposes mistakes in specifications, invariants, edge cases, and implementation details.

1.12 Brute Force Baselines (4 min): A brute force baseline is the simplest correct algorithm you can write from the problem statement.

1.13 Greedy Choices (4 min): A greedy algorithm builds a solution by making one locally best choice at a time.

1.14 Divide and Conquer (4 min): Divide and conquer solves a problem by splitting it into smaller subproblems, solving those subproblems, and combining their answers.

1.15 Dynamic Programming (4 min): Dynamic programming solves problems by storing answers to subproblems and reusing them.

1.16 Randomization (4 min): Randomized algorithms use random choices during execution.

1.17 Amortized Analysis (4 min): Amortized analysis studies the average cost of operations over a sequence, even when individual operations are sometimes expensive.

1.18 Reductions (5 min): A reduction transforms one problem into another problem.

1.19 Pseudocode Style (4 min): Pseudocode is a bridge between the problem statement and an implementation.

1.20 Implementation Discipline (4 min): Implementation discipline means translating an algorithm into code without changing its meaning accidentally.

1.21 Data Representation (4 min): Data representation is the choice of concrete form used to store the objects in a problem.

1.22 Numerical Limits (4 min): Algorithms are usually described with mathematical integers and real numbers.

1.23 Stability and Determinism (4 min): Stability and determinism describe how predictably an algorithm behaves when there are ties, repeated values, or multiple valid answers.

1.24 Benchmarking (4 min): Benchmarking measures how an implementation behaves on real inputs and real hardware.

1.25 Common Failure Modes (4 min): Even when the algorithmic idea is correct, implementations fail in predictable ways.