Static Array
Store a fixed number of elements in contiguous memory with constant-time indexing.
101 notes
Store a fixed number of elements in contiguous memory with constant-time indexing.
Place array elements at memory addresses that satisfy hardware and type alignment requirements.
Validate array indices before access to prevent invalid reads and writes.
Combine elements from two arrays into one collection without duplicates.
Compute the common elements between two arrays.
Represent range updates compactly by storing changes between adjacent values.
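A minimal sketch of the difference-array idea: store adjacent deltas so a range update touches only two entries, then prefix-sum to recover the array. Function names here are illustrative, not from the notes.

```python
def build_diff(a):
    """Difference array: diff[0] = a[0], diff[i] = a[i] - a[i-1]."""
    return [a[0]] + [a[i] - a[i - 1] for i in range(1, len(a))]

def range_add(diff, left, right, delta):
    """Add delta to a[left..right] in O(1) by touching two entries."""
    diff[left] += delta
    if right + 1 < len(diff):
        diff[right + 1] -= delta

def reconstruct(diff):
    """Prefix-sum the deltas to recover the updated array."""
    out, running = [], 0
    for d in diff:
        running += d
        out.append(running)
    return out

d = build_diff([1, 2, 3, 4, 5])
range_add(d, 1, 3, 10)      # add 10 to indices 1..3
print(reconstruct(d))       # [1, 12, 13, 14, 5]
```

Each update is O(1) regardless of range length; the O(n) reconstruction is paid once at the end, which is why this layout suits many updates followed by a single read.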
Precompute cumulative sums to answer range sum queries in constant time.
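A sketch of the prefix-sum technique, using the common convention of a leading zero so every range sum is one subtraction (names are illustrative):

```python
def prefix_sums(a):
    """prefix[i] = sum of a[0..i-1]; prefix[0] = 0."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, left, right):
    """Sum of a[left..right] in O(1) after O(n) precomputation."""
    return prefix[right + 1] - prefix[left]

p = prefix_sums([3, 1, 4, 1, 5])
print(range_sum(p, 1, 3))   # 1 + 4 + 1 = 6
```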
Insert unused space between array elements or groups to control alignment and reduce interference.
Process multiple array elements per instruction using SIMD-friendly layout and loops.
Process multidimensional arrays in small rectangular regions to improve locality.
Process an array in fixed-size blocks to improve cache locality and batch behavior.
Access array elements at regular intervals and understand the effect on locality.
Map logical array positions to physical storage positions.
Use temporary array storage to stage data before writing it to its final location.
Copy array elements into a new storage block using shallow or deep copy semantics.
Understand how arrays are stored in memory and how layout affects performance.
Merge two sorted arrays into a single sorted array.
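A sketch of the two-pointer merge, the building block of merge sort (the `<=` comparison keeps the merge stable):

```python
def merge_sorted(a, b):
    """Two-pointer merge of sorted arrays: O(len(a) + len(b)), stable."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # ties taken from a first => stable
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])             # at most one side has leftovers
    out.extend(b[j:])
    return out

print(merge_sorted([1, 3, 5], [2, 3, 6]))   # [1, 2, 3, 3, 5, 6]
```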
Remove duplicate values from an array while keeping one representative of each value.
Remove elements in-place based on a predicate while preserving the remaining elements.
Traverse an array sequentially to compute an aggregate or apply a function.
Generate a uniform random permutation of an array in-place.
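A sketch of the Fisher-Yates shuffle, which produces each of the n! permutations with equal probability; the key invariant is that position `i` is filled from the still-unfixed prefix `[0, i]`:

```python
import random

def shuffle_in_place(a):
    """Fisher-Yates: uniform random permutation in O(n) time, O(1) space."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)      # uniform over [0, i], inclusive
        a[i], a[j] = a[j], a[i]

a = list(range(10))
shuffle_in_place(a)
print(a)
```

A common bug is drawing `j` from the whole range `[0, n-1]` every iteration, which biases the result; restricting to `[0, i]` is what makes the distribution uniform.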
Reverse the order of elements in an array in-place using symmetric swaps.
Partition an array while preserving the relative order of elements inside each group.
Rearrange elements so that those satisfying a predicate appear before others.
Analyze the average cost per operation over a sequence of dynamic array operations.
Preallocate array capacity to reduce reallocations and copying during growth.
Store small integers or booleans using compact bit-level representation.
Store only non-default values from a large logical array.
Maintain a contiguous window over an array while moving its left and right boundaries.
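A sketch of the sliding-window pattern on one representative problem, the longest window whose sum stays under a limit; it assumes non-negative values so that shrinking from the left always reduces the sum (problem choice and names are illustrative):

```python
def longest_window_at_most(a, limit):
    """Longest contiguous window with sum <= limit.
    Assumes non-negative values so the window sum is monotone."""
    left = best = 0
    window_sum = 0
    for right, x in enumerate(a):
        window_sum += x               # grow the window on the right
        while window_sum > limit:     # shrink from the left until valid
            window_sum -= a[left]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_window_at_most([2, 1, 3, 4, 1, 2], 6))   # 3 (window [2, 1, 3])
```

Both pointers only move forward, so the whole scan is O(n) even though the inner loop looks nested.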
Rotate elements of an array by a given offset in-place or using auxiliary space.
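A sketch of the three-reversal trick for in-place rotation, one of the standard O(1)-space approaches (names are illustrative):

```python
def rotate(a, k):
    """Rotate a right by k positions in-place via three reversals."""
    n = len(a)
    if n == 0:
        return
    k %= n

    def reverse(lo, hi):
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo, hi = lo + 1, hi - 1

    reverse(0, n - 1)       # reverse the whole array
    reverse(0, k - 1)       # reverse the first k elements
    reverse(k, n - 1)       # reverse the remainder

a = [1, 2, 3, 4, 5]
rotate(a, 2)
print(a)   # [4, 5, 1, 2, 3]
```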
Represent a contiguous subrange of an array as either a view or a copied array.
Store rows of different lengths as an array of arrays.
Bloom filters, cuckoo filters, count-min sketch, HyperLogLog, MinHash, and reservoir sampling, along with randomized structures such as skip lists and treaps.
Balanced trees augmented with subtree sizes, implicit treaps, wavelet-based order statistics, skip lists, median maintenance, and rank-select structures.
B-tree indexes, LSM trees, SSTables, write-ahead logs, compaction strategies, cache-oblivious structures, inverted indexes, and external sort/merge.
Trie variants including compressed, Patricia, ternary, XOR, suffix trees, Aho-Corasick, DAWG, HAMT, and concurrent prefix structures.
Mutex-protected structures, lock-free stacks, queues, hash tables, skip lists, CAS primitives, hazard pointers, epoch reclamation, and RCU.
Union-find with path compression and union by rank, rollback, parity, weighted, offline, and concurrent DSU designs.
KD-trees, R-trees, quadtrees, octrees, Delaunay triangulation, Voronoi diagrams, HNSW, range trees, and sweep line structures.
Array and linked queues, deques, monotonic queues, priority buckets, delay queues, and concurrent queue designs.
Bit vectors, rank-select, succinct trees, FM-index, wavelet trees, Huffman and arithmetic coding, roaring bitmaps, and compressed range queries.
Persistence, functional structures, succinct and compressed encodings, geometric indexing, concurrent and lock-free designs, external-memory structures, and probabilistic data structures.
Array and linked stack implementations covering push, pop, monotonic stacks, expression evaluation, and thread-safe variants.
Immutable lists, stacks, queues, heaps, HAMTs, functional BSTs, lazy evaluation, structural sharing, and deforestation.
Augmented and self-balancing trees, range query structures, Fenwick and segment tree variants, union-find, tries, and order statistic structures.
Singly, doubly, and circular linked lists with operations for insertion, reversal, cycle detection, sorting, and memory pooling.
Three-book reference covering core, advanced, and specialized data structures from arrays to concurrent and persistent designs.
Path copying, fat nodes, persistent arrays, segment trees, treaps, heaps, union-find, functional queues, and distributed persistence models.
Fixed and dynamic arrays, resizing strategies, memory layout, cache effects, vectorization, and in-place operations.
Memory layout, invariants, and implementation details for arrays, linked lists, stacks, queues, heaps, hash tables, binary search trees, and traversal operations.
General tree traversal patterns including DFS, BFS, level-order, zigzag, boundary traversal, LCA, serialization, and iterative approaches.
BST operations, traversals, augmentation, interval trees, order statistics, persistence, and thread-safe BST designs.
Hash functions, collision resolution strategies, open addressing, chaining, dynamic resizing, perfect hashing, concurrent maps, and probabilistic filters.
Binary, d-ary, Fibonacci, pairing, soft heaps, double-ended structures, tournament trees, and concurrent priority queue designs.
Segment tree construction, lazy propagation, segment tree beats, 2D variants, persistent and dynamic trees, and hybrid structures combining multiple techniques.
Fenwick tree construction, point and range updates, multidimensional extensions, order statistics, inversion counting, and persistent variants.
Sparse tables, sqrt decomposition, Mo's algorithm, wavelet trees, merge sort trees, 2D range trees, and offline range query techniques.
AVL, red-black, splay, scapegoat, B-tree families, treaps, finger trees, persistent balanced trees, and cache-oblivious layouts.
Store values indexed by two or more dimensions using a linear memory layout.
Array that wraps indices modulo capacity to support efficient cyclic access.
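A minimal ring-buffer sketch: a fixed backing array, a head index, and a size, with the tail computed modulo capacity (class and method names are illustrative):

```python
class RingBuffer:
    """Fixed-capacity FIFO backed by an array with wraparound indexing."""
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.head = 0      # index of the oldest element
        self.size = 0

    def push(self, x):
        if self.size == len(self.data):
            raise OverflowError("buffer full")
        tail = (self.head + self.size) % len(self.data)
        self.data[tail] = x
        self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        x = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)
        self.size -= 1
        return x

rb = RingBuffer(3)
for v in (1, 2, 3):
    rb.push(v)
print(rb.pop(), rb.pop())   # 1 2
rb.push(4)                  # wraps around into the freed slot
print(rb.pop(), rb.pop())   # 3 4
```

Tracking `size` rather than a separate tail index sidesteps the classic ambiguity between a full and an empty buffer when head and tail coincide.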
Resizable array that grows and shrinks automatically while preserving contiguous storage.
Tree-based and indexed search: BST, self-balancing trees, tries, suffix structures, segment and range trees, and spatial trees.
Maintain a dynamically ordered sequence under insertions and comparisons without fully re-sorting.
Speed up repeated binary searches across related sorted lists by linking sampled elements between lists.
Combine hash tables with other data structures to handle skewed distributions, heavy deletions, and mixed workloads.
Remove duplicate entries from a dataset or stream efficiently using hash sets for membership tracking.
Approximate frequency counting for large key streams using a compact probabilistic data structure with bounded error.
Test set membership approximately using a compact bit array and multiple hash functions, with no false negatives and bounded false positives.
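A sketch of a Bloom filter that derives its k hash positions by seeding SHA-256, with a Python integer standing in for the bit array; parameter choices and names are illustrative, not tuned:

```python
import hashlib

class BloomFilter:
    """Bit array + k hashes; false positives possible, false negatives not."""
    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0          # arbitrary-size int used as a bit array

    def _positions(self, item):
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter(num_bits=1024, num_hashes=3)
bf.add("apple")
print(bf.might_contain("apple"))    # True: no false negatives, ever
print(bf.might_contain("pear"))     # almost certainly False here
```

Every added item sets all of its k bits, so a lookup on an added item always succeeds; a lookup on an absent item fails unless other items happen to cover all k of its bits, which is the bounded false-positive case.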
Partition a collection into groups by key using a hash map that accumulates values into per-key lists or sets.
Build a hash-based map that associates keys with values and supports insert, lookup, delete, and update in expected constant time.
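A sketch of such a map using separate chaining, where each bucket is a small list of key-value pairs; the class name and fixed bucket count are illustrative (a production version would also resize as the load factor grows):

```python
class HashMap:
    """Hash map with separate chaining; expected O(1) operations
    with a good hash function and a low load factor."""
    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update existing key
                return
        bucket.append((key, value))        # insert new key

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def delete(self, key):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket.pop(i)
                return True
        return False

m = HashMap()
m.put("x", 1)
m.put("x", 2)          # second put updates, not duplicates
print(m.get("x"))      # 2
m.delete("x")
print(m.get("x"))      # None
```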
Implement a hash-based set for fast membership testing, insertion, and deletion without associated values.
Resolve hash collisions using separate chaining or open addressing, with trade-offs in memory, locality, and load tolerance.
A data structure that supports fast insertion, lookup, and deletion by mapping keys to bucket positions via a hash function.
This section combines multiple patterns from the chapter into complete, end-to-end linked list algorithms.
Linked list algorithms are dominated by pointer traversal and constant-time link updates.
Pointer code should be tested by checking structure, not only values.
Edge cases are inputs that sit near the boundary of an algorithm's assumptions.
An LRU cache stores a fixed number of key-value entries and removes the least recently used entry when capacity is exceeded.
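A sketch of an LRU cache where Python's `collections.OrderedDict` stands in for the usual hash-map-plus-doubly-linked-list combination, since it offers the same O(1) move-to-end and evict-oldest operations:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity key-value cache evicting the least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order == recency order

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)          # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # "a" becomes most recently used
cache.put("c", 3)       # capacity exceeded: "b" is evicted
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```

A from-scratch version replaces `OrderedDict` with an explicit hash map pointing into a doubly linked list of entries, which is exactly what the linked-list chapter's patterns provide.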
A queue is a first-in, first-out (FIFO) structure.
A stack is a last-in, first-out (LIFO) structure.
An iterator is an object or procedure that visits the nodes of a linked list one at a time.
Memory ownership describes which part of a program is responsible for creating, linking, unlinking, and destroying a node.
An intrusive list stores the linkage fields inside the objects being linked.
A skip list augments a sorted linked list with multiple levels of forward pointers.
A persistent list is a list that preserves older versions after an update.
Pointer aliasing occurs when two or more references point to the same node.
A dummy head is a fixed node placed before the real head of a singly linked list.
Insertion adds nodes into a linked list by creating new links while preserving reachability of all existing nodes.
Deletion removes one or more nodes from a linked list by changing links around them.
Merging is a family of constructions that combine multiple linked lists into one or more output lists while preserving structural invariants.
Splitting a linked list means cutting one list into two or more lists while preserving the original nodes.
Merging combines two sorted singly linked lists into one sorted list by relinking nodes.
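A sketch of the relinking merge behind a dummy head, which removes all special-casing of the first node; no new list nodes are allocated (the `Node` class and helper names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_sorted_lists(a, b):
    """Splice existing nodes behind a dummy head; O(n + m), O(1) extra."""
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b          # attach whichever list remains
    return dummy.next

def from_iterable(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

merged = merge_sorted_lists(from_iterable([1, 3, 5]), from_iterable([2, 4]))
print(to_list(merged))   # [1, 2, 3, 4, 5]
```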
Fast and slow pointers are two references that traverse the same linked structure at different speeds.
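A sketch of the classic fast/slow application, Floyd's cycle detection: the fast pointer gains one node per step inside a cycle, so it must eventually land on the slow pointer (names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Floyd's tortoise and hare: O(n) time, O(1) extra space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next            # one step
        fast = fast.next.next       # two steps
        if slow is fast:            # identity check, not value equality
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))   # False
c.next = b            # create a cycle: 1 -> 2 -> 3 -> 2 -> ...
print(has_cycle(a))   # True
```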
A cycle exists in a linked list when some node’s `next` pointer eventually leads back to a previously visited node.
Reversal transforms a linked list so that the direction of all edges is flipped.
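A sketch of iterative reversal, which walks the list once and flips each `next` pointer as it goes (the `Node` class here is illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Flip every next pointer in one pass; O(n) time, O(1) space."""
    prev = None
    while head:
        # Simultaneous assignment: point head back at prev,
        # then advance both prev and head.
        head.next, prev, head = prev, head, head.next
    return prev

head = Node(1, Node(2, Node(3)))
head = reverse(head)
values = []
while head:
    values.append(head.value)
    head = head.next
print(values)   # [3, 2, 1]
```

The simultaneous assignment avoids the usual explicit `nxt = head.next` temporary, since Python evaluates the whole right-hand side before binding.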
A sentinel node is an artificial node placed at the boundary of a linked list.
A doubly linked list is a sequence of nodes where each node stores a value, a reference to the next node, and a reference to the previous node.
A singly linked list is a sequence of nodes where each node stores a value and a reference to the next node.
Linked lists are the first data structure in this book where the shape of the data matters as much as the values stored inside it.
Sorting algorithms from fundamental contracts through comparison-based, linear-time, and specialized variants, with selection, external sorting, and correctness analysis.
Hash tables, hash functions, collision handling, probabilistic structures, and practical design for constant-time key-value access.