LeetCode
LeetCode practice notes exported from ChatGPT lessons.
538 notes
LeetCode practice notes for problems 900 through 999, including a clear explanation of designing an iterator over a run-length encoded sequence without expanding it.
LeetCode practice notes for problems 800 through 899, including a clear explanation of finding the closest shorthand RGB color by rounding each color channel to the nearest repeated hexadecimal pair.
LeetCode practice notes for problems 1000 through 1000, including a clear explanation of merging consecutive stone piles with minimum cost using interval dynamic programming.
LeetCode practice notes for problems 700 through 799, including searching for a target value in a binary search tree and returning the subtree rooted at the matching node.
LeetCode practice notes for problems 1 through 99, including a clear explanation of the Two Sum problem using brute force first, then an optimized hash map solution.
LeetCode practice notes for problems 200 through 299, including a clear explanation of counting connected groups of land cells in a grid using DFS or BFS.
LeetCode practice notes for problems 100 through 199, including a detailed guide to solving Same Tree with recursive DFS and structural comparison.
LeetCode practice notes for problems 600 through 699, including a clear digit dynamic programming solution for counting numbers whose binary representation does not contain consecutive ones.
LeetCode practice notes for problems 500 through 599, including a clear explanation of filtering words that can be typed using only one row of an American keyboard.
LeetCode practice notes for problems 400 through 499, including a clear explanation of finding the nth digit in the infinite integer sequence using digit groups and arithmetic.
LeetCode practice notes for problems 300 through 399, including a dynamic programming and patience sorting solution for finding the longest strictly increasing subsequence in an array.
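To make the last entry concrete, here is a minimal patience-sorting sketch for the length of the longest strictly increasing subsequence; the function name `lis_length` and the sample array are illustrative, not taken from the notes.

```python
from bisect import bisect_left

def lis_length(nums):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[i] holds the smallest possible tail of an increasing
    subsequence of length i + 1 (the "pile tops" in patience sorting).
    """
    tails = []
    for x in nums:
        i = bisect_left(tails, x)   # first pile whose top is >= x
        if i == len(tails):
            tails.append(x)         # x extends the longest subsequence seen so far
        else:
            tails[i] = x            # x becomes a smaller tail for length i + 1
    return len(tails)

# Example: the LIS of [10, 9, 2, 5, 3, 7, 101, 18] is [2, 3, 7, 18], length 4.
assert lis_length([10, 9, 2, 5, 3, 7, 101, 18]) == 4
```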
Programming practice notes and algorithm problem explanations.
A practical cookbook of algorithmic patterns, correctness arguments, and implementation techniques.
Find intersection of two sorted sequences using exponential jumps followed by binary search.
Use interpolation to estimate the target position, with binary search fallback when estimates become unreliable.
Search several array elements at once using vector instructions.
Arrange and search sorted data with explicit knowledge of cache lines or memory blocks.
Search a binary tree stored in recursive cache oblivious layout to reduce memory transfers.
Binary search over an array arranged in heap order to improve cache locality and branch predictability.
Perform binary search with conditional moves or arithmetic updates instead of unpredictable branches.
Use a hierarchy of learned models to predict a key position, then search inside a bounded error range.
Construct a suffix array by sorting rank pairs with radix or counting sort during the doubling method.
String sorting method that groups strings by prefixes in a trie and sorts leaf buckets for cache efficient lexicographic order.
Cache efficient string sorting algorithm that builds a trie and sorts buckets lazily for high performance on large text data.
Sort fixed width keys by processing digits from least significant to most significant using a stable inner sort.
Sort integer or string keys by processing one digit, byte, or character position at a time.
Find the k-th smallest sum among all contiguous subarrays, usually for nonnegative arrays.
Find the k-th smallest absolute difference among all pairs in an array.
Select the k-th smallest element from a matrix whose rows and columns are sorted.
Answer offline connectivity queries by combining parallel binary search with a disjoint set union.
Convert an optimization problem into a decision problem and search the parameter space.
Bloom filters, cuckoo filters, count-min sketch, HyperLogLog, MinHash, reservoir sampling, skip lists, and treaps as probabilistic structures.
Balanced trees augmented with subtree sizes, implicit treaps, wavelet-based order statistics, skip lists, median maintenance, and rank-select structures.
B-tree indexes, LSM trees, SSTables, write-ahead logs, compaction strategies, cache-oblivious structures, inverted indexes, and external sort/merge.
Trie variants including compressed, Patricia, ternary, XOR, suffix trees, Aho-Corasick, DAWG, HAMT, and concurrent prefix structures.
Mutex-protected structures, lock-free stacks, queues, hash tables, skip lists, CAS primitives, hazard pointers, epoch reclamation, and RCU.
Union-find with path compression and union by rank, rollback, parity, weighted, offline, and concurrent DSU designs.
KD-trees, R-trees, quadtrees, octrees, Delaunay triangulation, Voronoi diagrams, HNSW, range trees, and sweep line structures.
Array and linked queues, deques, monotonic queues, priority buckets, delay queues, and concurrent queue designs.
Bit vectors, rank-select, succinct trees, FM-index, wavelet trees, Huffman and arithmetic coding, roaring bitmaps, and compressed range queries.
Persistence, functional structures, succinct and compressed encodings, geometric indexing, concurrent and lock-free designs, external-memory structures, and probabilistic data structures.
Array and linked stack implementations covering push, pop, monotonic stacks, expression evaluation, and thread-safe variants.
Immutable lists, stacks, queues, heaps, HAMTs, functional BSTs, lazy evaluation, structural sharing, and deforestation.
Augmented and self-balancing trees, range query structures, Fenwick and segment tree variants, union-find, tries, and order statistic structures.
Singly, doubly, and circular linked lists with operations for insertion, reversal, cycle detection, sorting, and memory pooling.
Three-book reference covering core, advanced, and specialized data structures from arrays to concurrent and persistent designs.
Path copying, fat nodes, persistent arrays, segment trees, treaps, heaps, union-find, functional queues, and distributed persistence models.
Fixed and dynamic arrays, resizing strategies, memory layout, cache effects, vectorization, and in-place operations.
Memory layout, invariants, and implementation details for arrays, linked lists, stacks, queues, heaps, hash tables, binary search trees, and traversal operations.
General tree traversal patterns including DFS, BFS, level-order, zigzag, boundary traversal, LCA, serialization, and iterative approaches.
BST operations, traversals, augmentation, interval trees, order statistics, persistence, and thread-safe BST designs.
Hash functions, collision resolution strategies, open addressing, chaining, dynamic resizing, perfect hashing, concurrent maps, and probabilistic filters.
Binary, d-ary, Fibonacci, pairing, soft heaps, double-ended structures, tournament trees, and concurrent priority queue designs.
Segment tree construction, lazy propagation, beats, 2D variants, persistent and dynamic trees, and hybrid structures combining multiple techniques.
Fenwick tree construction, point and range updates, multidimensional extensions, order statistics, inversion counting, and persistent variants.
Sparse tables, sqrt decomposition, Mo's algorithm, wavelet trees, merge sort trees, 2D range trees, and offline range query techniques.
AVL, red-black, splay, scapegoat, B-tree families, treaps, finger trees, persistent balanced trees, and cache-oblivious layouts.
Cache-efficient and hardware-tuned search: branchless binary search, Eytzinger and vEB layouts, learned indexes, SIMD search, and galloping intersection.
High-throughput sorting: parallel merge and quicksort, GPU bitonic and radix sort, SIMD sorting networks, and distributed partition-based sorts.
Sorting beyond main memory: external merge sort, polyphase merge, cache-oblivious techniques, LSM compaction, and database sort phases.
Sorting under constraints: partial sort, online insertion, adaptive merge, Timsort internals, and lazy or order-maintenance sorting.
Non-comparison sorts: counting, radix, and bucket sort families, string sorting, and suffix array construction.
Divide-and-conquer sorts: merge sort variants, quicksort family, heapsort, and comparison-based sorting networks.
Elementary comparison-based sorts: bubble, selection, insertion, Shell, comb, cycle, and curiosity sorts.
Selection and order-statistics: quickselect, median-of-medians, streaming top-k, heavy hitters, reservoir sampling, and quantile summaries.
Tree-based and indexed search: BST, self-balancing trees, tries, suffix structures, segment and range trees, and spatial trees.
Hash table lookup strategies, open-addressing and chaining variants, probabilistic filters (Bloom, Cuckoo, XOR), and locality-sensitive similarity search.
Binary search and its ordered-search relatives: interpolation, Fibonacci, rotated-array, matrix, parametric, and offline variants.
Linear and sequential scan techniques including sentinel, bounded, recursive variants, duplicate detection, and majority vote.
Searching and sorting techniques spanning sequential and parallel algorithms, from linear scan to GPU-based distribution sort.
Expand the search range exponentially until the target is bracketed, then refine within the range.
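A minimal sketch of this bracketing idea over a sorted Python list; the helper name `exponential_search` and the sample data are illustrative.

```python
from bisect import bisect_left

def exponential_search(a, target):
    """Double the bound until the target is bracketed, then binary
    search inside that bracket. Returns an index or -1."""
    if not a:
        return -1
    bound = 1
    while bound < len(a) and a[bound] < target:
        bound *= 2                      # expand the range exponentially
    lo, hi = bound // 2, min(bound + 1, len(a))
    i = bisect_left(a, target, lo, hi)  # refine within the bracketed range
    return i if i < len(a) and a[i] == target else -1

assert exponential_search([1, 3, 5, 7, 9, 11], 9) == 4
assert exponential_search([1, 3, 5], 4) == -1
```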
Find diagonal partition points that split a sorted merge into independent balanced ranges.
Use vector instructions to evaluate multiple candidate positions in a binary search step.
Compute lower bound using arithmetic or conditional moves instead of branches.
Use a predictive model to estimate where a key should appear, then search within a bounded local range.
Estimate the likely position using interpolation, then refine by local sequential scan.
Search starting from a known nearby position, achieving time proportional to distance rather than full size.
Sort a two dimensional mesh by alternating row sorts and column sorts until the grid is globally ordered.
Sort distributed keys across hypercube processors using dimension based compare and exchange stages.
Sort data in parallel by local sorting, regular sampling, global splitter selection, redistribution, and final local merge.
Sort distributed data by assigning fixed key ranges to workers, routing records by range, then sorting locally.
Sort data across machines by sampling keys, choosing splitters, redistributing records into ordered buckets, and sorting buckets locally.
Sort very large datasets with range partitioning, distributed shuffle, and reducer local sorting.
Sort large distributed datasets by partitioning records by key range, sorting partitions locally, and writing ordered output shards.
Merge a bitonic sequence into a sorted sequence using a fixed comparator network.
Sort a fixed size sequence using a data independent network of odd even compare exchange stages.
Sort a fixed size sequence with a bitonic compare and exchange network whose schedule is independent of input values.
Sort a fixed size vector by mapping a data independent sorting network to SIMD compare, shuffle, and blend operations.
Sort small vectors using SIMD registers with a fixed bitonic compare exchange network.
Sort a block sized tile on the GPU using shared memory and cooperative threads.
Sort a small set of values inside one CUDA warp using shuffle operations and compare exchange steps.
Sort data on a GPU by selecting splitters, partitioning into buckets in parallel, then sorting buckets independently.
Sort data on a GPU by recursively merging sorted runs using parallel merge primitives.
Sort fixed width keys on a GPU using digit passes, parallel histograms, prefix sums, and scatter operations.
Sort data on a GPU using a regular bitonic compare and exchange network.
Count key frequencies in parallel, compute prefix sums, then scatter elements into sorted positions.
Sort an array by repeatedly comparing odd indexed and even indexed adjacent pairs in parallel.
Sort data by building and merging bitonic sequences through a regular compare and exchange network.
Sort integer keys by processing fixed width digit groups in parallel using counting and prefix sums.
Choose splitters from samples, partition the input into buckets, sort buckets independently, then concatenate the sorted buckets.
Partition the array around a pivot, then sort partitions concurrently using parallel recursion.
Divide the input, sort subarrays in parallel, then merge them using parallel merge procedures.
K-way merge using a tournament tree that stores winners at internal nodes and repeatedly updates the winning path.
Efficient k-way merge using a tournament tree that stores losers to reduce comparisons and improve external merge performance.
External-memory bucket sort that partitions records into ordered buckets stored on disk and sorts each bucket separately.
External-memory sorting algorithm that uses sampling to partition data into balanced buckets, then sorts each bucket independently.
External-memory radix sort that processes large datasets by distributing records into digit-based buckets across multiple passes.
External sorting algorithm that performs run generation and merging concurrently across multiple processors and disks.
Sorting method that keeps data in memory until a memory limit is reached, then spills sorted runs to external storage and merges them.
Sorting method that divides input into bounded-size chunks, sorts each chunk independently, and then combines the chunks.
External sorting method that uses memory-mapped files so large data can be accessed through virtual memory while still sorting in chunks and merging runs.
External sorting algorithm that distributes sorted runs across sequential storage tapes and repeatedly merges them.
External sorting technique that uses a heap to produce initial sorted runs longer than the available memory.
The sorting stage used before a sort-merge join when one or both inputs are not already ordered by the join key.
External sorting procedure used by database systems for ORDER BY, GROUP BY, DISTINCT, and sort-merge joins.
Sorting through hierarchical merging in Log Structured Merge Trees using compaction across levels.
Sorting method that builds a B-tree directly from sorted data using bulk loading to minimize I/O operations.
External-memory sorting method that inserts records into a buffered tree and extracts them in sorted order.
Cache oblivious sorting algorithm that achieves optimal memory transfer complexity using recursive multiway merging structures called funnels.
Sorting techniques that achieve good cache behavior without explicitly knowing cache size or block size.
Sorting techniques that explicitly use knowledge of cache size and block size to minimize memory hierarchy cost.
External-memory sorting method that partitions data into key ranges, sorts each range, and concatenates the sorted partitions.
External sorting method that merges sorted runs through staged levels so output from one level feeds the next.
External sorting algorithm that repeatedly merges groups of k sorted runs until one final sorted run remains.
External sorting algorithm that distributes sorted runs unevenly across files so repeated merges need fewer empty files and less redistribution.
External sorting algorithm that performs run generation and a single multiway merge using limited memory.
Sort data that does not fit in memory by dividing it into sorted runs and merging them using external storage.
Maintain a dynamically ordered sequence under insertions and comparisons without fully re-sorting.
Select the next smallest element on demand, avoiding full sorting until all elements are requested.
Delay sorting work until ordered output or ordered queries are actually needed.
Use a tournament tree to extract the smallest or largest k elements in sorted order without fully sorting the input.
Sort by building a Cartesian tree that captures local order, then extracting elements in sorted order.
Sort in place with a heap-like structure that keeps worst-case O(n log n) time while approaching linear time on nearly sorted input.
A heap-based sorting method that adapts to partially ordered input to reduce unnecessary heap operations.
Generate long initial runs for external sorting using replacement selection with disk-based streams.
Maintain the median of a sliding window using two heaps while the window moves over a stream.
Maintain sorted order over a moving window of the most recent elements in a stream.
Sort a stream in bounded chunks by buffering records, sorting each chunk, and emitting sorted runs or partially ordered output.
Maintain a sorted sequence by inserting elements into a binary search tree as they arrive and extracting them in order.
Optimize insertion sort by placing the minimum element at the front so the inner loop does not need a bounds check.
Maintain a valid topological ordering of a directed acyclic graph as vertices or edges are added over time.
Sort a sequence by building piles using patience sorting and then merging them.
An adaptive merge-based sorting algorithm that merges natural runs using stack rules designed to minimize merge cost.
Partition data by sampled splitters, while adapting splitter choice to skewed or partially ordered input.
Accelerate merging by switching from one-by-one comparison to exponential search when one run wins repeatedly.
Detect natural runs in input and normalize them for efficient merging in Timsort.
Sort a k-sorted array using a sliding min heap window of size k+1.
Sort an array where every element is at most k positions away from its final sorted position.
Sort an almost ordered sequence efficiently by using insertion sort, whose running time improves when few elements are far from their final positions.
Generate long sorted runs from a stream using a heap, commonly used in external sorting.
Use patience sorting to compute the length and structure of the longest increasing subsequence efficiently.
Sort a sequence by detecting already sorted runs and repeatedly merging them.
A merge sort variant that exploits existing order in the input by detecting natural runs and reducing work.
Sort a sequence as elements arrive by inserting each new item into its correct position among the items already seen.
Maintain a sorted sequence while elements are inserted one by one, updating order incrementally.
Extract and sort the top k elements (smallest or largest) from a sequence without fully sorting it.
Rearrange a sequence so that the smallest k elements appear in sorted order, without fully sorting the entire sequence.
Radix sort for signed integers by transforming values to preserve ordering across negative and positive ranges.
Radix sort for floating-point numbers by transforming their bit representation to preserve numeric order.
Difference cover modulo 7 suffix array construction method that generalizes DC3 with a larger recursive sample.
Difference cover modulo 3 algorithm for linear-time suffix array construction using radix sorting and merging.
Linear-time suffix array construction using the DC3 algorithm that recursively sorts positions modulo 3.
Linear-time suffix array construction algorithm based on induced sorting of LMS suffixes.
Linear-time technique for suffix array construction that induces order of suffixes from a subset of sorted suffixes.
LSD radix sort variant that processes keys byte by byte from least significant to most significant.
Radix sort that processes fixed-width keys one byte at a time using 256 counting buckets.
Radix sort specialized for fixed-width machine words using byte or bit extraction for high throughput.
Counting sort variant that stores counts only for keys that appear, avoiding dense arrays over large key ranges.
Sort values by mapping them to a dense rank space and ordering by compressed indices.
Extend counting sort to handle negative integers by shifting keys into a nonnegative index range.
Construct a suffix array by iteratively sorting suffixes using doubling technique with rank pairs.
String sorting algorithm that uses three-way partitioning on characters to efficiently handle repeated prefixes.
String sorting algorithm that partitions strings by the character at the current depth using three-way quicksort.
In-place MSD radix string sort that partitions strings by character using American flag style bucket permutation.
MSD radix sort variant that partitions keys by the most significant byte and recursively sorts subarrays.
Sample sort specialized for integer keys using range-based partitioning and optional radix-style bucket classification.
In-place parallel super scalar samplesort for high performance comparison sorting on modern multicore machines.
High performance radix sort variant using cache friendly block partitioning and branchless techniques.
In place binary radix sort that partitions keys by bits from most significant to least significant.
Hybrid distribution and comparison sort that adapts between radix-like partitioning and comparison sorting.
Distribution sorting algorithm that approximates uniform distribution and then refines with insertion sort.
Sort integer keys by placing each value into a direct address slot over a small contiguous range.
Sort elements by building a frequency histogram and reconstructing the output from cumulative counts.
Bucket sort variant that assumes uniform distribution and uses equal-width buckets with simple mapping.
Distribute elements into buckets based on value ranges and sort each bucket independently.
Sort integers by processing bits directly using partitioning instead of counting arrays.
Sort integers by counting occurrences of each key and reconstructing the output in linear time.
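A small sketch of that counting idea, with an assumed `counting_sort(keys, max_key)` signature for non-negative integer keys.

```python
def counting_sort(keys, max_key):
    """Sort non-negative integer keys in O(n + k) time, where k = max_key.
    Count each key, then rebuild the output from the counts."""
    counts = [0] * (max_key + 1)
    for k in keys:
        counts[k] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)   # emit each key as many times as it appeared
    return out

assert counting_sort([4, 1, 3, 1, 0, 4], 4) == [0, 1, 1, 3, 4, 4]
```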
Sort digit based keys by redistributing elements inside the original array with only small auxiliary bucket state.
In place radix sort that partitions elements by digit using cycle leader permutation.
Sort keys by processing the most significant digit first and recursively sorting subarrays.
Counting-based stable sorting technique that maps keys to index ranges using frequency and prefix sums.
Stable variant of counting sort using prefix sums to preserve relative order of equal keys.
Merge-based sorting algorithm that uses a tournament tree to repeatedly select the next smallest element.
Merge sort variant that merges more than two sorted sequences at each step.
Divide and conquer sorting algorithm that uses sampling to partition data into balanced buckets.
Merge sort variant that improves locality without tuning parameters for a specific cache size.
Merge sort variant designed to improve cache locality by structuring recursion and merging to fit memory hierarchies.
Sorting network algorithm based on recursively sorting halves and merging them with Batcher's odd even merge network.
Sorting network that sorts by repeatedly applying pairwise compare and swap stages.
Sorting network algorithm that recursively merges sorted sequences using odd even comparisons.
Comparison sorting algorithm that sorts a bitonic sequence using structured compare and swap operations.
Comparison optimal sorting algorithm that minimizes comparisons using a structured insertion schedule.
Comparison-optimal sorting algorithm that combines merging and insertion guided by a structured schedule.
Heapsort variant based on weak heaps, reducing the number of comparisons while retaining in-place sorting.
Heapsort variant that uses a ternary heap with three children per node.
Heapsort variant that uses bottom up sift down to reduce comparisons during heapify.
Comparison sorting algorithm that builds a heap and repeatedly extracts the maximum.
Quicksort variant using the Lomuto single-index partition scheme.
Quicksort variant that uses Hoare's two pointer partition scheme.
Quicksort variant that selects the pivot using Tukey's ninther, the median of three medians of three.
Quicksort variant that chooses the pivot as the median of the first, middle, and last elements.
Quicksort variant that preserves the relative order of equal elements.
Quicksort variant that processes elements in blocks to reduce branch misprediction and improve cache efficiency.
Quicksort variant that detects and avoids bad input patterns using adaptive partitioning and pivot selection.
Hybrid sorting algorithm that starts with quicksort and falls back to heapsort when recursion becomes too deep.
Quicksort variant that partitions the array using two pivots into three regions.
Quicksort variant that partitions into less than, equal to, and greater than pivot.
Quicksort variant that selects pivots randomly to avoid worst case patterns.
Divide and conquer sorting algorithm that partitions the array around a pivot and recursively sorts both sides.
Stable adaptive merge sort that uses power based ordering of runs for near optimal merging.
Adaptive stable sorting algorithm that combines merge sort with run detection and insertion sort.
Stable merge sort variant that uses block rearrangement and a small buffer to reduce auxiliary memory.
Merge sort variant that reduces auxiliary memory by merging sorted ranges inside the original array.
Adaptive merge sort that detects existing sorted runs and merges them.
Iterative merge sort that builds sorted runs from size 1 upward without recursion.
Recursive merge sort that splits the array from the top and merges sorted halves.
Divide and conquer sorting algorithm that splits the array, sorts recursively, and merges in linear time.
A simple quadratic sorting algorithm that compares all pairs and swaps out-of-order elements.
Repeatedly place all occurrences of the current minimum value before moving to the next distinct value.
Sort by building a Cartesian tree and extracting elements via inorder traversal.
A variant of heapsort using weak heaps with fewer comparisons.
An adaptive in-place comparison sort based on Leonardo heaps.
Sort by repeatedly selecting the minimum element using a tournament tree.
Insert all elements into a binary search tree, then traverse the tree in sorted order.
Sort by building piles using patience card game rules and then merging them.
An insertion-based sorting algorithm that leaves gaps between elements to reduce shifting.
Extract increasing subsequences and merge them to form a sorted sequence.
Sort non-negative integers by simulating gravity on beads arranged in columns.
A deliberately inefficient recursive sorting algorithm based on divide and conquer followed by maximum placement.
A recursive sorting algorithm with extremely poor performance based on overlapping subproblems.
Sort by repeatedly flipping prefixes to move the maximum element to its correct position.
Minimizes writes by placing each element directly into its correct position using cycles.
A bubble-sort variant that compares elements at a shrinking gap to remove long-distance inversions early.
Shell sort using Sedgewick gap sequence for improved practical performance.
Shell sort using Hibbard gap sequence 1, 3, 7, 15, ..., improving over simple halving.
Shell sort using the original gap sequence n/2, n/4, ..., 1.
Generalization of insertion sort that compares elements at a gap, reducing long-distance inversions early.
Insertion sort that uses binary search to find the insertion position, reducing comparisons.
Build a sorted sequence by inserting each element into its correct position.
Select both minimum and maximum elements in each pass and place them at the beginning and end.
A stable variant of selection sort that preserves the relative order of equal elements by shifting instead of swapping.
Repeatedly select the minimum element from the unsorted portion and place it at the beginning.
A simple sorting algorithm that moves elements backward when out of order, similar to insertion sort with swaps.
A variation of bubble sort that alternates between comparing odd-even and even-odd index pairs.
Repeatedly swap adjacent elements that are out of order until the sequence becomes sorted.
A bidirectional variant of bubble sort that alternates passes from left to right and right to left.
Bubble sort with early termination when no swaps occur, reducing unnecessary passes on nearly sorted data.
Maintain deterministic approximate quantiles of a stream with rank error guarantees.
Maintain approximate quantiles of a stream using compact summaries.
Use sampling to approximate order statistics and refine selection efficiently.
Select k distinct integers uniformly from a fixed range without shuffling the whole range.
Select a random sample from a stream where each item has a nonnegative weight.
Select a uniform random sample from a stream of unknown length using fixed memory.
Maintain the k-th largest value while values arrive one at a time.
Select the k-th smallest fraction formed by pairs from a sorted array using heap or binary search.
Select the k-th smallest element from two sorted arrays without merging them.
Select multiple order statistics in one pass more efficiently than repeated selection.
Select the upper median, the element at index floor(n/2), using selection or partitioning.
Select the lower median, the element at index floor((n-1)/2), using selection or partitioning.
Select an element whose total weight on each side is at most half of the total weight.
Compute the sorted-order rank of a key in a search tree augmented with subtree sizes.
Select the element with a given rank from a balanced search tree augmented with subtree sizes.
Maintain the median of a stream using a max heap and a min heap.
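One way this two-heap scheme can look in code; the `RunningMedian` class name and the sample stream are illustrative.

```python
import heapq

class RunningMedian:
    """Median of a stream: a max-heap for the lower half (stored negated)
    and a min-heap for the upper half, kept balanced within one element."""
    def __init__(self):
        self.lo = []  # max-heap via negation
        self.hi = []  # min-heap

    def add(self, x):
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))   # keep every lo value <= every hi value
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

m = RunningMedian()
for x in (5, 2, 8, 1):
    m.add(x)
assert m.median() == 3.5  # stream so far, sorted: 1, 2, 5, 8
```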
Select the k-th smallest element in an unsorted array using partitioning.
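A compact quickselect sketch; unlike the in-place partitioning described above, this version copies the three partitions for readability, trading memory for clarity.

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element of a (0-based k) in expected O(n)
    time using a randomized three-way partition."""
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k < len(less):
        return quickselect(less, k)            # answer lies in the smaller part
    if k < len(less) + len(equal):
        return pivot                           # answer equals the pivot
    return quickselect(greater, k - len(less) - len(equal))

assert quickselect([7, 2, 9, 4, 4, 1], 0) == 1
assert quickselect([7, 2, 9, 4, 4, 1], 3) == 4
```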
Estimate item frequencies in a stream using a compact hash based summary.
Approximate frequent items in a stream with tighter error bounds than Misra Gries.
Find frequent items in a stream by maintaining a bounded set of counters.
Find frequent items in a stream using bounded memory.
Maintain the k largest elements while values arrive one at a time.
Select the k smallest elements using bounded structures or partitioning.
Find the k largest elements using partition-based selection in linear time.
Find the k largest or k smallest elements by maintaining a bounded heap.
Rearrange an array so that the element at position k is the same as in sorted order, with partition guarantees.
Select the k-th element by partially sorting only the needed prefix of the array.
Select the k-th smallest or largest element by maintaining a heap of candidate values.
Hybrid selection algorithm that combines Quickselect with worst-case fallback.
Choose a pivot with guaranteed quality by taking the median of group medians.
Selection algorithm with guaranteed linear time using structured pivot choice.
Quickselect with random pivot selection to achieve robust expected linear time.
Search for points or regions in 3D space using recursive octant decomposition.
Linear time suffix array construction using induced sorting of LMS substrings.
Search for points or regions in 2D space using recursive quadrant decomposition.
Search for spatial objects using hierarchical minimum bounding rectangles.
Nearest neighbor search in metric space using hierarchical covers with exponential scale separation.
Nearest neighbor search in metric space using vantage point partitioning.
Search for nearest neighbors in metric space using hierarchical clustering with hyperspheres.
Search for nearest or range points in k-dimensional space using recursive axis splitting.
Search for all points within a multidimensional range using layered balanced search trees.
Search for all intervals that overlap a query interval using a balanced interval tree.
Generalized divide and conquer suffix array construction using modulo 7 partitioning.
Linear time suffix array construction using the DC3 divide and conquer strategy.
Build a suffix array using the prefix doubling technique with iterative ranking.
Search for a pattern in a suffix array using LCP information to reduce redundant character comparisons.
Search for a pattern in a text by binary searching the lexicographically sorted suffix array.
Search for a pattern in a text using a suffix tree in time proportional to the pattern length.
Search for a string key in a ternary search tree using character comparisons and three-way branching.
Search for a key in a Patricia trie using bitwise branching with path compression.
Search for a string or byte key in a radix tree using variable length edge labels.
Search for a string key in a trie whose unary paths are compressed into edge labels.
Search for a string key in a prefix tree by following character transitions.
Search for an integer key using a Y fast trie, combining hashing with balanced trees for linear space.
Search for an integer key in an X fast trie using hashed prefixes over a binary trie.
Search for an integer key in a van Emde Boas tree using recursive universe decomposition.
Search in a B* tree, a B tree variant that keeps nodes denser by redistributing entries before splitting.
Search in a B+ tree where all records are stored in leaves and internal nodes guide traversal.
Search for a key in a multiway balanced search tree optimized for block based storage.
Search in a binary search tree that keeps subtree sizes balanced to guarantee logarithmic height.
Search in a weight balanced binary search tree that rebuilds subtrees to preserve logarithmic height.
Search in a self-adjusting binary search tree that moves accessed nodes toward the root.
Search for a key in a binary search tree by exploiting ordering between nodes.
Search in a binary search tree maintained with randomization to achieve expected logarithmic height.
Search in a randomized binary search tree that also maintains heap order over priorities.
Search in a balanced binary search tree with relaxed balancing using color properties.
Search in a height balanced binary search tree with guaranteed logarithmic depth.
Search a sorted array by repeatedly halving the search interval.
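The halving loop itself, as a minimal sketch over a sorted Python list.

```python
def binary_search(a, target):
    """Halve the search interval until the target is found or the
    interval becomes empty. Returns an index or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

assert binary_search([2, 4, 6, 8, 10], 8) == 3
assert binary_search([2, 4, 6, 8, 10], 5) == -1
```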
Find the largest index at which a target value appears in a sequence.
Find the smallest index at which a target value appears in a sequence.
Linear search expressed recursively by reducing the problem size one element at a time.
Retrieve rows by probing a hash-based index structure using an exact key match.
Find matching records during a join by probing a hash table built from one relation.
Search for a pattern in text by comparing rolling hash values before verifying matching windows.
Resolve hash collisions by storing multiple elements in buckets using linked structures.
Retrieve a value by computing a hash and probing a table for the corresponding key.
Linear search restricted to a specified index range within a sequence.
Scan a sequence from right to left to find the last occurrence of a target value.
Linear search optimized by placing a sentinel value to eliminate boundary checks inside the loop.
Scan a sequence from left to right until the target value is found or the sequence ends.
Search for a pattern in a sequence by maintaining a hash over a sliding window.
Estimate set similarity and find similar items using compact MinHash signatures and banding.
Find near duplicate documents by comparing compact SimHash fingerprints under Hamming distance.
Search for approximate nearest neighbors by hashing similar items into the same buckets with high probability.
Map a key to the node that receives the highest hash score for that key.
Map a key to a node by placing both on a hash ring and selecting the next node clockwise.
Test approximate membership using a compact XOR-based filter with improved construction and cache locality.
Test approximate membership with a static fingerprint filter built from three hash locations and XOR constraints.
Test approximate membership by storing compact fingerprints in cuckoo hash table buckets.
Test approximate membership by splitting a hash fingerprint into quotient and remainder fields.
Test membership with a Bloom filter variant that supports deletions using small counters instead of bits.
Test set membership with a compact probabilistic bit array that allows false positives but no false negatives.
Look up a key using a collision free hash function that maps a static key set to exactly n table positions.
Look up a key in a static hash table whose hash function has no collisions for the stored key set.
Search in an open addressing hash table that keeps probe distances balanced across keys.
Search in a hash table that maintains elements within a small neighborhood of their home bucket.
Look up a key in a cuckoo hash table by checking a small fixed set of candidate positions.
Resolve hash collisions using a second hash function to define probe steps.
Resolve hash collisions by probing with quadratic offsets to reduce clustering.
Resolve hash collisions by scanning sequential slots until the key is found or an empty slot appears.
Find the last position where a monotone predicate remains true.
Find the range of indices containing all occurrences of a value in a sorted array.
Find the first index where elements become strictly greater than a given value.
Compute the floor of the square root of a nonnegative integer using binary search.
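A sketch of that search, treating `x * x <= n` as the monotone feasibility test; the name `isqrt_floor` is illustrative.

```python
def isqrt_floor(n):
    """Floor of the square root of a non-negative integer via binary
    search: find the largest x with x * x <= n."""
    lo, hi = 0, n
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid + 1   # mid is feasible; try something larger
        else:
            hi = mid - 1   # mid is too big
    return hi              # last feasible value

assert [isqrt_floor(n) for n in (0, 1, 8, 9, 10)] == [0, 1, 2, 3, 3]
```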
Find the boundary where a Boolean predicate changes value over an ordered domain.
Find a root of a continuous function by repeatedly halving an interval where the function changes sign.
Answer many monotone queries by processing them together after all input is known.
Answer many offline queries by searching their answers simultaneously with shared checks.
Search for a boundary by trying decreasing jumps whose lengths are powers of two.
Find a target state by jumping through powers of two in a precomputed lifting table.
Speed up repeated binary searches across related sorted lists by linking sampled elements between lists.
Search a matrix sorted by rows and columns using divide and conquer.
Search a matrix sorted by rows and columns using a monotone path from a corner.
Search a row-major sorted matrix by treating it as a virtual sorted array.
Find a peak element in a 2D grid using divide and conquer.
Find an element that is at least as large as its immediate neighbors.
Search a value in an array that first increases and then decreases.
Find the smallest element in a sorted array that has been rotated around an unknown pivot.
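A minimal sketch for the rotated-minimum case, assuming distinct elements so the comparison against the right endpoint is unambiguous.

```python
def rotated_min(a):
    """Smallest element of a sorted array rotated around an unknown pivot.
    The minimum is the unique rotation point."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > a[hi]:
            lo = mid + 1   # rotation point lies strictly to the right of mid
        else:
            hi = mid       # minimum is at mid or to its left
    return a[lo]

assert rotated_min([4, 5, 6, 7, 0, 1, 2]) == 0
assert rotated_min([1, 2, 3]) == 1
```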
Search a value in a sorted array that has been rotated around an unknown pivot.
Find the maximum or minimum of a unimodal function over a continuous domain.
Find the maximum or minimum of a unimodal function defined on discrete indices.
Search a sorted array by jumping in fixed steps, then performing a linear scan within a block.
Search a sorted array using Fibonacci numbers to divide the range.
Binary search variant that uses precomputed step sizes instead of dynamic midpoint calculation.
Search a sorted numeric array by estimating the likely position of the target from its value.
Expand the search range exponentially from a starting position, then apply binary search.
Locate a search interval by exponential growth, then apply binary search.
Find an optimal value by binary searching a monotone feasibility condition.
Find the first position where a monotone predicate becomes true.
Find the first position where a value can be inserted without violating sorted order.
Binary search implemented using recursion over a shrinking interval.
Binary search implemented using a loop without recursion.
Check whether a value exists in an unsorted sequence.
Find an index where an element is smaller than its neighbors using a linear scan.
Find the second largest element in a sequence.
Find the second smallest element in a sequence.
Find a majority element in linear time using cancellation.
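The cancellation idea in a short sketch (Boyer-Moore voting), with a second verification pass so inputs without a majority are rejected.

```python
def majority_element(nums):
    """Pair off differing values so only a value appearing more than n/2
    times can survive as the final candidate, then verify it."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    if nums.count(candidate) > len(nums) // 2:
        return candidate
    return None   # no majority element exists

assert majority_element([2, 2, 1, 1, 2, 2, 3]) == 2
assert majority_element([1, 2, 3]) is None
```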
Find a candidate element that may appear more than half the time using a linear scan.
Count the number of occurrences of a target value in a sequence.
Detect whether a sequence contains duplicate elements using a direct scan.
Find both minimum and maximum with fewer comparisons by processing elements in pairs.
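A sketch of the pairwise idea, which uses roughly 3n/2 comparisons instead of 2n; seeding for odd versus even length is the only fiddly part.

```python
def min_and_max(a):
    """Find both min and max by examining elements in pairs: compare the
    pair first, then the smaller against min and the larger against max."""
    if not a:
        raise ValueError("empty sequence")
    if len(a) % 2:                         # odd length: seed from the first element
        lo = hi = a[0]
        start = 1
    else:                                  # even length: seed from the first pair
        lo, hi = min(a[0], a[1]), max(a[0], a[1])
        start = 2
    for i in range(start, len(a), 2):
        small, big = (a[i], a[i + 1]) if a[i] <= a[i + 1] else (a[i + 1], a[i])
        lo = min(lo, small)
        hi = max(hi, big)
    return lo, hi

assert min_and_max([3, 7, 1, 9, 4, 4]) == (1, 9)
```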
Find both the minimum and maximum elements in a sequence using a single pass.
Find the index of the maximum element in an unsorted sequence.
Find the index of the minimum element in an unsorted sequence.
Find all indices where a target value appears in a sequence.
Step-by-step evolution of Turing machine configurations and how computations are represented as traces.
Understanding how the cost of an algorithm grows with input size.
How to think step by step and turn mathematical ideas into clear procedures.
Overview of algorithmic thinking, computational methods, complexity, approximation, and verification in mathematics.
Select the appropriate sorting algorithm based on input size, data characteristics, memory constraints, and required guarantees such as stability or worst-case bounds.
Identify boundary errors, broken invariants, and comparator mistakes that cause sorting implementations to fail on edge cases or duplicate-heavy inputs.
Verify both the ordering and permutation properties of sorting implementations using randomized, edge-case, and adversarial test strategies.
Distribute sorting work across multiple processors to reduce wall-clock time, with analysis of total work, span, communication, and synchronization.
Prove that any comparison-based sorting algorithm requires Ω(n log n) comparisons in the worst case using a decision tree argument.
Count pairs of elements in the wrong relative order to measure how far an array is from sorted, using a modified merge sort in O(n log n) time.
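A merge-sort-based inversion counter as a short sketch; the helper names are illustrative.

```python
def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] in O(n log n) by
    counting cross-inversions while merging two sorted halves."""
    def sort_count(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, inv_l = sort_count(xs[:mid])
        right, inv_r = sort_count(xs[mid:])
        merged, i, j, inv = [], 0, 0, inv_l + inv_r
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                inv += len(left) - i   # each remaining left element is inverted with right[j]
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, inv
    return sort_count(list(a))[1]

assert count_inversions([2, 4, 1, 3, 5]) == 3  # (2,1), (4,1), (4,3)
```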
Replace large or sparse keys with small dense ranks that preserve order, making range-based and indexed structures practical on wide-valued data.
Sort structured values by one or more fields while moving the full record, with attention to key extraction, stability, and multi-key ordering.
Define correct comparison relations for user-defined types and non-trivial orderings — consistency requirements that sorting correctness depends on.
Exploit near-sorted structure in inputs like append-only logs or incremental updates to sort in linear or near-linear time.
Sort datasets that exceed main memory by organizing the algorithm around sequential disk access, merge passes, and minimizing I/O operations.
Find the middle value of a collection in linear time using selection algorithms, without the overhead of a full sort.
Find the element at a given rank using quicksort's partition step but recursing into only one side, achieving expected linear time.
Find the largest or smallest k elements without sorting the full input, using a heap or partition-based approach.
Produce only the smallest k elements in sorted order rather than sorting the entire array, reducing unnecessary work when the full order is not needed.
A stable sort preserves the original relative order of equal keys — an extra guarantee required when sorting by secondary fields or compound criteria.
Distribute elements into buckets by value range, sort each bucket, then concatenate — achieving linear expected time on uniformly distributed input.
Sort keys digit by digit using a stable subroutine, achieving linear time for fixed-width integers without any key comparisons.
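A least-significant-digit sketch for non-negative integers, using per-digit buckets as the stable subroutine; the base and the sample keys are illustrative.

```python
def radix_sort(nums, base=10):
    """LSD radix sort: a stable bucket pass per digit, from least to most
    significant, so order established by earlier digits is preserved."""
    if not nums:
        return []
    out = list(nums)
    place = 1
    while place <= max(out):
        buckets = [[] for _ in range(base)]
        for x in out:
            buckets[(x // place) % base].append(x)   # stable: keep arrival order
        out = [x for bucket in buckets for x in bucket]
        place *= base
    return out

assert radix_sort([170, 45, 75, 90, 802, 24, 2, 66]) == [2, 24, 45, 66, 75, 90, 170, 802]
```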
Sort integer keys from a small range in linear time by counting occurrences and reconstructing the output from those counts.
Build a max-heap in place, then repeatedly extract the maximum to produce a sorted array in O(n log n) worst-case time.
Partition around a pivot so smaller elements go left and larger go right, then recursively sort each partition in expected O(n log n) time.
Divide the input into halves, recursively sort each half, then merge them — combining local order into global order in O(n log n) time.
Build a sorted prefix one element at a time by inserting each new element into its correct position within the already-sorted portion.
Sort by repeatedly selecting the minimum element from the unsorted suffix and placing it into the next output position.
A sorting algorithm is correct only when its output is both ordered and a permutation of the input — two properties every implementation must preserve.
Examine hash-based structures in complete systems: streaming pipelines, graph algorithms, caches, and distributed workflows.
Choose and implement hash tables that perform reliably under mixed key types, uneven access patterns, and adversarial input.
Design hash table layouts that minimize cache misses and align memory access patterns with hardware behavior.
Combine hash tables with other data structures to handle skewed distributions, heavy deletions, and mixed workloads.
Achieve predictable worst-case bounds for hash-based structures rather than relying solely on average-case expectations.
Verify correctness, stability, and performance of hash-based structures through randomized and adversarial test strategies.
Produce stable hash values that remain consistent across program runs, machines, builds, and language runtimes.
Defend hash tables against adversarial inputs that force worst-case collision behavior using randomized hashing.
Understand how memory hierarchy effects cause hash table performance to deviate from asymptotic expectations.
Remove duplicate entries from a dataset or stream efficiently using hash sets for membership tracking.
Join two collections by key using a hash table to reduce the cost from quadratic to linear expected time.
Distribute keys across a dynamic set of nodes so that adding or removing nodes moves only a minimal fraction of keys.
Approximate frequency counting for large key streams using a compact probabilistic data structure with bounded error.
Test set membership approximately using a compact bit array and multiple hash functions, with no false negatives and bounded false positives.
Compute hash values for sliding windows over a sequence in constant time by incrementally updating rather than recomputing.
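A polynomial rolling-hash sketch over fixed-length windows; the base and modulus are arbitrary illustrative choices, not recommended constants.

```python
def window_hashes(s, k, base=257, mod=(1 << 61) - 1):
    """Hashes of every length-k window of s: each new hash is derived from
    the previous one in O(1) by removing the leftmost character and
    appending the next one, instead of rehashing the whole window."""
    if k > len(s):
        return []
    top = pow(base, k - 1, mod)                 # weight of the leftmost character
    h = 0
    for c in s[:k]:
        h = (h * base + ord(c)) % mod
    hashes = [h]
    for i in range(k, len(s)):
        h = (h - ord(s[i - k]) * top) % mod     # drop the character leaving the window
        h = (h * base + ord(s[i])) % mod        # add the character entering the window
        hashes.append(h)
    return hashes

# Equal windows hash equally: "aba" occurs at offsets 0 and 2 of "ababa".
hs = window_hashes("ababa", 3)
assert hs[0] == hs[2]
```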
Hash and compare multi-field keys correctly by combining all fields that participate in equality into the hash function.
Partition a collection into groups by key using a hash map that accumulates values into per-key lists or sets.
Track frequency counts for keys using a hash map that increments a counter on each insertion of an existing key.
Build a hash-based map that associates keys with values and supports insert, lookup, delete, and update in expected constant time.
Implement a hash-based set for fast membership testing, insertion, and deletion without associated values.
Rebuild a hash table's bucket array after resizing so that every stored key satisfies the placement invariant for the new capacity.
Control hash table performance by monitoring the load factor and growing the bucket array before collisions accumulate.
Resolve hash collisions using separate chaining or open addressing, with trade-offs in memory, locality, and load tolerance.
Design and evaluate hash functions that distribute keys uniformly across buckets while remaining fast to compute.
A data structure that supports fast insertion, lookup, and deletion by mapping keys to bucket positions via a hash function.
This section combines multiple patterns from the chapter into complete, end-to-end linked list algorithms.
Linked list algorithms are dominated by pointer traversal and constant-time link updates.
Pointer code should be tested by checking structure, not only values.
Edge cases are inputs that sit near the boundary of an algorithm's assumptions.
An LRU cache stores a fixed number of key-value entries and removes the least recently used entry when capacity is exceeded.
A queue is a first-in, first-out structure.
A stack is a last-in, first-out (LIFO) structure.
An iterator is an object or procedure that visits the nodes of a linked list one at a time.
Memory ownership describes which part of a program is responsible for creating, linking, unlinking, and destroying a node.
An intrusive list stores the linkage fields inside the objects being linked.
A skip list augments a sorted linked list with multiple levels of forward pointers.
A persistent list is a list that preserves older versions after an update.
Pointer aliasing occurs when two or more references point to the same node.
A dummy head is a fixed node placed before the real head of a singly linked list.
Insertion adds nodes into a linked list by creating new links while preserving reachability of all existing nodes.
Deletion removes one or more nodes from a linked list by changing links around them.
Merging is a family of constructions that combine multiple linked lists into one or more output lists while preserving structural invariants.
Splitting a linked list means cutting one list into two or more lists while preserving the original nodes.
Merging combines two sorted singly linked lists into one sorted list by relinking nodes.
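A relinking sketch with a dummy head; the small `Node` class and the list-building helpers exist only to make the example self-contained.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sorted(a, b):
    """Merge two sorted singly linked lists by relinking existing nodes.
    A dummy head avoids special-casing the first node."""
    dummy = tail = Node(0)
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b        # append whatever remains of the longer list
    return dummy.next

def from_list(vals):
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_list(node):
    out = []
    while node:
        out.append(node.val); node = node.next
    return out

assert to_list(merge_sorted(from_list([1, 3, 5]), from_list([2, 4]))) == [1, 2, 3, 4, 5]
```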
Fast and slow pointers are two references that traverse the same linked structure at different speeds.
A cycle exists in a linked list when some node’s `next` pointer eventually leads back to a previously visited node.
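Floyd's tortoise-and-hare check as a minimal sketch, with a tiny `Node` class added so the example runs on its own.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def has_cycle(head):
    """A slow pointer advances one node per step and a fast pointer two;
    they can only meet if some next pointer loops back."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False

# Build 1 -> 2 -> 3 and point 3 back at 2 to form a cycle.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, b
assert has_cycle(a) is True
c.next = None
assert has_cycle(a) is False
```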
Reversal transforms a linked list so that the direction of all edges is flipped.
A sentinel node is an artificial node placed at the boundary of a linked list.
A doubly linked list is a sequence of nodes where each node stores a value, a reference to the next node, and a reference to the previous node.
A singly linked list is a sequence of nodes where each node stores a value and a reference to the next node.
Linked lists are the first data structure in this book where the shape of the data matters as much as the values stored inside it.
Boundary conditions define the valid domain of indices, ranges, and states in an algorithm.
Array and string problems often look different on the surface, but many reduce to a small number of reusable patterns.
Flood fill explores a connected region in a grid starting from a seed cell and marks or transforms all cells that belong to the same region.
Spiral traversal visits a matrix layer by layer, moving right across the top row, down the right column, left across the bottom row, and up the left column, then repeating on the next inner layer.
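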
Matrix traversal processes a two-dimensional array in a defined order.
A trie is a tree structure for storing a set of strings so that common prefixes are shared.
Parsing expressions converts a sequence of tokens into a structured form that reflects operator precedence and associativity.
String comparison determines the ordering or equality of two strings.
Rolling hashes assign numeric fingerprints to substrings so that many substring comparisons can be done quickly.
Run-Length Encoding (RLE) compresses sequences by replacing consecutive equal values with a pair `(value, count)`.
A frequency table records how many times each value appears in an array, string, or stream.
Anagrams are strings or sequences that contain the same elements with the same multiplicities, possibly in different order.
A palindrome is a sequence that reads the same forward and backward.
Substring search locates occurrences of a pattern `p` inside a text `s`.
Tokenization converts a string into a sequence of meaningful units called tokens.
String scanning is the basic operation behind parsing, tokenization, validation, search, and text normalization.
Deduplication removes repeated values while preserving a chosen notion of identity and, optionally, order.
In-place modification changes an array without allocating another array of the same size.
Array rotation moves elements by a fixed offset while preserving their relative circular order.
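The three-reversal trick for rotation, sketched in place on a Python list.

```python
def rotate_right(a, k):
    """Rotate a list right by k positions in place using three reversals:
    reverse the whole array, then each of the two blocks."""
    n = len(a)
    if n == 0:
        return a
    k %= n
    def rev(i, j):
        while i < j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    rev(0, n - 1)       # whole array
    rev(0, k - 1)       # the block that wrapped around to the front
    rev(k, n - 1)       # the rest
    return a

assert rotate_right([1, 2, 3, 4, 5], 2) == [4, 5, 1, 2, 3]
```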
Partitioning rearranges an array so that elements are grouped by a predicate.
Sliding windows maintain a contiguous subarray `[l, r)` while both endpoints move forward.
The two pointers technique uses two indices that move through an array or string in a controlled way.
A difference array is the inverse pattern of a prefix sum.
Prefix sums are a preprocessing technique for answering repeated range sum queries on an array.
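A minimal prefix-sum sketch showing the O(1) range-sum query after O(n) preprocessing.

```python
def build_prefix(a):
    """prefix[i] holds the sum of a[0:i], so any range sum a[l:r]
    is prefix[r] - prefix[l] in constant time."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return prefix

a = [3, 1, 4, 1, 5, 9]
prefix = build_prefix(a)
assert prefix[5] - prefix[2] == sum(a[2:5])  # 4 + 1 + 5 == 10
```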
Array traversal is the base operation for all algorithms over linear data.
This chapter studies linear data as the primary substrate for algorithm design.
Even when the algorithmic idea is correct, implementations fail in predictable ways.
Benchmarking measures how an implementation behaves on real inputs and real hardware.
Stability and determinism describe how predictably an algorithm behaves when there are ties, repeated values, or multiple valid answers.
Algorithms are usually described with mathematical integers and real numbers.
Data representation is the choice of concrete form used to store the objects in a problem.
Implementation discipline means translating an algorithm into code without changing its meaning accidentally.
Pseudocode is a bridge between the problem statement and an implementation.
A reduction transforms one problem into another problem.
Amortized analysis studies the average cost of operations over a sequence, even when individual operations are sometimes expensive.
Randomized algorithms use random choices during execution.
Dynamic programming solves problems by storing answers to subproblems and reusing them.
Divide and conquer solves a problem by splitting it into smaller subproblems, solving those subproblems, and combining their answers.
A greedy algorithm builds a solution by making one locally best choice at a time.
A brute force baseline is the simplest correct algorithm you can write from the problem statement.
Testing does not prove an algorithm correct, but it exposes mistakes in specifications, invariants, edge cases, and implementation details.
Edge cases are valid inputs that sit at the boundary of the specification.
A lower bound states that every algorithm for a problem must perform at least a certain amount of work in some model of computation.
Big O notation provides a formal way to describe how a function grows.
Space complexity measures how much memory an algorithm uses as a function of input size.
Time complexity describes how the running time of an algorithm grows as the input size grows.
Recursive algorithms replace loop structure with self-reference.
Loop invariants are the primary tool for reasoning about iterative algorithms.
A correctness argument explains why an algorithm returns an acceptable output for every valid input.
An algorithm does not operate on an abstract idea of data.
An algorithm begins with a precise statement of the problem.
This chapter defines how you think about algorithms before you write any code.
This volume studies theoretical and practical foundations of computation.
Sorting algorithms from fundamental contracts through comparison-based, linear-time, and specialized variants, with selection, external sorting, and correctness analysis.
Hash tables, hash functions, collision handling, probabilistic structures, and practical design for constant-time key-value access.