Linked list algorithms are dominated by pointer traversal and constant-time link updates. The key to correct analysis is to count how many nodes are visited and how many pointers are changed. Unlike arrays, there is no random access, so position-based operations depend on traversal length.
Cost Model
Use these basic costs:
- read next pointer: O(1)
- write next pointer: O(1)
- allocate node: O(1)
- free node: O(1)
- compare values: O(1)

A loop that advances `cur = cur.next` over k nodes costs O(k).
Traversal
Full traversal:
```
cur = head
while cur != null:
    cur = cur.next
```

visits each node once.
| Operation | Time | Space |
|---|---|---|
| Traverse n nodes | O(n) | O(1) |
Early exit reduces cost to O(k) where k is the index of the stop condition.
Search
Linear search:
```
cur = head
while cur != null:
    if cur.value == target:
        return cur
    cur = cur.next
```

| Case | Time |
|---|---|
| Best (first node) | O(1) |
| Average | O(n) |
| Worst (not found or last) | O(n) |
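As a runnable sketch, the linear search above might look like the following in Python (the `Node` class and `find` name are illustrative, not from the text):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find(head, target):
    """Linear search: O(1) best case, O(n) average and worst case."""
    cur = head
    while cur is not None:
        if cur.value == target:
            return cur          # early exit: only k nodes visited
        cur = cur.next
    return None                 # worst case: all n nodes visited

head = Node(1, Node(2, Node(3)))
print(find(head, 2).value)      # 2
```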
Insertion
Insertion after a known node is constant.
```
new.next = cur.next
cur.next = new
```

| Pattern | Time |
|---|---|
| Insert at head | O(1) |
| Insert after known node | O(1) |
| Insert at tail with tail pointer | O(1) |
| Insert at position k (unknown predecessor) | O(k) |
Without a tail pointer, inserting at the end requires O(n) traversal.
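The constant-time insertion patterns can be sketched as follows (a minimal Python illustration; the helper names are assumptions):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    """Insert at head: O(1), one allocation and one link write."""
    return Node(value, head)

def insert_after(node, value):
    """Insert after a known node: O(1), no traversal needed."""
    node.next = Node(value, node.next)

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = push_front(push_front(None, 3), 1)   # list: 1 -> 3
insert_after(head, 2)                       # list: 1 -> 2 -> 3
print(to_list(head))  # [1, 2, 3]
```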
Deletion
Deletion after a known predecessor is constant.
```
prev.next = cur.next
```

| Pattern | Time |
|---|---|
| Delete head | O(1) |
| Delete after known node | O(1) |
| Delete by value | O(n) |
| Delete nth from end | O(n) |
In singly linked lists, deletion of a node without its predecessor requires either traversal or special techniques.
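One such special technique is to copy the successor's value into the node and bypass the successor, trading a traversal for a value copy. A minimal Python sketch (names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def delete_without_prev(node):
    """Delete `node` from a singly linked list in O(1) without its predecessor.
    Copies the successor's value into `node`, then bypasses the successor.
    Cannot delete the tail, since there is no successor to copy from."""
    if node.next is None:
        raise ValueError("cannot delete the tail without its predecessor")
    node.value = node.next.value
    node.next = node.next.next

head = Node(1, Node(2, Node(3)))
delete_without_prev(head.next)   # remove the node holding 2

values = []
cur = head
while cur is not None:
    values.append(cur.value)
    cur = cur.next
print(values)  # [1, 3]
```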
Reversal
Iterative reversal:
```
prev = null
cur = head
while cur != null:
    ...
```

Each node is processed once.
| Operation | Time | Space |
|---|---|---|
| Reverse list | O(n) | O(1) |
Recursive reversal uses O(n) stack space.
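The full iterative reversal, sketched in Python (the `Node` class is an assumed minimal definition):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Iterative reversal: O(n) time, O(1) extra space."""
    prev = None
    cur = head
    while cur is not None:
        nxt = cur.next      # save the forward link before overwriting it
        cur.next = prev     # reverse one pointer
        prev = cur
        cur = nxt
    return prev             # prev ends at the old tail, the new head

head = reverse(Node(1, Node(2, Node(3))))

values = []
while head is not None:
    values.append(head.value)
    head = head.next
print(values)  # [3, 2, 1]
```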
Splitting
Splitting at position k:
| Operation | Time |
|---|---|
| Split after k nodes | O(k) |
| Split into halves (fast/slow) | O(n) |
The fast/slow pointer method still traverses the list once.
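A fast/slow split into halves might be sketched like this in Python (illustrative names; for odd lengths this variant gives the extra node to the front half):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def split_halves(head):
    """Single fast/slow pass: O(n) time. Returns (front, back)."""
    if head is None or head.next is None:
        return head, None
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow = slow.next            # one step
        fast = fast.next.next       # two steps
    back = slow.next
    slow.next = None                # cut the list after the front half
    return head, back

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

front, back = split_halves(Node(1, Node(2, Node(3, Node(4, Node(5))))))
print(to_list(front), to_list(back))  # [1, 2, 3] [4, 5]
```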
Merging
Merging two lists of sizes n and m:
| Operation | Time | Space |
|---|---|---|
| Merge two sorted lists | O(n + m) | O(1) |
| Merge k lists with heap | O(N log k) | O(k) |
Each node is processed exactly once in the two-list merge.
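The two-list merge can be sketched with a dummy head node to avoid special-casing the first link (Python; the names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_sorted(a, b):
    """Merge two sorted lists by relinking nodes: O(n + m) time, O(1) space."""
    dummy = tail = Node(None)           # sentinel simplifies the first link
    while a is not None and b is not None:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b   # splice in the leftover suffix
    return dummy.next

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

merged = merge_sorted(Node(1, Node(3)), Node(2, Node(4)))
print(to_list(merged))  # [1, 2, 3, 4]
```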
Two-Pointer Techniques
Fast/slow pointer methods traverse the list once.
| Operation | Time | Space |
|---|---|---|
| Find middle | O(n) | O(1) |
| Detect cycle | O(n) | O(1) |
Even though one pointer moves faster, total work remains linear.
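Cycle detection with Floyd's tortoise-and-hare, sketched in Python (the `Node` class is an assumed minimal definition):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Floyd's tortoise-and-hare: O(n) time, O(1) space."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next            # one step
        fast = fast.next.next       # two steps
        if slow is fast:            # pointers can only meet inside a cycle
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))   # False
c.next = b            # create a cycle: 3 -> 2
print(has_cycle(a))   # True
```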
Stack and Queue
Using linked lists:
| Structure | Operation | Time |
|---|---|---|
| Stack | push | O(1) |
| Stack | pop | O(1) |
| Queue | enqueue (with tail) | O(1) |
| Queue | dequeue | O(1) |
Without a tail pointer, enqueue becomes O(n).
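A queue with both head and tail pointers can be sketched like this in Python (class and method names are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class Queue:
    """FIFO queue on a singly linked list with head and tail pointers."""
    def __init__(self):
        self.head = self.tail = None

    def enqueue(self, value):
        """O(1): the tail pointer avoids an O(n) traversal to the end."""
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        """O(1): unlink from the head."""
        if self.head is None:
            raise IndexError("dequeue from empty queue")
        node = self.head
        self.head = node.next
        if self.head is None:       # queue became empty: reset tail too
            self.tail = None
        return node.value

q = Queue()
q.enqueue(1); q.enqueue(2)
print(q.dequeue(), q.dequeue())  # 1 2
```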
LRU Cache
Combining hash map and doubly linked list:
| Operation | Time |
|---|---|
| get | O(1) average |
| put | O(1) average |
| eviction | O(1) |
The list handles ordering. The map handles lookup.
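A compact sketch of this combination in Python (a minimal illustration, not a production implementation; sentinel nodes at both ends keep the link updates branch-free):

```python
class LRUCache:
    """dict for O(1) lookup + doubly linked list for O(1) recency ordering."""
    class _Node:
        __slots__ = ("key", "value", "prev", "next")
        def __init__(self, key=None, value=None):
            self.key, self.value = key, value
            self.prev = self.next = None

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head = self._Node()    # sentinel: most-recent end
        self.tail = self._Node()    # sentinel: least-recent end
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)          # O(1): move to most-recent position
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._unlink(self.map[key])
        node = self._Node(key, value)
        self.map[key] = node
        self._push_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev    # O(1) eviction from the least-recent end
            self._unlink(lru)
            del self.map[lru.key]

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")          # touch "a" so "b" becomes least recent
cache.put("c", 3)       # evicts "b"
print(cache.get("b"), cache.get("c"))  # None 3
```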
Amortized Analysis
Some operations appear constant but rely on occasional expensive steps.
Example: dynamic resizing of auxiliary structures or batch operations.
Linked list operations themselves are usually worst-case O(1) or O(n), so amortized analysis appears more often in structures built on top of lists, such as skip lists or buffered queues.
Space Complexity
A singly linked list stores one pointer per node; a doubly linked list stores two. In both cases the space is O(n).

Additional overhead depends on the algorithm:
| Algorithm | Extra Space |
|---|---|
| Iterative traversal | O(1) |
| Recursive traversal | O(n) stack |
| Merge with heap | O(k) |
| Snapshot iterator | O(n) |
| Persistent list update | O(k) |
Cache Behavior
Linked lists have poor cache locality compared to arrays. Nodes may be scattered in memory.
Implications:
- Traversal may be slower in practice than array traversal, even with the same O(n) bound
- Pointer chasing limits instruction-level parallelism
- Prefetching is less effective
For performance-critical code, arrays or contiguous structures may be preferable.
Lower Bounds
Without additional structure:
- Searching an unsorted linked list requires O(n) time
- Finding the kth element requires O(k) time
- Random access is not possible in sublinear time
These bounds follow from the sequential access model.
Common Misanalysis
A frequent mistake is assuming constant time for operations that require traversal.
Example:
insert at position k is O(k), not O(1), unless the predecessor is already known.
Another mistake is ignoring the cost of maintaining auxiliary pointers, such as updating tail or maintaining doubly linked consistency.
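The first mistake is easy to see in code: the link update itself is constant, but reaching the predecessor is not. A Python sketch (illustrative names):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_at(head, k, value):
    """O(k), not O(1): k - 1 links must be walked to find the predecessor."""
    if k == 0:
        return Node(value, head)    # new head: the only O(1) case
    prev = head
    for _ in range(k - 1):          # traversal is the dominant cost
        prev = prev.next
    prev.next = Node(value, prev.next)  # the update itself is O(1)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = insert_at(Node(1, Node(3)), 1, 2)
print(to_list(head))  # [1, 2, 3]
```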
Summary Table
| Operation | Time | Space |
|---|---|---|
| Traverse | O(n) | O(1) |
| Search | O(n) | O(1) |
| Insert (known position) | O(1) | O(1) |
| Insert (by index) | O(n) | O(1) |
| Delete (known predecessor) | O(1) | O(1) |
| Delete (by value) | O(n) | O(1) |
| Reverse | O(n) | O(1) |
| Merge | O(n + m) | O(1) |
| Split | O(n) | O(1) |
Key Insight
The dominant cost in linked list algorithms is traversal. Once a node is located, local updates are constant time. Efficient designs either avoid repeated traversal, store additional pointers such as tail, or change the problem so that required nodes are already known when updates occur.