3.24 Complexity Analysis

Linked list algorithms are dominated by pointer traversal and constant-time link updates. The key to correct analysis is to count how many nodes are visited and how many pointers are changed. Unlike arrays, there is no random access, so position-based operations depend on traversal length.

Cost Model

Use these basic costs:

  • read next pointer: O(1)
  • write next pointer: O(1)
  • allocate node: O(1)
  • free node: O(1)
  • compare values: O(1)

A loop that advances cur = cur.next over k nodes costs O(k).

Traversal

Full traversal:

cur = head
while cur != null:
  cur = cur.next

This loop visits each node exactly once.

Operation          Time   Space
Traverse n nodes   O(n)   O(1)

Early exit reduces cost to O(k) where k is the index of the stop condition.
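The traversal pattern above can be sketched as runnable Python. The Node class and the from_list and length helpers are names assumed for this sketch, not part of any library:

```python
# Minimal sketch of full traversal, assuming a hand-rolled Node class.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    """Build a singly linked list from a Python list (example helper)."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def length(head):
    """Full traversal: visits each of the n nodes once -> O(n) time, O(1) space."""
    count = 0
    cur = head
    while cur is not None:   # advancing cur = cur.next over n nodes costs O(n)
        count += 1
        cur = cur.next
    return count
```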

Search

Linear search:

cur = head
while cur != null:
  if cur.value == target:
    return cur
  cur = cur.next
return null

Case                        Time
Best (first node)           O(1)
Average                     O(n)
Worst (not found or last)   O(n)
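The search loop can be written as runnable Python; Node and from_list are helpers assumed for illustration:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    """Build a singly linked list from a Python list (example helper)."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def find(head, target):
    """Linear search: O(1) if the first node matches, O(n) on average and
    in the worst case (match at the end, or no match at all)."""
    cur = head
    while cur is not None:
        if cur.value == target:
            return cur
        cur = cur.next
    return None
```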

Insertion

Insertion after a known node takes constant time.

new.next = cur.next
cur.next = new

Pattern                                      Time
Insert at head                               O(1)
Insert after known node                      O(1)
Insert at tail with tail pointer             O(1)
Insert at position k (unknown predecessor)   O(k)

Without a tail pointer, inserting at the end requires O(n) traversal.
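Both constant-time patterns can be sketched in Python. The Node class and the helpers (from_list, to_list, push_front, insert_after) are names assumed for this sketch:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def push_front(head, value):
    """Insert at head: one allocation, one pointer write -> O(1)."""
    return Node(value, head)

def insert_after(node, value):
    """Insert after a known node: two pointer writes -> O(1)."""
    node.next = Node(value, node.next)
    return node.next

# demo: 1 -> 3 becomes 1 -> 2 -> 3 with no traversal at all
head = push_front(from_list([3]), 1)
insert_after(head, 2)
```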

Deletion

Deletion after a known predecessor takes constant time.

prev.next = cur.next

Pattern                   Time
Delete head               O(1)
Delete after known node   O(1)
Delete by value           O(n)
Delete nth from end       O(n)

In a singly linked list, deleting a node without a reference to its predecessor requires either an O(n) traversal to find that predecessor or a special technique, such as copying the successor's value into the node and unlinking the successor instead.
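A sketch of the two deletion patterns, assuming the same illustrative Node, from_list, and to_list helpers as before:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def delete_after(prev):
    """Unlink prev's successor: one pointer write -> O(1)."""
    if prev.next is not None:
        prev.next = prev.next.next

def delete_value(head, target):
    """Delete the first node holding target: O(n) traversal to find the
    predecessor, then a constant-time unlink."""
    if head is None:
        return None
    if head.value == target:
        return head.next          # delete head: O(1)
    prev = head
    while prev.next is not None and prev.next.value != target:
        prev = prev.next
    delete_after(prev)            # O(1) once prev is known (no-op if not found)
    return head
```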

Reversal

Iterative reversal:

prev = null
cur = head
while cur != null:
  next = cur.next
  cur.next = prev
  prev = cur
  cur = next

Each node is processed once.

Operation      Time   Space
Reverse list   O(n)   O(1)

Recursive reversal uses O(n) stack space.

Splitting

Splitting at position k:

Operation                       Time
Split after k nodes             O(k)
Split into halves (fast/slow)   O(n)

The fast/slow pointer method still traverses the list once.
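The fast/slow split can be sketched as follows; Node, from_list, to_list, and split_halves are names assumed for this example:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def split_halves(head):
    """Split into two halves in one pass with fast/slow pointers: O(n) time,
    O(1) space. For odd lengths the extra node stays in the first half."""
    if head is None or head.next is None:
        return head, None
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next   # fast covers two nodes per step
    second = slow.next
    slow.next = None            # cut the list after the first half
    return head, second
```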

Merging

Merging two lists of sizes n and m:

Operation                 Time         Space
Merge two sorted lists    O(n + m)     O(1)
Merge k lists with heap   O(N log k)   O(k)

Here N denotes the total number of nodes across all k lists.

Each node is processed exactly once in the two-list merge.
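The two-list merge can be sketched by relinking existing nodes, which is what keeps the extra space at O(1); Node, from_list, and to_list are the same illustrative helpers:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def merge_sorted(a, b):
    """Merge two sorted lists by relinking nodes: O(n + m) time, O(1) space."""
    dummy = Node(None)    # sentinel node simplifies the first link
    tail = dummy
    while a is not None and b is not None:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b   # append whichever list remains
    return dummy.next
```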

Two-Pointer Techniques

Fast/slow pointer methods traverse the list once.

Operation      Time   Space
Find middle    O(n)   O(1)
Detect cycle   O(n)   O(1)

Even though one pointer moves faster, total work remains linear.
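Cycle detection illustrates the linear bound; this sketch uses Floyd's fast/slow method, with Node again assumed for illustration:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle(head):
    """Floyd's fast/slow cycle detection: O(n) time, O(1) space.

    The fast pointer moves two steps per iteration, so the total number
    of pointer advances is still proportional to n."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:       # pointers meet only if a cycle exists
            return True
    return False

# demo: a straight list and a list whose tail loops back
straight = Node(1, Node(2, Node(3)))
x, y, z = Node(1), Node(2), Node(3)
x.next, y.next, z.next = y, z, y   # 1 -> 2 -> 3 -> 2 -> ...
```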

Stack and Queue

Using linked lists:

Structure   Operation             Time
Stack       push                  O(1)
Stack       pop                   O(1)
Queue       enqueue (with tail)   O(1)
Queue       dequeue               O(1)

Without a tail pointer, enqueue becomes O(n).
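A queue sketch shows why the tail pointer matters: enqueue touches only the tail, never the whole list. Node and LinkedQueue are names assumed for this example:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedQueue:
    """FIFO queue on a singly linked list; the tail pointer makes enqueue O(1)."""
    def __init__(self):
        self.head = None   # dequeue end
        self.tail = None   # enqueue end

    def enqueue(self, value):
        node = Node(value)
        if self.tail is None:       # empty queue
            self.head = self.tail = node
        else:
            self.tail.next = node   # O(1): no traversal to reach the end
            self.tail = node

    def dequeue(self):
        node = self.head            # assumes a non-empty queue
        self.head = node.next
        if self.head is None:       # queue became empty; reset tail too
            self.tail = None
        return node.value

# demo
q = LinkedQueue()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
```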

LRU Cache

Combining hash map and doubly linked list:

Operation   Time
get         O(1) average
put         O(1) average
eviction    O(1)

The list handles ordering. The map handles lookup.
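A minimal sketch of this combination, with hypothetical names DNode and LRUCache; sentinel head and tail nodes avoid edge cases when splicing:

```python
class DNode:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Hash map for O(1) lookup + doubly linked list for O(1) recency updates."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head, self.tail = DNode(), DNode()          # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)        # move to front: O(1) thanks to prev links
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev  # least recently used sits before the tail
            self._unlink(lru)     # eviction: O(1)
            del self.map[lru.key]
        node = DNode(key, value)
        self.map[key] = node
        self._push_front(node)

# demo
cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # touch "a", so "b" becomes least recently used
cache.put("c", 3)    # over capacity: evicts "b" in O(1)
```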

Amortized Analysis

Some operations appear constant but rely on occasional expensive steps.

Example: dynamic resizing of auxiliary structures or batch operations.

Linked list operations themselves are usually worst-case O(1) or O(n), so amortized analysis appears more often in structures built on top of lists, such as skip lists or buffered queues.

Space Complexity

A singly linked list stores one pointer per node.

space = O(n)

A doubly linked list stores two pointers per node; the asymptotic bound is the same, but the constant factor per node is larger.

space = O(n)

Additional overhead depends on the algorithm:

Algorithm                Extra Space
Iterative traversal      O(1)
Recursive traversal      O(n) stack
Merge with heap          O(k)
Snapshot iterator        O(n)
Persistent list update   O(k)

Cache Behavior

Linked lists have poor cache locality compared to arrays. Nodes may be scattered in memory.

Implications:

  • Traversal may be slower in practice than array traversal, even with the same O(n) bound
  • Pointer chasing limits instruction-level parallelism
  • Prefetching is less effective

For performance-critical code, arrays or contiguous structures may be preferable.

Lower Bounds

Without additional structure:

  • Searching an unsorted linked list requires O(n) time
  • Finding the kth element requires O(k) time
  • Random access is not possible in sublinear time

These bounds follow from the sequential access model.

Common Misanalysis

A frequent mistake is assuming constant time for operations that require traversal.

Example:

insert at position k

is O(k), not O(1), unless the predecessor is already known.

Another mistake is ignoring the cost of maintaining auxiliary pointers, such as updating tail or maintaining doubly linked consistency.
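The first mistake can be made concrete with a short sketch; insert_at and the list helpers are hypothetical names for illustration:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def insert_at(head, k, value):
    """Insert at position k: the relink itself is O(1), but reaching the
    predecessor costs O(k) -- the traversal dominates."""
    if k == 0:
        return Node(value, head)
    prev = head
    for _ in range(k - 1):              # the often-forgotten O(k) walk
        prev = prev.next
    prev.next = Node(value, prev.next)  # O(1) once prev is known
    return head
```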

Summary Table

Operation                    Time       Space
Traverse                     O(n)       O(1)
Search                       O(n)       O(1)
Insert (known position)      O(1)       O(1)
Insert (by index)            O(n)       O(1)
Delete (known predecessor)   O(1)       O(1)
Delete (by value)            O(n)       O(1)
Reverse                      O(n)       O(1)
Merge                        O(n + m)   O(1)
Split                        O(n)       O(1)

Key Insight

The dominant cost in linked list algorithms is traversal. Once a node is located, local updates are constant time. Efficient designs either avoid repeated traversal, store additional pointers such as tail, or change the problem so that required nodes are already known when updates occur.