An LRU cache stores a fixed number of key-value entries and removes the least recently used entry when capacity is exceeded. It combines two data structures: a hash map for O(1) lookup and a doubly linked list for O(1) recency updates.
The hash map answers “where is the entry for this key?” The list answers “which entry was used least recently?”
Problem
Design a cache with these operations:
```
get(key) -> value or missing
put(key, value)
```
Both operations should run in O(1) average time.
When an entry is accessed or inserted, it becomes the most recently used entry. When the cache exceeds capacity, the least recently used entry is removed.
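Before building the structure by hand, the required behavior can be sketched with Python's `OrderedDict`, which already supports the recency operations we need. This is a stand-in for illustration, not the hash-map-plus-list design developed below:

```python
from collections import OrderedDict

cache = OrderedDict()  # most recent entries kept at the right end
capacity = 2

def put(key, value):
    if key in cache:
        cache.move_to_end(key)  # writing counts as use
    cache[key] = value
    if len(cache) > capacity:
        cache.popitem(last=False)  # evict least recently used (left end)

def get(key):
    if key not in cache:
        return None  # "missing"
    cache.move_to_end(key)  # reading counts as use
    return cache[key]

put(1, "A")
put(2, "B")
get(1)       # key 1 becomes most recently used
put(3, "C")  # capacity exceeded: key 2 is evicted
```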
Structure
Each cache entry is stored as a linked list node.
```
Node {
    key
    value
    prev : Node | null
    next : Node | null
}
```
The cache stores:
```
Cache {
    capacity
    map : key -> Node
    head : Node
    tail : Node
}
```
Use two sentinel nodes:
```
head <-> most_recent <-> ... <-> least_recent <-> tail
```
The real entries are between head and tail.
The empty cache is:
```
head.next = tail
tail.prev = head
```
List Operations
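A minimal Python sketch of this empty-cache setup, with names matching the pseudocode above:

```python
class Node:
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

# Two sentinels; real entries will live between them.
head = Node()
tail = Node()
head.next = tail
tail.prev = head
cache_map = {}  # empty cache: head <-> tail, no keys in the map
```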
The cache needs two primitive list operations.
Remove a node:
```
remove(node):
    left = node.prev
    right = node.next
    left.next = right
    right.prev = left
    node.prev = null
    node.next = null
```
Insert after head, making the node most recently used:
```
insert_front(node):
    first = head.next
    node.prev = head
    node.next = first
    head.next = node
    first.prev = node
```
These operations are O(1).
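A direct Python translation of the two primitives, as a self-contained sketch (the sentinels are module-level here for brevity; in a real cache they would live on the cache object):

```python
class Node:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

head, tail = Node(), Node()        # sentinels
head.next, tail.prev = tail, head  # empty list

def remove(node):
    # Unlink node by wiring its neighbors to each other.
    left, right = node.prev, node.next
    left.next = right
    right.prev = left
    node.prev = node.next = None

def insert_front(node):
    # Splice node in directly after the head sentinel.
    first = head.next
    node.prev, node.next = head, first
    head.next = node
    first.prev = node

a, b = Node(1, "A"), Node(2, "B")
insert_front(a)  # head <-> 1 <-> tail
insert_front(b)  # head <-> 2 <-> 1 <-> tail
remove(a)
insert_front(a)  # head <-> 1 <-> 2 <-> tail: a is most recent again
```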
Get
```
get(key):
    if key not in map:
        return missing
    node = map[key]
    remove(node)
    insert_front(node)
    return node.value
```
A successful read updates recency. The accessed node moves to the front.
Put
```
put(key, value):
    if key in map:
        node = map[key]
        node.value = value
        remove(node)
        insert_front(node)
        return
    node = Node(key, value)
    map[key] = node
    insert_front(node)
    if map.size > capacity:
        victim = tail.prev
        remove(victim)
        delete map[victim.key]
```
When the capacity is exceeded, the victim is the node immediately before tail.
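Putting the pieces together, the whole design fits in a short runnable Python class. This is a sketch of the pseudocode above; error handling and thread safety are omitted:

```python
class Node:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head = Node()  # sentinel
        self.tail = Node()  # sentinel
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev
        node.prev = node.next = None

    def _insert_front(self, node):
        first = self.head.next
        node.prev, node.next = self.head, first
        self.head.next = node
        first.prev = node

    def get(self, key):
        if key not in self.map:
            return None  # "missing"
        node = self.map[key]
        self._remove(node)        # move to front: reading counts as use
        self._insert_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._remove(node)    # writing counts as use too
            self._insert_front(node)
            return
        node = Node(key, value)
        self.map[key] = node
        self._insert_front(node)
        if len(self.map) > self.capacity:
            victim = self.tail.prev   # least recently used
            self._remove(victim)
            del self.map[victim.key]  # keep map and list in agreement

# Mirror the worked example below: capacity 2
cache = LRUCache(2)
cache.put(1, "A")
cache.put(2, "B")
cache.get(1)       # key 1 becomes most recent
cache.put(3, "C")  # evicts key 2
```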
Invariant
The hash map and linked list must agree.
For every key in map, map[key] is exactly one real node in the list.
Every real node in the list has its key in map.
head and tail are sentinels and never appear in map.
The order of real nodes represents recency:
closer to head means more recently used
closer to tail means less recently used
Example
Capacity is 2.
```
put(1, A)
put(2, B)
```
List:
```
head <-> 2 <-> 1 <-> tail
```
Access key 1.
```
get(1)
```
List:
```
head <-> 1 <-> 2 <-> tail
```
Insert key 3.
```
put(3, C)
```
Key 2 is least recently used and is evicted.
```
head <-> 3 <-> 1 <-> tail
```
Map contains keys 1 and 3.
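The map-and-list invariant can be verified mechanically by walking the real nodes between the sentinels and comparing them against the map. A hypothetical `check_invariant` helper, with a tiny hand-built list for demonstration:

```python
class Node:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

def check_invariant(head, tail, cache_map):
    """Walk the real nodes and verify that list and map agree."""
    keys_in_order = []
    node = head.next
    while node is not tail:
        # Every real node's key must map back to this exact node.
        assert cache_map[node.key] is node
        keys_in_order.append(node.key)
        node = node.next
    # Every map entry must correspond to exactly one real node.
    assert len(keys_in_order) == len(cache_map)
    return keys_in_order  # most recently used first

# Tiny demonstration: head <-> 1 <-> 2 <-> tail
head, tail = Node(), Node()
n1, n2 = Node(1, "A"), Node(2, "B")
head.next, n1.prev = n1, head
n1.next, n2.prev = n2, n1
n2.next, tail.prev = tail, n2
order = check_invariant(head, tail, {1: n1, 2: n2})
```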
Complexity
| Operation | Time | Space |
|---|---|---|
| Lookup in map | O(1) average | O(1) |
| Move node to front | O(1) | O(1) |
| Insert entry | O(1) average | O(1) |
| Evict least recent | O(1) | O(1) |
| Total storage | n/a | O(capacity) |
Why a Singly Linked List Is Not Enough
Eviction removes the tail-side entry. Moving an accessed node also requires removing it from its current position. In a singly linked list, removing a known node requires finding its predecessor, which costs O(n). A doubly linked list stores the predecessor directly, so removal is O(1).
Common Failure Modes
A common bug is removing a node from the list but forgetting to delete it from the map. This leaves a map entry pointing to a detached node.
Another bug is updating the value of an existing key without moving the node to the front. In LRU semantics, both reads and writes usually count as use.
A third bug is evicting tail or head instead of the real node adjacent to a sentinel. The victim is tail.prev, not tail.
A fourth bug is inserting the same node twice without removing it first. This usually corrupts the list.
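The fourth bug is easy to see concretely: calling insert_front on a node that is already at the front makes the node point at itself, cutting tail out of the list. A sketch using the primitives from above:

```python
class Node:
    def __init__(self, key=None):
        self.key = key
        self.prev = None
        self.next = None

head, tail, n = Node(), Node(), Node(1)
head.next, tail.prev = tail, head  # empty list

def insert_front(node):
    first = head.next
    node.prev = head
    node.next = first
    head.next = node
    first.prev = node

insert_front(n)  # correct: head <-> 1 <-> tail
insert_front(n)  # bug: inserted again without remove()
# n now points at itself in both directions;
# tail is unreachable by walking from head.
```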
Key Insight
An LRU cache works because it separates lookup from ordering. The hash map gives direct access to nodes by key. The doubly linked list maintains recency order and supports constant-time movement and eviction. The two structures must be updated together as one logical operation.