
LeetCode 146: LRU Cache

Design an LRU cache with O(1) get and put operations using a hash map and doubly linked list.

Problem Restatement

Design a data structure that supports:

Operation        Meaning
get(key)         Return the value if the key exists, otherwise -1
put(key, value)  Insert or update the key-value pair

The cache has a fixed capacity.

When the cache exceeds capacity, remove the least recently used item.

Both operations must run in O(1) average time.

LeetCode defines the least recently used item as the entry that has not been accessed or updated for the longest time. (leetcode.com)

Example

cache = LRUCache(2)

cache.put(1, 1)
cache.put(2, 2)

cache.get(1)

Returns 1 because key 1 exists.

Now key 1 becomes the most recently used item.

Then:

cache.put(3, 3)

The cache is full.

The least recently used key is 2, so it is removed.

Now:

cache.get(2)

returns -1 because key 2 was evicted.

Required Operations

We need two capabilities:

Need                                            Desired complexity
Find a key quickly                              O(1)
Remove and reorder recently used items quickly  O(1)

A single structure alone cannot efficiently do both.

First Thought: Use a Hash Map

A hash map gives O(1) lookup.

Example:

cache[key] -> value

But a hash map does not maintain usage order.

We do not know which key is the least recently used.
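A quick sketch of the limitation, using a plain Python dict (which, in Python 3.7+, preserves insertion order but not access order):

```python
cache = {}
cache[1] = "a"
cache[2] = "b"

_ = cache[1]  # O(1) lookup, but the access leaves no trace in the ordering

# dict remembers insertion order (Python 3.7+), not usage order,
# so nothing records that key 1 was just used
assert list(cache) == [1, 2]
```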

Second Thought: Use a List

We could store usage order in a list:

least recent -> most recent

Whenever a key is accessed:

  1. remove it from the list
  2. append it to the end

But removing an arbitrary item from a normal list costs O(n), because elements must shift.

That violates the requirement.
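For illustration, the list-based approach in Python. It produces the right ordering, but `list.remove` scans and shifts elements, so each access costs O(n):

```python
# usage order, least recent first
order = [1, 2, 3, 4]

# key 3 is accessed: remove it from its position, append at the end
order.remove(3)   # linear scan plus shifting -> O(n)
order.append(3)

assert order == [1, 2, 4, 3]
```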

Key Insight

Combine:

Structure           Purpose
Hash map            Fast key lookup
Doubly linked list  Fast insertion/removal and usage ordering

The linked list stores nodes ordered by recency.

We maintain:

head <-> ... <-> tail

where:

Side       Meaning
Near head  Least recently used
Near tail  Most recently used

Whenever a key is used:

  1. remove its node from the list
  2. move it to the tail

Whenever capacity is exceeded:

  1. remove the node next to the head
  2. delete its key from the hash map

Why Doubly Linked List Matters

With a singly linked list, removing a node requires a pointer to its predecessor, and finding that predecessor takes a traversal from the head.

That breaks the O(1) requirement.

A doubly linked list allows:

node.prev
node.next

So removal is direct.
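A minimal sketch of why the prev pointer matters, using a hypothetical three-node list (the Node class here mirrors the one in the implementation below):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

# build A <-> B <-> C
a, b, c = Node("A"), Node("B"), Node("C")
a.next, b.prev = b, a
b.next, c.prev = c, b

# with both pointers, B unlinks itself with no traversal at all
b.prev.next = b.next
b.next.prev = b.prev

assert a.next is c and c.prev is a
```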

Dummy Head and Tail

Use dummy nodes:

head <-> real nodes <-> tail

Advantages:

  1. No empty-list edge cases
  2. No special handling for inserting at front or back
  3. Simpler pointer logic

The real least recently used node is:

head.next

The real most recently used node is:

tail.prev
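A sketch of the dummy-node setup (the same shape as the constructor in the implementation below). Even when the cache is empty, head.next and tail.prev are always valid nodes, so no None checks are needed:

```python
class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

head = Node()   # dummy: nothing ever sits before it
tail = Node()   # dummy: nothing ever sits after it
head.next = tail
tail.prev = head

# empty cache: no real nodes, yet both boundary pointers are usable
assert head.next is tail
assert tail.prev is head
```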

Operations

Remove a Node

Suppose:

A <-> node <-> B

We reconnect:

A <-> B

Implementation:

prev_node.next = next_node
next_node.prev = prev_node

Insert at Tail

Suppose:

... <-> last <-> tail

Insert:

... <-> last <-> node <-> tail

This marks the node as most recently used.

get(key)

If the key does not exist:

return -1

Otherwise:

  1. move the node to the tail
  2. return its value

Moving to the tail marks it as recently used.

put(key, value)

If the key already exists:

  1. update its value
  2. move it to the tail

Otherwise:

  1. create a new node
  2. insert it at the tail
  3. add it to the hash map

If capacity is exceeded:

  1. remove the least recently used node
  2. delete its key from the map

Correctness

The hash map always maps each key to its corresponding linked-list node.

The doubly linked list always stores nodes in recency order:

least recent -> most recent

Whenever a key is accessed through get or updated through put, its node is removed from its current position and inserted at the tail. Therefore, the tail side always contains the most recently used items.

Whenever capacity is exceeded, the node immediately after the dummy head is removed. Since this node is the least recently used item in the ordering, eviction is correct.

All pointer updates and hash map operations take constant time, so both get and put run in O(1) average time.

Therefore, the data structure correctly implements an LRU cache.
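As a sanity check on this argument, the same recency discipline can be expressed with Python's collections.OrderedDict (a standard-library alternative, not the article's hand-rolled list). Running the worked example through it reproduces the eviction order claimed above:

```python
from collections import OrderedDict

class RefLRU:
    """Reference model: move_to_end gives the same recency ordering."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.d = OrderedDict()

    def get(self, key):
        if key not in self.d:
            return -1
        self.d.move_to_end(key)          # mark as most recently used
        return self.d[key]

    def put(self, key, value):
        if key in self.d:
            self.d.move_to_end(key)
        self.d[key] = value
        if len(self.d) > self.capacity:
            self.d.popitem(last=False)   # evict least recently used

ref = RefLRU(2)
ref.put(1, 1)
ref.put(2, 2)
assert ref.get(1) == 1   # key 1 becomes most recently used
ref.put(3, 3)            # capacity exceeded: key 2 is evicted
assert ref.get(2) == -1
```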

Complexity

Let n be the cache capacity.

Operation  Time  Why
get        O(1)  Hash lookup and linked-list updates
put        O(1)  Hash updates and linked-list updates

Space complexity:

Metric  Value
Space   O(n)

The cache stores at most n nodes.

Implementation

class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}

        self.head = Node()
        self.tail = Node()

        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node: Node) -> None:
        prev_node = node.prev
        next_node = node.next

        prev_node.next = next_node
        next_node.prev = prev_node

    def _insert_at_tail(self, node: Node) -> None:
        prev_tail = self.tail.prev

        prev_tail.next = node
        node.prev = prev_tail

        node.next = self.tail
        self.tail.prev = node

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1

        node = self.cache[key]

        self._remove(node)
        self._insert_at_tail(node)

        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            node = self.cache[key]
            node.value = value

            self._remove(node)
            self._insert_at_tail(node)

            return

        node = Node(key, value)

        self.cache[key] = node
        self._insert_at_tail(node)

        if len(self.cache) > self.capacity:
            # evict the least recently used node: the one just after the dummy head
            lru = self.head.next

            self._remove(lru)
            del self.cache[lru.key]

Code Explanation

The node stores:

key
value
prev
next

The cache dictionary maps:

key -> node

The dummy nodes simplify operations:

self.head = Node()
self.tail = Node()

The helper:

_remove(node)

disconnects a node from the list.

The helper:

_insert_at_tail(node)

marks a node as most recently used.

In get:

self._remove(node)
self._insert_at_tail(node)

updates usage order.

In put:

  • existing keys are updated and moved
  • new keys are inserted

If capacity is exceeded:

lru = self.head.next

identifies the least recently used node.

Then:

del self.cache[lru.key]

removes it from the hash map.

Testing

def run_tests():
    cache = LRUCache(2)

    cache.put(1, 1)
    cache.put(2, 2)

    assert cache.get(1) == 1

    cache.put(3, 3)

    assert cache.get(2) == -1

    cache.put(4, 4)

    assert cache.get(1) == -1
    assert cache.get(3) == 3
    assert cache.get(4) == 4

    cache = LRUCache(1)

    cache.put(1, 1)
    assert cache.get(1) == 1

    cache.put(2, 2)

    assert cache.get(1) == -1
    assert cache.get(2) == 2

    cache = LRUCache(2)

    cache.put(2, 1)
    cache.put(2, 2)

    assert cache.get(2) == 2

    print("all tests passed")

run_tests()

Test meaning:

Test                       Why
Standard LeetCode example  Confirms correct eviction order
Capacity 1                 Smallest nontrivial cache
Updating existing key      Ensures overwrite works correctly
Repeated access            Confirms recency updates