Amortized Analysis

Analyze the average cost per operation over a sequence of dynamic array operations.

Amortized analysis studies the total cost of a sequence of operations and divides it across the sequence. For dynamic arrays, a single append may be expensive when it triggers resizing, but many appends remain cheap on average.

You use it when occasional costly operations are balanced by many cheap operations.

Problem

Given a dynamic array that doubles capacity when full, show that appending n elements takes total time:

O(n)

Therefore, the amortized cost per append is:

O(1)

Structure

A dynamic array maintains:

  • size: number of stored elements
  • capacity: number of allocated slots
  • data: contiguous backing storage

When the array is full, capacity doubles:

capacity' = 2 · capacity

Algorithm

Append with doubling:

append(A, x):
    if size == capacity:
        B = allocate(2 * capacity)
        copy all elements from A into B
        A = B
        capacity = 2 * capacity

    A[size] = x
    size += 1

Example

Start with capacity 1 and append 8 elements.

| append | capacity before | resize cost | write cost | total cost |
|--------|-----------------|-------------|------------|------------|
| 1      | 1               | 0           | 1          | 1          |
| 2      | 1               | 1           | 1          | 2          |
| 3      | 2               | 2           | 1          | 3          |
| 4      | 4               | 0           | 1          | 1          |
| 5      | 4               | 4           | 1          | 5          |
| 6      | 8               | 0           | 1          | 1          |
| 7      | 8               | 0           | 1          | 1          |
| 8      | 8               | 0           | 1          | 1          |

Total cost:

1 + 2 + 3 + 1 + 5 + 1 + 1 + 1 = 15

The average cost is:

15/8

which is less than 2 and therefore constant.
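The cost table above can be reproduced with a short simulation. This is a sketch: `append_costs` is a hypothetical helper written for this example, charging one unit per copied element and one unit per write.

```python
# Simulate n appends into an array that starts at the given capacity
# and doubles when full, charging one unit per element copied during
# a resize and one unit per direct write.
def append_costs(n, capacity=1):
    costs = []
    size = 0
    for _ in range(n):
        cost = 0
        if size == capacity:
            cost += size       # copy every stored element
            capacity *= 2
        cost += 1              # write the new element
        size += 1
        costs.append(cost)
    return costs

costs = append_costs(8)
print(costs)       # [1, 2, 3, 1, 5, 1, 1, 1]
print(sum(costs))  # 15
```

The per-append costs match the table row by row, and the total matches the 15 computed above.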

Correctness

The resizing rule preserves all existing elements by copying them into the new backing array in order. After the resize, capacity exceeds size, so the new element can be written safely.

The amortized bound follows because elements are copied only when capacity doubles. Across n appends, the copy costs form a geometric series:

1 + 2 + 4 + 8 + ... < 2n

Adding the n direct writes gives total cost less than:

3n

Thus the total cost is linear, and the amortized cost per append is constant.
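The 3n bound can be checked numerically with the same cost model. A sketch, where `total_cost` is a hypothetical helper written for this check:

```python
# Total cost of n appends under the doubling cost model:
# geometric copy costs plus one unit per direct write.
def total_cost(n):
    size, capacity, total = 0, 1, 0
    for _ in range(n):
        if size == capacity:
            total += size   # copy cost: contributes 1 + 2 + 4 + ...
            capacity *= 2
        total += 1          # direct write
        size += 1
    return total

for n in [1, 10, 100, 10_000]:
    assert total_cost(n) < 3 * n  # total stays below 3n for every n tried
```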

Complexity

| operation        | time |
|------------------|------|
| ordinary append  | O(1) |
| resizing append  | O(n) |
| amortized append | O(1) |
| n appends        | O(n) |

Space usage is:

O(n) O(n)

The allocated capacity is at most a constant factor larger than the number of stored elements.
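As an aside, CPython's built-in list uses the same over-allocation idea, though with a growth factor smaller than 2. A rough way to observe it (a sketch; the exact allocation sizes are implementation-dependent):

```python
import sys

# Append 1000 elements and record each distinct allocated size.
# The backing buffer is reallocated only occasionally, so the number
# of distinct sizes is far smaller than the number of appends.
a = []
sizes = set()
for _ in range(1000):
    a.append(0)
    sizes.add(sys.getsizeof(a))
print(len(sizes))  # a few dozen distinct allocations, not 1000
```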

When to Use

Amortized analysis is appropriate when:

  • a data structure has occasional expensive maintenance steps
  • total cost over many operations matters more than one operation
  • resizing, rebuilding, or rebalancing happens predictably

It is less suitable when every single operation must meet a strict worst-case latency bound.

Implementation

In Python:

class DynamicArray:
    def __init__(self):
        self.a = [None]  # backing storage; capacity is len(self.a)
        self.size = 0    # number of stored elements

    def append(self, x):
        if self.size == len(self.a):
            # Full: allocate double the capacity and copy all elements over.
            b = [None] * (2 * len(self.a))
            for i in range(self.size):
                b[i] = self.a[i]
            self.a = b

        self.a[self.size] = x
        self.size += 1
The same structure in Go:

type DynamicArray[T any] struct {
    data []T // backing storage; capacity is len(data)
    size int // number of stored elements
}

func NewDynamicArray[T any]() *DynamicArray[T] {
    return &DynamicArray[T]{
        data: make([]T, 1),
    }
}

func (d *DynamicArray[T]) Append(x T) {
    if d.size == len(d.data) {
        // Full: allocate double the capacity and copy all elements over.
        next := make([]T, 2*len(d.data))
        copy(next, d.data)
        d.data = next
    }

    d.data[d.size] = x
    d.size++
}
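A quick sanity check of the doubling behavior, using the Python class as defined above (repeated here so the snippet runs on its own):

```python
# Same DynamicArray as in the Implementation section, repeated so this
# snippet is self-contained.
class DynamicArray:
    def __init__(self):
        self.a = [None]  # backing storage; capacity is len(self.a)
        self.size = 0

    def append(self, x):
        if self.size == len(self.a):
            b = [None] * (2 * len(self.a))
            for i in range(self.size):
                b[i] = self.a[i]
            self.a = b
        self.a[self.size] = x
        self.size += 1

d = DynamicArray()
for i in range(10):
    d.append(i)

print(d.size)        # 10
print(len(d.a))      # 16: capacity doubled at sizes 1, 2, 4, and 8
print(d.a[:d.size])  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

After 10 appends the capacity is 16, illustrating that allocated space stays within a factor of 2 of the stored elements.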