Analyze the average cost per operation across a sequence of appends to a dynamic array.
Amortized analysis studies the total cost of a sequence of operations and divides it across the sequence. For dynamic arrays, a single append may be expensive when it triggers resizing, but many appends remain cheap on average.
You use it when occasional costly operations are balanced by many cheap operations.
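Concretely, if operation $i$ has actual cost $c_i$, the amortized cost over a sequence of $n$ operations is the average:

$$\text{amortized cost} = \frac{1}{n} \sum_{i=1}^{n} c_i$$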
Problem
Given a dynamic array that doubles its capacity when full, show that appending $n$ elements takes total time

$$O(n)$$

Therefore, the amortized cost per append is

$$O(1)$$
Structure
A dynamic array maintains:
- size: number of stored elements
- capacity: number of allocated slots
- data: contiguous backing storage
When the array is full ($\text{size} = \text{capacity}$), the capacity doubles:

$$\text{capacity} \leftarrow 2 \cdot \text{capacity}$$
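As a minimal sketch of this state (the dataclass form and names are illustrative, not prescribed by the problem):

```python
from dataclasses import dataclass, field

@dataclass
class ArrayState:
    size: int = 0          # number of stored elements
    capacity: int = 1      # number of allocated slots
    data: list = field(default_factory=lambda: [None])  # contiguous backing storage
```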
Algorithm
Append with doubling:

```
append(A, x):
    if size == capacity:
        B = allocate(2 * capacity)
        copy all elements from A into B
        A = B
        capacity = 2 * capacity
    A[size] = x
    size += 1
```

Example
Start with capacity $1$ and append $8$ elements, counting one unit per element copied during a resize plus one unit for the write.
| append | capacity before | resize cost | write cost | total cost |
|---|---|---|---|---|
| 1 | 1 | 0 | 1 | 1 |
| 2 | 1 | 1 | 1 | 2 |
| 3 | 2 | 2 | 1 | 3 |
| 4 | 4 | 0 | 1 | 1 |
| 5 | 4 | 4 | 1 | 5 |
| 6 | 8 | 0 | 1 | 1 |
| 7 | 8 | 0 | 1 | 1 |
| 8 | 8 | 0 | 1 | 1 |
Total cost:

$$1 + 2 + 3 + 1 + 5 + 1 + 1 + 1 = 15$$

The average cost is:

$$\frac{15}{8} = 1.875 < 2,$$

which is constant.
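The table can be reproduced with a short simulation under the same cost model; the helper name `append_costs` is illustrative:

```python
def append_costs(n):
    """Return the cost of each of n appends: one unit per element
    copied during a resize, plus one unit for the write itself."""
    size, capacity = 0, 1
    costs = []
    for _ in range(n):
        cost = 0
        if size == capacity:
            cost += size      # copy every stored element into the new array
            capacity *= 2
        cost += 1             # write the new element
        costs.append(cost)
        size += 1
    return costs

costs = append_costs(8)
print(costs)                                # [1, 2, 3, 1, 5, 1, 1, 1]
print(sum(costs), sum(costs) / len(costs))  # 15 1.875
```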
Correctness
The resizing rule preserves all existing elements by copying them into the new backing array in order. After the resize, capacity exceeds size, so the new element can be written safely.
The amortized bound follows because elements are copied only when capacity doubles. Across $n$ appends, the copy costs form a geometric series:

$$1 + 2 + 4 + \cdots + 2^{\lfloor \log_2 n \rfloor} < 2n$$

Adding the $n$ direct writes gives total cost less than:

$$2n + n = 3n$$
Thus the total cost is linear, and the amortized cost per append is constant.
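The same simulation style gives a quick empirical check of the $3n$ bound (a sketch, not part of the proof):

```python
def total_cost(n):
    # Total units spent over n appends: copies during resizes plus writes.
    size, capacity, total = 0, 1, 0
    for _ in range(n):
        if size == capacity:
            total += size
            capacity *= 2
        total += 1
        size += 1
    return total

for n in (1, 10, 1000, 10**6):
    assert total_cost(n) < 3 * n
print("total cost stayed below 3n for all tested n")
```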
Complexity
| operation | time |
|---|---|
| ordinary append | $O(1)$ |
| resizing append | $O(n)$ |
| amortized append | $O(1)$ |
| $n$ appends | $O(n)$ |
Space usage is:

$$O(n)$$

The allocated capacity is at most a constant factor larger than the number of stored elements; under the doubling rule it never exceeds $2 \cdot \text{size}$ once the array is non-empty.
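The factor-of-two claim can also be checked by simulating the doubling rule (illustrative sketch):

```python
size, capacity = 0, 1
for _ in range(1000):
    if size == capacity:
        capacity *= 2
    size += 1
    # Allocated slots never exceed twice the stored elements.
    assert capacity <= 2 * size
print("capacity stayed within a factor of 2 of size")
```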
When to Use
Amortized analysis is appropriate when:
- a data structure has occasional expensive maintenance steps
- total cost over many operations matters more than one operation
- resizing, rebuilding, or rebalancing happens predictably
It is less suitable when every single operation must meet a strict worst-case latency bound.
Implementation
```python
class DynamicArray:
    def __init__(self):
        self.a = [None]  # backing storage with initial capacity 1
        self.size = 0    # number of stored elements

    def append(self, x):
        if self.size == len(self.a):
            # Full: allocate double the capacity and copy everything over.
            b = [None] * (2 * len(self.a))
            for i in range(self.size):
                b[i] = self.a[i]
            self.a = b
        self.a[self.size] = x
        self.size += 1
```

```go
type DynamicArray[T any] struct {
	data []T // backing storage
	size int // number of stored elements
}

func NewDynamicArray[T any]() *DynamicArray[T] {
	return &DynamicArray[T]{
		data: make([]T, 1), // initial capacity 1
	}
}

func (d *DynamicArray[T]) Append(x T) {
	if d.size == len(d.data) {
		// Full: allocate double the capacity and copy everything over.
		next := make([]T, 2*len(d.data))
		copy(next, d.data)
		d.data = next
	}
	d.data[d.size] = x
	d.size++
}
```
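For example, the Python class above can be exercised like this:

```python
arr = DynamicArray()
for i in range(10):
    arr.append(i)
print(arr.size)          # 10 stored elements
print(len(arr.a))        # 16 allocated slots: capacity doubled 1 -> 2 -> 4 -> 8 -> 16
print(arr.a[:arr.size])  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```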