# Amortized Analysis

Amortized analysis studies the total cost of a sequence of operations and divides it across the sequence. For dynamic arrays, a single append may be expensive when it triggers resizing, but many appends remain cheap on average.

You use it when occasional costly operations are balanced by many cheap operations.

## Problem

Given a dynamic array that doubles capacity when full, show that appending $n$ elements takes total time:

$$
O(n)
$$

Therefore, the amortized cost per append is:

$$
O(1)
$$

## Structure

A dynamic array maintains:

* size: number of stored elements
* capacity: number of allocated slots
* data: contiguous backing storage

When the array is full, capacity doubles:

$$
\text{capacity}' = 2 \cdot \text{capacity}
$$
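To make the growth rule concrete, this short sketch lists the capacities visited while appending $n$ elements (`capacities` is an illustrative helper, not part of the implementations later in this article):

```python
def capacities(n):
    """Capacities visited while appending n elements, starting at 1."""
    caps = [1]
    size, capacity = 0, 1
    for _ in range(n):
        if size == capacity:    # full: double before writing
            capacity *= 2
            caps.append(capacity)
        size += 1
    return caps

print(capacities(8))   # [1, 2, 4, 8]
```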

## Algorithm

Append with doubling:

```text id="amortized-append"
append(A, x):
    if size == capacity:
        B = allocate(2 * capacity)
        copy all elements from A into B
        A = B
        capacity = 2 * capacity

    A[size] = x
    size += 1
```

## Example

Start with capacity $1$ and append $8$ elements.

| append | capacity before | resize cost | write cost | total cost |
| -----: | --------------: | ----------: | ---------: | ---------: |
|      1 |               1 |           0 |          1 |          1 |
|      2 |               1 |           1 |          1 |          2 |
|      3 |               2 |           2 |          1 |          3 |
|      4 |               4 |           0 |          1 |          1 |
|      5 |               4 |           4 |          1 |          5 |
|      6 |               8 |           0 |          1 |          1 |
|      7 |               8 |           0 |          1 |          1 |
|      8 |               8 |           0 |          1 |          1 |

Total cost:

$$
1 + 2 + 3 + 1 + 5 + 1 + 1 + 1 = 15
$$

The average cost per append is:

$$
\frac{15}{8} < 2
$$

which is constant, even though the single most expensive append cost $5$.
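The table's totals can be reproduced in a few lines. Here `append_costs` is an illustrative helper that charges each append one write plus any copies triggered by a resize:

```python
def append_costs(n, capacity=1):
    """Per-append cost: copies during a resize plus one write."""
    costs = []
    size = 0
    for _ in range(n):
        resize = 0
        if size == capacity:
            resize = size          # copy every stored element
            capacity *= 2
        costs.append(resize + 1)   # resize copies + one write
        size += 1
    return costs

print(append_costs(8))        # [1, 2, 3, 1, 5, 1, 1, 1], matching the table
print(sum(append_costs(8)))   # 15
```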

## Correctness

The resizing rule preserves all existing elements by copying them into the new backing array in order. After the resize, capacity exceeds size, so the new element can be written safely.

The amortized bound follows because elements are copied only when the capacity doubles. Across $n$ appends, each copy cost is a power of two, and the largest copy cost $2^k$ is less than $n$, so the copy costs form a geometric series:

$$
1 + 2 + 4 + \dots + 2^k < 2 \cdot 2^k < 2n
$$

Adding the $n$ direct writes gives total cost less than:

$$
3n
$$

Thus the total cost is linear, and the amortized cost per append is constant.
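As a sanity check on the $3n$ bound, the argument can be simulated directly; `total_cost` is a hypothetical helper that sums the $n$ writes and the copy costs at each doubling:

```python
def total_cost(n):
    """Total cost of n appends: n writes plus copies at each doubling."""
    cost, size, capacity = 0, 0, 1
    for _ in range(n):
        if size == capacity:
            cost += size       # copy costs: 1, 2, 4, ...
            capacity *= 2
        cost += 1              # one write per append
        size += 1
    return cost

for n in (1, 10, 100, 10_000):
    print(n, total_cost(n), 3 * n)   # total always stays below 3n
```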

## Complexity

| operation        |   time |
| ---------------- | -----: |
| ordinary append  | $O(1)$ |
| resizing append  | $O(n)$ |
| amortized append | $O(1)$ |
| $n$ appends      | $O(n)$ |

Space usage is:

$$
O(n)
$$

Once at least one element is stored, the allocated capacity is at most twice the number of stored elements, so space stays within a constant factor of $n$.
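One way to see the constant-factor claim is to simulate the doubling rule and check capacity against size after every append (a sketch, separate from the implementations below):

```python
# After every append, capacity stays within 2x of size.
size, capacity = 0, 1
for _ in range(1000):
    if size == capacity:
        capacity *= 2
    size += 1
    assert capacity <= 2 * size

print(capacity, size)   # 1024 slots for 1000 elements
```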

## When to Use

Amortized analysis is appropriate when:

* a data structure has occasional expensive maintenance steps
* total cost over many operations matters more than one operation
* resizing, rebuilding, or rebalancing happens predictably

It is less suitable when every single operation must meet a strict worst-case latency bound.

## Implementation

```python id="amortized-analysis-python"
class DynamicArray:
    def __init__(self):
        self.a = [None]  # backing storage, initial capacity 1
        self.size = 0    # number of stored elements

    def append(self, x):
        # Full: allocate a backing array of double the capacity
        # and copy every stored element into it.
        if self.size == len(self.a):
            b = [None] * (2 * len(self.a))
            for i in range(self.size):
                b[i] = self.a[i]
            self.a = b

        self.a[self.size] = x
        self.size += 1
```
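A quick exercise of the class confirms the expected growth; the class is repeated here so the snippet runs standalone:

```python
# The DynamicArray from above, repeated for a self-contained demo.
class DynamicArray:
    def __init__(self):
        self.a = [None]
        self.size = 0

    def append(self, x):
        if self.size == len(self.a):
            b = [None] * (2 * len(self.a))
            for i in range(self.size):
                b[i] = self.a[i]
            self.a = b
        self.a[self.size] = x
        self.size += 1

d = DynamicArray()
for i in range(8):
    d.append(i)

print(d.size)     # 8 elements stored
print(len(d.a))   # capacity grew 1 -> 2 -> 4 -> 8
```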

```go id="amortized-analysis-go"
// DynamicArray stores elements contiguously and doubles
// its capacity whenever it runs out of room.
type DynamicArray[T any] struct {
    data []T
    size int
}

// NewDynamicArray returns an empty array with capacity 1.
func NewDynamicArray[T any]() *DynamicArray[T] {
    return &DynamicArray[T]{
        data: make([]T, 1),
    }
}

// Append writes x into the next free slot, doubling
// the backing slice first if the array is full.
func (d *DynamicArray[T]) Append(x T) {
    if d.size == len(d.data) {
        next := make([]T, 2*len(d.data))
        copy(next, d.data)
        d.data = next
    }

    d.data[d.size] = x
    d.size++
}
```

