# 1.17 Amortized Analysis

Amortized analysis studies the average cost of operations over a sequence, even when individual operations are sometimes expensive. It is used when a data structure has rare costly steps that are paid for by many cheap steps.

The guarantee is different from average-case analysis. Amortized analysis does not require random input or a probability distribution. It gives a worst-case bound over any valid sequence of operations.

## Problem

You have an operation that is usually cheap but occasionally expensive, and you need a meaningful complexity bound for a long sequence of operations.

## Solution

Analyze the total cost of a sequence, then divide by the number of operations.

Use this structure:

```text
1. Define the operation sequence.
2. Bound the total cost over m operations.
3. Divide by m to obtain the amortized cost per operation.
```

If the total cost of `m` operations is `O(m)`, then the amortized cost per operation is `O(1)`.

## Example: Dynamic Array Append

A dynamic array stores elements in a fixed-size buffer. When the buffer becomes full, it allocates a larger buffer and copies all existing elements.

```text
append(x):
  if size == capacity:
    allocate a new array with capacity = 2 * capacity
    copy old elements into the new array
    A = the new array

  A[size] = x
  size = size + 1
```

Most appends cost `O(1)`. An append that triggers resizing costs `O(n)` because it copies existing elements.

The amortized cost is still `O(1)`.
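For concreteness, here is a minimal runnable version in Python. It is a sketch of the idea above, not a production container; Python's built-in `list` already resizes itself, so the buffer and capacity are managed explicitly here to make the copying step visible.

```python
class DynamicArray:
    """Append-only array that doubles its buffer whenever it fills up."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.buffer = [None] * self.capacity

    def append(self, x):
        if self.size == self.capacity:
            # Expensive step: allocate a larger buffer and copy every element.
            self.capacity *= 2
            new_buffer = [None] * self.capacity
            for i in range(self.size):
                new_buffer[i] = self.buffer[i]
            self.buffer = new_buffer

        # Cheap step: write into the next free slot.
        self.buffer[self.size] = x
        self.size += 1
```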

## Aggregate Analysis

Assume the array starts with capacity 1 and doubles whenever full.

During `m` appends, copying happens at sizes:

```text
1, 2, 4, 8, ..., up to the largest power of two below m
```

The total number of copied elements is:

```text
1 + 2 + 4 + 8 + ... < 2m
```

Each append also performs one ordinary insertion, so the total cost is `O(m)`.

Therefore, the amortized cost per append is:

```text
O(1)
```
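The bound is easy to check empirically. The sketch below counts one unit per element written or copied (a cost model assumed here for illustration) and confirms that `m` appends cost fewer than `3m` units: `m` writes plus fewer than `2m` copies.

```python
def total_append_cost(m):
    """Units of work (element writes plus copies) over m appends."""
    capacity, size, cost = 1, 0, 0
    for _ in range(m):
        if size == capacity:
            cost += size        # copy every existing element
            capacity *= 2
        cost += 1               # write the new element
        size += 1
    return cost

for m in (1, 10, 1_000, 1_000_000):
    assert total_append_cost(m) < 3 * m
```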

## Accounting Method

The accounting method assigns each operation an artificial charge. Cheap operations save credit. Expensive operations spend saved credit.

For dynamic array append, charge each append 3 units:

```text
1 unit pays for the immediate insertion.
2 units are saved as credit for future copying.
```

When resizing occurs, the saved credits from previous insertions pay for copying old elements. When the array doubles from `n` slots to `2n`, the `n/2` appends since the previous resize have each saved 2 units, which is exactly the `n` units needed to copy all `n` existing elements.

This proves constant amortized cost.
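The credit argument can also be simulated directly. The sketch below (an illustration of the accounting above, again counting 1 unit per element written or copied) deposits 3 units on every append and checks that the balance never goes negative:

```python
def check_credit(m):
    """Simulate m appends, charging 3 units each and spending 1 per element moved."""
    capacity, size, credit = 1, 0, 0
    for _ in range(m):
        credit += 3             # artificial charge for this append
        if size == capacity:
            credit -= size      # spend saved credit on copying old elements
            capacity *= 2
        credit -= 1             # pay for writing the new element
        size += 1
        assert credit >= 0      # the saved credit always covers the resize

check_credit(100_000)
```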

## Potential Method

The potential method assigns stored "energy" to the data structure state. The amortized cost of an operation is:

```text
actual cost + change in potential
```

For a dynamic array, a common potential function is based on how full the array is. When the array is far from full, potential is low. As it approaches capacity, potential increases. The expensive resize spends that potential.

The potential method is useful for more complex data structures because it gives a precise algebraic way to track saved work.
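For the doubling array, one standard choice (assumed here for illustration) is `Φ = 2 * size - capacity`: every cheap append raises the potential by 2, and a resize releases exactly the potential needed to pay for the copies. The sketch below checks that actual cost plus change in potential never exceeds 3 per append.

```python
def phi(size, capacity):
    """Potential function: grows as the array fills, drops when it doubles."""
    return 2 * size - capacity

def check_potential(m):
    """Verify amortized cost = actual cost + change in potential <= 3."""
    capacity, size = 1, 0
    for _ in range(m):
        before = phi(size, capacity)
        cost = 1                    # writing the new element
        if size == capacity:
            cost += size            # copying the existing elements
            capacity *= 2
        size += 1
        assert cost + phi(size, capacity) - before <= 3

check_potential(100_000)
```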

## Amortized vs Worst-Case

| Bound Type               | Meaning                                                 |
| ------------------------ | ------------------------------------------------------- |
| Worst-case per operation | Every single operation costs at most this much          |
| Amortized per operation  | Any sequence of operations averages at most this much per operation |
| Average-case             | Expected cost under a probability distribution          |

A dynamic array append has:

```text
Worst-case per append: O(n)
Amortized per append: O(1)
```

Both statements are true. They answer different questions.

## Common Examples

| Structure or Algorithm   | Amortized Behavior                  |
| ------------------------ | ----------------------------------- |
| Dynamic array append     | `O(1)` amortized                    |
| Stack with multipop      | `O(1)` amortized per operation      |
| Binary counter increment | `O(1)` amortized bit flips          |
| Union-find               | Nearly constant amortized time      |
| Splay tree               | `O(log n)` amortized operation cost |
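The stack-with-multipop entry is worth a closer look, since a single `multipop` can empty the whole stack. A minimal sketch (an illustration, not taken from any library): each element is pushed once and popped at most once, so any sequence of `m` operations moves at most `2m` elements in total, giving `O(1)` amortized per operation.

```python
class MultipopStack:
    """Stack with a multipop(k) operation that pops up to k elements."""

    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)    # one element move

    def multipop(self, k):
        # Pops min(k, size) elements. An element can only be popped after
        # being pushed, so total pops never exceed total pushes.
        popped = []
        while self.items and len(popped) < k:
            popped.append(self.items.pop())
        return popped
```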

## Example: Binary Counter

A binary counter increments by flipping trailing `1` bits to `0`, then flipping the first `0` bit to `1`.

Some increments flip many bits. But over many increments, bit `0` flips every time, bit `1` flips every 2 increments, bit `2` flips every 4 increments, and so on.

Over `m` increments, total bit flips are bounded by:

```text
m + m/2 + m/4 + m/8 + ... < 2m
```

So the amortized number of bit flips per increment is:

```text
O(1)
```
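The same bound can be confirmed by counting flips directly. In the sketch below, the counter is a list of bits with index 0 as the least significant bit (a representation chosen for illustration):

```python
def count_flips(m):
    """Total bit flips performed by m increments of a binary counter."""
    bits = [0] * (m.bit_length() + 1)   # enough bits to hold the value m
    flips = 0
    for _ in range(m):
        i = 0
        while bits[i] == 1:             # flip trailing 1 bits to 0
            bits[i] = 0
            flips += 1
            i += 1
        bits[i] = 1                     # flip the first 0 bit to 1
        flips += 1
    return flips

for m in (1, 10, 1_000, 100_000):
    assert count_flips(m) < 2 * m
```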

## Common Pitfalls

Do not confuse amortized cost with average cost over random inputs. Amortized analysis is deterministic.

Do not claim every operation is cheap. Some individual operations may still be expensive.

Do not use amortized bounds when the latency of a single operation matters. A real-time system may need worst-case guarantees instead.

Do not forget to define the sequence of operations. Amortized analysis only makes sense over sequences.

## Takeaway

Amortized analysis explains why rare expensive operations can still give efficient long-run behavior. Instead of bounding each operation separately, it bounds the total cost of a sequence and derives a per-operation guarantee.

