# Bisection Method

The bisection method finds an approximate root of a continuous function on an interval. It applies when the function has opposite signs at the two endpoints.

If:

$$
f(l) \cdot f(r) \le 0
$$

and $f$ is continuous, then a root exists somewhere in $[l, r]$.

## Problem

Given a continuous function $f$ and an interval $[l, r]$, find a value $x$ such that:

$$
f(x) = 0
$$

In practice, the algorithm returns an approximation whose error is bounded by a chosen tolerance $\varepsilon$.

## Key Idea

Use the intermediate value theorem.

At each step:

1. Compute the midpoint $m$
2. Evaluate $f(m)$
3. Keep the half interval where the sign change still occurs

This is binary search over a continuous domain.

## Algorithm

```id="bisect-1"
bisection_method(f, l, r, eps):
    # an endpoint that is already an exact root is returned immediately
    if f(l) == 0:
        return l
    if f(r) == 0:
        return r

    # shrink the interval until its length is at most eps
    while r - l > eps:
        m = (l + r) / 2

        if f(m) == 0:
            return m

        # keep the half where the sign change occurs
        if f(l) * f(m) <= 0:
            r = m
        else:
            l = m

    return (l + r) / 2
```

## Example

Find a root of:

$$
f(x) = x^2 - 2
$$

on interval $[1, 2]$.

Since:

$$
f(1) = -1
$$

and

$$
f(2) = 2
$$

there is a root in the interval. Repeated bisection converges to:

$$
\sqrt{2}
$$
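The first few steps of this example can be traced with a minimal Python loop (an illustration only; the full implementation appears later in the document):

```python
# Trace four bisection steps for f(x) = x^2 - 2 on [1, 2].
def f(x):
    return x * x - 2

l, r = 1.0, 2.0
for _ in range(4):
    m = (l + r) / 2
    if f(l) * f(m) <= 0:  # sign change in the left half
        r = m
    else:
        l = m
    print(f"[{l}, {r}]")
# prints [1.0, 1.5], [1.25, 1.5], [1.375, 1.5], [1.375, 1.4375]
```

Each printed interval still contains $\sqrt{2} \approx 1.41421$, and its length halves every step.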

## Correctness

The algorithm maintains the invariant that the current interval contains a root.

Initially, the signs at $l$ and $r$ are opposite or one endpoint is already a root. By continuity, a root exists inside the interval.

At each step, the midpoint divides the interval into two halves. If $f(l)$ and $f(m)$ have opposite signs, then the root lies in $[l, m]$. Otherwise, it lies in $[m, r]$.

The chosen half still satisfies the sign-change condition, so the invariant is preserved.

When the interval length is at most $\varepsilon$, any point inside it approximates a root within that tolerance.
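The invariant can be checked directly by instrumenting the loop with assertions. This is a sketch; `bisect_checked` is a hypothetical name, not part of the implementation below:

```python
def bisect_checked(f, l, r, eps=1e-9):
    # the initial interval must bracket a root
    assert f(l) * f(r) <= 0
    while r - l > eps:
        # invariant: the sign change is still inside [l, r]
        assert f(l) * f(r) <= 0
        m = (l + r) / 2
        if f(l) * f(m) <= 0:
            r = m
        else:
            l = m
    return (l + r) / 2
```

If any assertion fired, the chosen half would have lost the root; it never does, which is exactly the correctness argument above.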

## Complexity

Let the initial interval length be:

$$
L = r - l
$$

After $k$ iterations, the interval length is:

$$
\frac{L}{2^k}
$$

To make this at most $\varepsilon$:

$$
k \ge \log_2 \frac{L}{\varepsilon}
$$

| cost                 |                        value |
| -------------------- | ---------------------------: |
| iterations           | $O(\log((r-l)/\varepsilon))$ |
| function evaluations | $O(\log((r-l)/\varepsilon))$ |
| space                |                       $O(1)$ |
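The bound is easy to evaluate numerically; the helper below (a hypothetical name for illustration) computes the smallest such $k$:

```python
import math

def bisection_iterations(l, r, eps):
    # smallest k with (r - l) / 2**k <= eps
    return math.ceil(math.log2((r - l) / eps))

print(bisection_iterations(1.0, 2.0, 1e-9))  # 30
```

So reducing a unit interval to a tolerance of $10^{-9}$ takes only about 30 iterations.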

## When to Use

Use the bisection method when:

* the function is continuous
* the interval endpoints have opposite signs
* guaranteed convergence matters more than speed
* derivative information is unavailable or unreliable

## Comparison

| method        | requires derivative | convergence                 |
| ------------- | ------------------- | --------------------------- |
| bisection     | no                  | guaranteed with sign change |
| Newton method | yes                 | fast but may fail           |
| secant method | no                  | faster but less robust      |
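The "fast but may fail" entry can be seen concretely on $f(x) = x^{1/3}$: Newton's update $x - f(x)/f'(x)$ simplifies to $-2x$, so the iterates diverge, while bisection still brackets the root at $0$. A sketch under those assumptions:

```python
import math

def f(x):
    # signed cube root; the only root is at x = 0
    return math.copysign(abs(x) ** (1 / 3), x)

# Newton's method: x - f(x)/f'(x) = x - 3x = -2x, so |x| doubles each step.
x = 0.5
for _ in range(5):
    dfx = (1 / 3) * abs(x) ** (-2 / 3)
    x = x - f(x) / dfx
# |x| has grown to about 16: Newton diverged

# Bisection on [-1, 2] keeps the sign change and closes in on 0.
l, r = -1.0, 2.0
while r - l > 1e-9:
    m = (l + r) / 2
    if f(l) * f(m) <= 0:
        r = m
    else:
        l = m
# (l + r) / 2 is now within the tolerance of the root at 0
```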

## Notes

* Bisection is robust but slower than Newton-style methods.
* It finds one root in the interval, not necessarily all roots.
* If the function touches zero without changing sign, bisection may not detect it from endpoint signs.
* Floating point comparisons to zero should usually use a tolerance.
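Related to the last note: the product $f(l) \cdot f(m)$ itself can overflow to infinity or underflow to zero for extreme function values, silently breaking the sign test. Comparing signs directly sidesteps this; `same_sign` below is a hypothetical helper, not part of the implementations that follow:

```python
import math

def same_sign(a, b):
    # compare signs without forming the product a * b
    return math.copysign(1.0, a) == math.copysign(1.0, b)

# The product test can underflow: both values are positive, yet
print(1e-200 * 1e-200)            # 0.0, so "product <= 0" would wrongly pass
print(same_sign(1e-200, 1e-200))  # True
```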

## Implementation

```id="py-bisect"
def bisection_method(f, l, r, eps=1e-9):
    fl = f(l)
    fr = f(r)

    if fl == 0:
        return l
    if fr == 0:
        return r
    if fl * fr > 0:
        raise ValueError("f(l) and f(r) must have opposite signs")

    while r - l > eps:
        m = (l + r) / 2
        fm = f(m)

        if fm == 0:
            return m

        if fl * fm <= 0:
            r = m
            fr = fm
        else:
            l = m
            fl = fm

    return (l + r) / 2
```

```id="go-bisect"
package bisection

import "errors"

func BisectionMethod(f func(float64) float64, l, r, eps float64) (float64, error) {
    fl := f(l)
    fr := f(r)

    if fl == 0 {
        return l, nil
    }
    if fr == 0 {
        return r, nil
    }
    if fl*fr > 0 {
        return 0, errors.New("f(l) and f(r) must have opposite signs")
    }

    for r-l > eps {
        m := (l + r) / 2
        fm := f(m)

        if fm == 0 {
            return m, nil
        }

        if fl*fm <= 0 {
            r = m
            fr = fm
        } else {
            l = m
            fl = fm
        }
    }

    return (l + r) / 2, nil
}
```

