1.8 Big O Notation

Big O notation provides a formal way to describe how a function grows. In algorithms, it is used to express upper bounds on time and space complexity. The notation suppresses constant factors and lower order terms so that the dominant growth rate is clear.

Problem

You have derived a function that counts operations, such as 3n^2 + 5n + 7, and you need a standard way to express its growth.

Solution

Use Big O notation to describe an upper bound.

Formal definition:

A function f(n) is O(g(n)) if there exist constants c > 0 and n0 such that
f(n) ≤ c · g(n) for all n ≥ n0.

This means that beyond some threshold, f(n) grows no faster than a constant multiple of g(n).
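
One concrete instance: take f(n) = 2n + 3 and g(n) = n. Choosing c = 3 and n0 = 3 gives

2n + 3 ≤ 3 · n for all n ≥ 3

so 2n + 3 = O(n).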

Example: Quadratic Function

Let:

f(n) = 3n^2 + 5n + 7

For sufficiently large n, the n^2 term dominates. There exist constants c and n0 such that, for all n ≥ n0:

f(n) ≤ c · n^2

So:

f(n) = O(n^2)

The linear and constant terms are dropped because they do not affect the growth rate.
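
The constants can be made concrete. Taking c = 4 and n0 = 7 (one valid choice among many), the inequality 3n^2 + 5n + 7 ≤ 4n^2 holds for all n ≥ 7. A minimal Python spot check of this claim, sampling values rather than proving the inequality:

    # Spot check of 3n^2 + 5n + 7 <= 4n^2 for n >= 7 over a sample range.
    # c = 4 and n0 = 7 are one valid choice, not the only one.
    def f(n):
        return 3 * n**2 + 5 * n + 7

    assert all(f(n) <= 4 * n**2 for n in range(7, 10_000))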

Example: Linear vs Logarithmic

Consider two functions:

f(n) = n
g(n) = log n

For large n, n grows faster than log n, so:

log n = O(n)

but not the reverse.
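
A small Python sketch makes the gap visible: the ratio log n / n shrinks toward zero as n grows (natural log here; the base only changes a constant factor, which Big O ignores):

    import math

    # (log n) / n tends to 0, so log n = O(n) but n is not O(log n).
    for n in [10, 100, 1000, 10**6]:
        print(n, math.log(n) / n)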

Common Simplifications

Apply these rules:

  • Drop constants: 5n = O(n)

  • Keep the dominant term: n^2 + n = O(n^2)

  • Drop constants in products: 3n log n = O(n log n)

  • Nested loops multiply: n · n = O(n^2) (see the sketch after this list)
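
A minimal Python sketch of the nested-loop rule; count_pairs is a hypothetical example, and its inner statement runs n · n times:

    # count_pairs is an illustrative example, not a library function.
    # Two nested loops over n items: the inner statement executes n * n
    # times, so the count grows as O(n^2).
    def count_pairs(items):
        count = 0
        for a in items:          # n iterations
            for b in items:      # n iterations per outer pass
                count += 1
        return count

    assert count_pairs(range(100)) == 100 * 100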

Tight vs Loose Bounds

Big O gives an upper bound. Sometimes you want a tighter description.

Other notations:

Notation       Meaning
O(g(n))        Upper bound
Ω(g(n))        Lower bound
Θ(g(n))        Tight bound

If an algorithm is both O(n^2) and Ω(n^2), then it is:

Θ(n^2)

which means the bound is exact up to constant factors.
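
For completeness, the two-sided definition mirrors the one for Big O: f(n) is Θ(g(n)) if there exist constants c1, c2 > 0 and n0 such that

c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0.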

Example: Binary Search

Binary search reduces the input size by half each step. The number of steps is proportional to:

log n

So the time complexity is:

O(log n)
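
A minimal iterative sketch in Python, assuming the input list is sorted; it returns the index of target, or -1 if absent:

    # Each iteration halves the remaining range [lo, hi], so the loop
    # body runs O(log n) times.
    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1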

Combining Terms

When an algorithm has several independent phases, their costs add:

T(n) = O(n) + O(n log n)

The dominant term is n log n, so:

T(n) = O(n log n)

When the input has two independent size parameters:

T(n, m) = O(n + m)

Do not combine unless one dominates the other under all valid inputs.
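
As a sketch of the first case, here is a hypothetical two-phase function: a linear filter followed by Python's built-in sort, which runs in O(n log n), so the total is O(n) + O(n log n) = O(n log n):

    # positive_sorted is an illustrative example.
    # Phase 1: linear filter, O(n).
    # Phase 2: built-in sort, O(n log n).
    # Total: O(n) + O(n log n) = O(n log n).
    def positive_sorted(values):
        kept = [v for v in values if v > 0]   # O(n)
        kept.sort()                           # O(n log n)
        return kept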

Practical Interpretation

Big O helps answer questions such as:

  • Will this algorithm scale to large inputs?
  • Which of two algorithms is asymptotically faster?
  • Where is the bottleneck?

It does not tell you the exact running time. Constant factors, cache effects, and implementation details still matter in practice.

Common Pitfalls

Using Big O without defining the input size parameter leads to confusion.

Comparing algorithms using only Big O can hide large constants.

Assuming O(n) is always fast is incorrect if n is very large or operations are expensive.

Dropping too much detail can hide important structure, especially in multi-parameter problems.

Takeaway

Big O notation describes how an algorithm scales by focusing on dominant growth rates. It provides a common language for comparing algorithms and reasoning about performance independently of hardware and implementation details.