Time complexity describes how the running time of an algorithm grows as the input size grows. It does not try to predict the exact number of seconds on a particular machine. Instead, it gives a portable way to compare algorithms by counting how many primitive operations they perform as a function of the input.
The main size parameter is usually written as (n). For graphs, we often use two parameters: (n) for the number of vertices and (m) for the number of edges. For strings, (n) may mean string length. For matrices, the input may have dimensions (r) and (c). Always define the size parameter before giving a bound.
Problem
You have an algorithm and need to estimate how its running time changes when the input becomes larger.
Solution
Count the dominant repeated operations and express them as a function of the input size.
Use this basic method:
1. Choose the input size parameters.
2. Identify loops, recursion, and expensive operations.
3. Count how many times each operation runs.
4. Drop constant factors and lower order terms.
5. State the final asymptotic bound.

For a single loop over an array:
```
sum(A):
    total = 0
    for x in A:
        total = total + x
    return total
```

The loop runs (n) times. Each iteration performs a constant amount of work. The time complexity is:
O(n)

For a nested loop:
```
all_pairs(A):
    for i = 0 to n - 1:
        for j = 0 to n - 1:
            process(A[i], A[j])
```

The outer loop runs (n) times. For each outer iteration, the inner loop also runs (n) times. The total number of calls to process is (n^2), so the time complexity is:

O(n^2)

Discussion
Time complexity depends on the cost model. In RAM-style analysis, basic arithmetic, assignment, comparison, and array access are usually treated as constant time. This is a simplification, but it is useful for designing and comparing algorithms.
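One way to make this operation-counting model concrete is to instrument the two loop examples with an explicit counter. A minimal Python sketch (the counter and function names are illustrative, not part of any standard API):

```python
def count_ops_sum(A):
    """Sum a list while counting the dominant per-iteration operations."""
    ops = 0
    total = 0
    for x in A:
        total += x  # one addition plus one assignment, treated as constant work
        ops += 1
    return total, ops

def count_ops_all_pairs(A):
    """Count the constant-time pair operations in a nested scan."""
    ops = 0
    for i in range(len(A)):
        for j in range(len(A)):
            ops += 1  # stands in for one process(A[i], A[j]) call
    return ops
```

For an input of length 100, the counts come out as exactly 100 and 100^2 = 10000, matching the (n) and (n^2) predictions before constants are dropped.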
The bound should match the input model. If an algorithm accesses A[i], this is constant time for an array. It is not constant time for a linked list unless the node is already available. If an algorithm checks whether an edge exists, the cost depends on whether the graph is stored as an adjacency matrix, adjacency list, or hash set.
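The same edge query can have very different costs under different representations. A small Python sketch (the four-vertex graph and helper names are illustrative) contrasting an adjacency list with a hash set of edges:

```python
# A tiny undirected graph on vertices 0..3, stored two ways.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Adjacency list: an edge check scans one vertex's neighbor list, O(deg(u)).
adj_list = {v: [] for v in range(4)}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

def has_edge_list(u, v):
    return v in adj_list[u]  # linear in the degree of u

# Hash set of normalized pairs: expected O(1) per lookup.
edge_set = {frozenset(e) for e in edges}

def has_edge_set(u, v):
    return frozenset((u, v)) in edge_set  # expected constant time
```

Both functions answer the same question; only the cost per query differs, and any bound quoted for an algorithm built on has_edge should say which representation it assumes.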
Best, Worst, and Average Cases
Some algorithms take different amounts of time on different inputs of the same size.
Worst-case time gives an upper bound over all valid inputs of size (n). It is the safest and most common guarantee.
Best-case time gives the cost on the easiest valid input. It is useful only when early exits matter.
Average-case time gives the expected cost under a probability distribution over inputs. It is useful for randomized algorithms and data structures, but only when the distribution is stated clearly.
For linear search:
```
contains(A, x):
    for i = 0 to n - 1:
        if A[i] == x:
            return true
    return false
```

The best case is (O(1)), when x is at the first position. The worst case is (O(n)), when x is absent or appears only at the last position. The average case depends on assumptions about where x appears.
Recursion
For recursive algorithms, time complexity is often described by a recurrence.
For example, binary search halves the input at each step:
T(n) = T(n / 2) + O(1)

This solves to:

O(log n)

Merge sort splits the input into two halves and then merges in linear time:

T(n) = 2T(n / 2) + O(n)

This solves to:

O(n log n)

The recurrence should reflect both the recursive calls and the nonrecursive work done at each level.
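The halving in the binary search recurrence is easy to see in code. A standard iterative version in Python, assuming A is sorted in ascending order:

```python
def binary_search(A, x):
    """Return an index of x in sorted list A, or -1 if x is absent.

    Each loop iteration halves the remaining range [lo, hi], so the loop
    runs O(log n) times, and each iteration does constant work: this is
    exactly the T(n) = T(n / 2) + O(1) recurrence.
    """
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lo = mid + 1  # discard the left half
        else:
            hi = mid - 1  # discard the right half
    return -1
```

The nonrecursive work per step is the single comparison and index update, which is why the additive term in the recurrence is O(1) rather than O(n).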
Output-Sensitive Time
Some algorithms must produce large output. In that case, the time complexity should include output size.
For example, printing all pairs from an array of length (n) takes (O(n^2)) time because there are (n^2) ordered pairs to print. No algorithm can print them all in less than the output size.
For algorithms with variable-size output, write the bound using an output parameter such as (k):
O(n + k)

Here (k) is the number of reported results.
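As a concrete sketch (the function name and threshold parameter are illustrative), reporting every array index above a threshold takes one O(n) scan plus one O(1) append per reported result, giving O(n + k) total:

```python
def indices_above(A, threshold):
    """Return every index i with A[i] > threshold.

    The scan over A costs O(n); building the output costs O(k),
    where k is the number of reported indices. Total: O(n + k).
    """
    result = []
    for i, value in enumerate(A):
        if value > threshold:
            result.append(i)
    return result
```

Note that (k) can range from 0 to (n), so quoting only O(n) would understate the cost in models where emitting output is counted, and quoting O(n^2) would be needlessly loose.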
Common Growth Rates
| Complexity | Typical Meaning |
|---|---|
| (O(1)) | Constant number of operations |
| (O(\log n)) | Repeated halving or balanced search |
| (O(n)) | Single pass over input |
| (O(n \log n)) | Sorting or divide and conquer |
| (O(n^2)) | All pairs or nested scans |
| (O(n^3)) | All triples or dense dynamic programming |
| (O(2^n)) | All subsets |
| (O(n!)) | All permutations |
Common Pitfalls
Do not count source lines. A short line can hide an expensive operation.
Do not ignore library calls. Sorting inside a loop can dominate the total cost.
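For example, re-sorting a prefix inside a loop turns an intended linear pass into roughly O(n^2 log n), while a single pass achieves the same result in O(n). A hedged sketch (both function names are illustrative):

```python
def smallest_so_far_slow(A):
    """Running minimum via repeated sorting: about O(n^2 log n) overall."""
    out = []
    for i in range(1, len(A) + 1):
        out.append(sorted(A[:i])[0])  # an O(i log i) library sort per iteration
    return out

def smallest_so_far_fast(A):
    """Running minimum in a single O(n) pass."""
    out = []
    best = A[0]
    for x in A:
        best = min(best, x)
        out.append(best)
    return out
```

The two functions return identical results; only the hidden cost of the library call inside the loop differs.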
Do not use (n) without defining it. For graph algorithms, (O(n)) is often meaningless unless you say whether (n) counts vertices, edges, or both.
Do not simplify too aggressively when multiple parameters matter. For graph traversal, (O(n + m)) is more informative than (O(n^2)) when the graph may be sparse.
Takeaway
Time complexity estimates the growth of running time by counting repeated primitive operations under a stated input model. Define the size parameters, count loops and recursion, include expensive operations, and report the dominant asymptotic term.