Not all computations are the same. Some problems ask for an exact answer, while others accept an approximation. Understanding the difference is important, because it affects how we design algorithms and how we interpret results.
Exact computation
An exact computation produces a result with no error. The answer is precise and fully correct within the rules of mathematics.
For example, adding two integers is exact: 2 + 3 is always exactly 5.
There is no uncertainty. The result is always the same.
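In Python, for instance, integer arithmetic is exact even for very large values, because Python integers have arbitrary precision (a minimal illustration):

```python
# Integer arithmetic is exact: Python integers have arbitrary
# precision, so no rounding ever occurs, even for huge values.
a = 10**30 + 7
b = 10**30 + 5
print(a + b)  # the exact sum, with no rounding
```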
Another example is working with fractions: 1/3 + 1/6 = 1/2.
Even though a decimal expansion like 0.333... is infinite, the fraction form 1/3 gives an exact answer.
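Python's standard-library fractions module makes this concrete: fractions are stored as exact integer ratios, so no rounding ever happens.

```python
from fractions import Fraction

# Fractions are stored as exact integer ratios, so addition
# is performed with no rounding at all.
third = Fraction(1, 3)
sixth = Fraction(1, 6)
print(third + sixth)  # 1/2, exactly
```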
Exact computation is common in algebra, number theory, and symbolic manipulation. It is useful when correctness must be guaranteed.
Approximate computation
Some quantities cannot be written exactly in a simple form. In these cases, we use approximations.
A classic example is the number π. Its decimal expansion never ends: 3.14159265358979...
We can compute more digits, but we never reach a final exact decimal form.
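One way to see this is with a convergent series. The sketch below uses the Leibniz formula π/4 = 1 − 1/3 + 1/5 − ...; each partial sum is a better approximation, but no partial sum is ever exact.

```python
import math

def leibniz_pi(terms):
    """Approximate pi with a partial sum of the Leibniz series."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# More terms give more correct digits, but never the exact value.
for n in (10, 1000, 100000):
    print(n, leibniz_pi(n))
print(math.pi)  # itself only a 64-bit approximation of pi
```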
Another example is square roots: √2 ≈ 1.41421.
The approximation is close, but not exact.
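Floating-point square roots show the same limit: math.sqrt returns the closest 64-bit float, not the true irrational value.

```python
import math

# math.sqrt returns the nearest 64-bit float, not the true value.
approx = math.sqrt(2)
print(approx)           # 1.4142135623730951
print(approx * approx)  # 2.0000000000000004, not exactly 2
```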
Approximate computation is widely used in numerical methods, physics, engineering, and data analysis. It allows us to work with values that are otherwise difficult or impossible to express exactly.
Choosing between exact and approximate
The choice depends on the problem.
If you are proving a theorem, you usually need exact results. If you are simulating a physical system, an approximation may be sufficient.
For example:
- Counting objects requires exact answers
- Measuring real-world quantities often uses approximations
- Solving equations may involve both approaches
Error and control
When using approximations, we must understand the error. An approximation is useful only if we know how close it is to the true value.
For example, saying
π ≈ 3.14
gives a rough estimate. Saying
π = 3.14159 with an error smaller than 0.00001
gives more information about accuracy.
Algorithms for approximation often include a stopping rule, such as “continue until the error is smaller than a chosen tolerance.”
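Such a stopping rule might look like the sketch below. The helper name iterate_until and the example update rule are illustrative choices, not standard library functions; the loop simply stops once successive values agree to within a chosen tolerance.

```python
import math

def iterate_until(update, x0, tolerance=1e-10, max_steps=100):
    """Apply `update` repeatedly until successive values differ
    by less than `tolerance` (or a step limit is reached)."""
    x = x0
    for _ in range(max_steps):
        x_next = update(x)
        if abs(x_next - x) < tolerance:
            return x_next
        x = x_next
    return x

# Example: find the fixed point of cos(x), i.e. x with cos(x) = x.
result = iterate_until(math.cos, 1.0)
print(result)
```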
Example: approximating a square root
To approximate √2, we can use a simple iterative method:
Start with a guess, such as x = 1.5. Then improve it using:
x ← (x + 2/x) / 2
Repeat this step several times. Each iteration gives a better approximation. After a few steps, the value becomes very close to √2.
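The steps above can be sketched directly, assuming the Babylonian update x ← (x + 2/x) / 2 for √2:

```python
import math

x = 1.5  # initial guess
for step in range(5):
    x = (x + 2 / x) / 2  # Babylonian / Newton update for sqrt(2)
    print(step + 1, x)

print(abs(x - math.sqrt(2)))  # error after five iterations
```

Convergence is very fast: each step roughly doubles the number of correct digits.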
This shows how an algorithm can produce increasingly accurate results without ever reaching an exact decimal representation.
Symbolic vs numeric thinking
Exact computation is often symbolic. We manipulate expressions like √2 or 1/3 directly.
Approximate computation is numeric. We work with decimal values and finite representations.
Both approaches are important. Symbolic methods preserve exact relationships. Numeric methods allow us to compute and estimate in practice.
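The contrast shows up even in tiny computations, for example when comparing Python's exact Fraction type with ordinary floating point:

```python
from fractions import Fraction

# Symbolic/exact: the relationship (1/3) * 3 == 1 is preserved.
print(Fraction(1, 3) * 3 == 1)   # True

# Numeric: finite binary representations introduce tiny errors.
print(0.1 + 0.2 == 0.3)          # False
print(0.1 + 0.2)                 # 0.30000000000000004
```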
A practical habit
When solving a problem, ask:
Do we need an exact answer? Is an approximation acceptable? How accurate must the result be?
By answering these questions, we choose the right kind of computation and avoid unnecessary complexity or loss of precision.