2.2 Machine Epsilon, Rounding, ULPs
In floating-point arithmetic, not every number you write is a number the machine can store. Between any two representable values, there is a gap—sometimes tiny, sometimes enormous. Understanding the size of this gap is one of the keys to writing numerical software that behaves predictably.
In this section, we explore three of the most important concepts in numerical computing: machine epsilon, rounding, and ULPs. Together they describe the “resolution” of floating-point arithmetic—the smallest changes a machine can actually see.
What Is Machine Epsilon?
Machine epsilon (often written as ε or eps) is the gap between 1.0 and the next representable floating-point number. Loosely, it is the smallest change near 1.0 that the machine can actually register.
Formally:
machine epsilon ε = (next representable number above 1.0) − 1.0, so that (1 + ε) ≠ 1 in floating-point
For float32:
ε ≈ 1.1920929 × 10⁻⁷
For float64:
ε ≈ 2.220446049250313 × 10⁻¹⁶
This means float32 carries about 7 decimal digits of precision and float64 about 16. Near 1.0, differences much smaller than ε are simply invisible to the machine.
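If you want to check these numbers on your own machine, NumPy reports them directly. The snippet below is a small sketch (it assumes NumPy is installed); the last two lines show that a full ε is visible when added to 1.0, while a quarter of ε is rounded away.

```python
import numpy as np

# Machine epsilon as reported by the library.
print(np.finfo(np.float32).eps)   # 1.1920929e-07
print(np.finfo(np.float64).eps)   # 2.220446049250313e-16

eps32 = np.finfo(np.float32).eps
print(np.float32(1.0) + eps32 > np.float32(1.0))       # True:  a full eps is visible at 1.0
print(np.float32(1.0) + eps32 / 4 > np.float32(1.0))   # False: a quarter of eps is rounded away
```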
Why does epsilon matter? Because it defines the limit of meaningful information you can expect from a computation. If an algorithm requires precision beyond ε, it will inevitably fail.
Epsilon Depends on Magnitude
Machine epsilon is defined at 1.0, but the spacing between representable numbers is not constant. Floating-point numbers are logarithmically spaced—the larger the value, the more widely spaced the representable numbers.
For example, around 1.0 (float32):
1.0000000 1.0000001 (next representable number)
Around 100,000,000:
100000000 100000008 (next representable number)
The gap has grown enormously. The machine’s resolution coarsens as numbers grow.
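You can watch the gap widen with np.nextafter, which returns the adjacent representable number in the direction of its second argument. A minimal sketch, again assuming NumPy:

```python
import numpy as np

one = np.float32(1.0)
big = np.float32(1e8)

print(np.nextafter(one, np.float32(np.inf)))   # 1.0000001   (gap of about 1.2e-7)
print(np.nextafter(big, np.float32(np.inf)))   # 100000008.0 (gap of 8.0)
```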
This means:
- You cannot measure tiny changes in large values.
- Relative precision stays constant; absolute precision does not.
Machine epsilon tells you how much precision you get near 1.0—but the real spacing depends on the exponent of the number. This leads us to the concept that captures this more precisely: ULPs.
What Is a ULP?
A ULP (Unit in the Last Place) is the distance between two adjacent representable floating-point numbers.
For a given value x,
1 ULP(x) = spacing between x and the next representable number
ULPs describe the machine’s resolution at x. Machine epsilon is essentially the ULP at 1.0.
ULPs tell you how “sharp” the machine’s numerical vision is at different magnitudes. Near zero, ULPs are tiny. Far away from zero, ULPs become huge.
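Python's standard library can report the ULP directly for float64 (math.ulp, available since Python 3.9). A quick sketch of how the resolution shifts with magnitude:

```python
import math

print(math.ulp(1.0))     # 2.220446049250313e-16, i.e. machine epsilon for float64
print(math.ulp(1e-10))   # much smaller: the grid is very fine near zero
print(math.ulp(1e10))    # much larger: the grid is coarse far from zero
```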
Understanding ULPs is critical for:
- error analysis (how far did rounding push us?)
- algorithm stability (does an operation amplify small errors?)
- comparison operations (“are these floats equal?”)
- gradient calculations (how small can an update be before it disappears?)
A machine cannot react to differences smaller than 1 ULP in its local region. This shapes everything from loss functions to solvers to optimizers.
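One practical payoff is ULP-based comparison: instead of asking whether two floats are exactly equal, ask how many representable values lie between them. Below is a minimal sketch for float64 using the usual reinterpret-the-bits trick; the helper names to_ordered and ulp_distance are ours, not a standard API.

```python
import struct

def to_ordered(x: float) -> int:
    # Reinterpret the float64 bit pattern as an integer whose ordering
    # matches the ordering of the floats themselves.
    u = struct.unpack("<Q", struct.pack("<d", x))[0]
    return u if u < 2**63 else 2**63 - u

def ulp_distance(a: float, b: float) -> int:
    # Number of representable float64 values separating a and b.
    return abs(to_ordered(a) - to_ordered(b))

print(ulp_distance(0.3, 0.1 + 0.2))      # 1 on IEEE-754 doubles: off by a single ULP
print(ulp_distance(1.0, 1.0 + 2**-52))   # 1: adjacent representable numbers
```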
Rounding: The Machine's Decision Process
Because most real numbers cannot be represented exactly, every floating-point operation typically requires a rounding step. IEEE 754 defines several rounding modes, but the default—and by far the most common—is:
Round to nearest, ties to even
Here’s what it means:
- Pick the representable number that is closest to the true result.
- If the true result is exactly halfway between two choices, pick the one whose last bit is even.
This reduces systematic bias during repeated operations. If rounding always went “toward zero” or “toward positive infinity,” numerical errors would accumulate directionally, which is dangerous in long-running computations.
With round-to-nearest-even, rounding errors behave more like random noise: still problematic when amplified, but not pathological.
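The tie-breaking rule is easy to see in float64, where exact halfway cases can be constructed from powers of two. A small sketch:

```python
# Both sums below land exactly halfway between two adjacent float64 values.
a = 1.0 + 2**-53             # halfway between 1.0 and 1.0 + 2**-52
b = (1.0 + 2**-52) + 2**-53  # halfway between 1.0 + 2**-52 and 1.0 + 2**-51

print(a == 1.0)              # True: the tie resolves down, because 1.0 ends in an even (0) bit
print(b == 1.0 + 2**-51)     # True: the tie resolves up, because that neighbour is the even one
```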
The Consequence: Arithmetic Is Not Exact
Performing any arithmetic in floating-point effectively becomes:
(real result) + (rounding error)
Each operation introduces a tiny error of up to 0.5 ULP. Do a billion operations and you add a billion tiny errors. Sometimes, they cancel. Sometimes, they accumulate. Sometimes, they explode.
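A classic way to feel this is naive summation. The sketch below adds 0.1 a million times: each addition rounds, and the drift is visible by the end, while compensated summation (math.fsum) keeps track of the lost low-order bits.

```python
import math

xs = [0.1] * 1_000_000   # 0.1 has no exact binary representation

naive = 0.0
for v in xs:             # one rounding error of up to 0.5 ULP per addition
    naive += v

print(naive)             # typically slightly off from 100000.0
print(math.fsum(xs))     # 100000.0: compensated summation removes the drift
```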
Understanding ULPs and rounding allows engineers to design algorithms that:
- control error amplification,
- avoid catastrophic cancellation,
- use numerically stable formulations,
- predict when precision will be lost.
These ideas are foundational to everything coming later in this book: LU decomposition, QR, eigenvalue solvers, SVD, optimization—all of them rely on error-sensitive operations.
Machine Epsilon in Practice: A Simple Example
Consider checking whether 1 + 1e-8 == 1 in float32.
1e-8 is smaller than machine epsilon (≈1.19e-7)
So float32 treats:
1 + 1e-8 → 1
The difference disappears. It does not merely become very small: the sum rounds straight back to 1.0, and the information in the update is gone.
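In code, the same check looks like this (a small sketch assuming NumPy; float64 is included for contrast):

```python
import numpy as np

x = np.float32(1.0)
update = np.float32(1e-8)              # well below float32 epsilon (about 1.19e-7)

print(x + update == x)                 # True:  the update is rounded away completely
print(np.float64(1.0) + 1e-8 == 1.0)   # False: float64 still resolves the change
```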
This is how gradient updates vanish. It’s how optimizers stall. It’s how delicate numerical algorithms drift off course.
Why These Concepts Matter for Linear Algebra
All factorization algorithms—LU, QR, SVD—are sensitive to rounding. Some handle it well; others don’t.
To predict whether a decomposition will yield correct results, you must understand:
- how many bits of precision remain after each step,
- how rounding changes the structure of a matrix,
- how ULPs interact with pivoting and orthogonalization,
- how loss of precision becomes structural instability.
Numerical algorithms are not “math in pure form.” They are math under scarcity—scarcity of bits, scarcity of precision, scarcity of exactness. Machine epsilon, rounding, and ULPs tell you how severe these constraints are.
Where We Go Next
Now that you know how floating-point numbers behave locally—how fine-grained their resolution is, how rounding errors appear, and how precision varies across magnitudes—we can explore what happens when values push against the limits of representation.
In the next section, we move from local errors to global failures:
- overflow — the value is too large to store,
- underflow — the value is too small to store,
- loss of significance — subtraction destroys meaningful digits.
These are among the most dangerous pitfalls in numerical computing, and every engineer working with AI, simulation, or optimization eventually faces them. Let’s step into 2.3 Overflow, underflow, and loss of significance.