Chapter 2 — The Computational Model

In the previous chapter, we explored why numerical linear algebra is the invisible engine behind modern computation. But before we solve even a single system or factor a single matrix, there is a deeper truth we must confront:

The mathematics we learned is not the mathematics computers perform.

This gap is not a matter of style or notation. It is not an academic subtlety. It is a fundamental shift in the nature of numbers themselves.


A World That Looks Like Math but Isn't

When you first learn mathematics, you inherit a world of perfect objects:

  • Real numbers extend smoothly in every direction.
  • Precision is infinite.
  • Subtraction is always safe.
  • Multiplication is always exact.
  • No number is ever “too large” or “too small.”

This idealized world is beautiful. Clean. Unbreakable. It is also completely incompatible with the way actual machines operate.

Computers do not see numbers the way we do. They store them as patterns of bits — finite, discrete, limited. Every arithmetic operation must fit within these constraints, even if that means quietly discarding information or producing results that drift away from truth.

Most of the time, this gap goes unnoticed. The system works. The results look reasonable. And because nothing crashes, we assume everything is fine.

Until one day it isn’t.


The First Time You See the Machine Push Back

If you build numerical or AI systems long enough, you eventually encounter a moment that feels almost supernatural. A model diverges for no visible reason. A simulation blows up. A solver refuses to converge. A stable system suddenly becomes unstable.

You check gradients. You check shapes. You check data formats. You check initialization. You even check whether you forgot a minus sign somewhere.

And then you discover the truth: A number silently underflowed to zero. Or overflowed to infinity. Or lost half its precision in a subtraction. Or rounded the wrong way. Or accumulated errors until the result collapsed under its own weight.
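These failure modes are easy to reproduce. Below is a minimal Python sketch (using NumPy as one plausible setting; the specific values are illustrative, not drawn from any particular incident):

    import numpy as np

    # Silent underflow: float32 cannot represent values much below ~1e-45,
    # so a tiny-but-nonzero product quietly becomes exactly 0.0.
    tiny = np.float32(1e-30)
    print(tiny * tiny)       # 0.0 (no error raised; underflow is ignored by default)

    # Silent overflow: exceed float32's ~3.4e38 ceiling and you get inf.
    # NumPy may emit a RuntimeWarning, but execution simply continues.
    huge = np.float32(1e30)
    print(huge * huge)       # inf

    # Precision loss in subtraction: the true difference is 1e-12, but
    # float64 keeps only ~16 significant digits, so the trailing digits
    # of the smaller addend were already gone before we subtracted.
    a = 1.0 + 1e-12
    print(a - 1.0)           # ~1.000088900582341e-12, not 1e-12

None of these lines crashes. That is exactly the problem.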

The machine was obeying its rules — not yours.

It’s in these moments that the computational model goes from an abstract idea to a very real adversary. And once you’ve fought this battle a few times, you begin to see something important:

Mathematical correctness does not guarantee computational correctness.

A “correct” algorithm on paper can be a terrible algorithm in practice. A stable theory can lead to an unstable implementation. A numerically fragile operation can undo hours of your work in a fraction of a second.


Why This Chapter Exists

Numerical linear algebra is often taught as if real numbers were infinite and exact. But the systems we build — the models we train, the algorithms we implement, the simulations we run — depend on how numbers behave in the machine.

Before we talk about LU or QR or SVD… Before we discuss eigenvalues or condition numbers… Before we diagnose why a solver diverges… We must understand the ground beneath our feet:

How computers represent numbers.
How they approximate them.
How they round them.
How they lose them.

This chapter is the foundation upon which every other chapter stands. It is the map of the territory we will travel — the physics of numerical computation.


When “Small Errors” Aren’t Small

In most areas of software engineering, small errors remain small. In numerical computation, small errors can become catastrophic.

Add two numbers of vastly different magnitudes? You risk losing all meaningful digits of the smaller one.

Subtract two nearly equal numbers? You amplify tiny rounding errors until they become giant distortions.

Multiply ill-conditioned matrices? Each operation magnifies microscopic inaccuracies until the entire result collapses.
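A short Python sketch makes the first two failure modes concrete (the half-angle rewrite at the end is one standard fix, shown purely as an illustration):

    import math

    # Absorption: next to 1e16, the value 1.0 is smaller than one ULP,
    # so adding it changes nothing, and reordering the sum changes the answer.
    print(1e16 + 1.0 - 1e16)   # 0.0  (the 1.0 was absorbed)
    print(1e16 - 1e16 + 1.0)   # 1.0  (mathematically the same sum)

    # Catastrophic cancellation: (1 - cos(x)) / x^2 should approach 0.5
    # as x -> 0, but 1 - cos(x) subtracts two nearly equal numbers.
    x = 1e-8
    print((1 - math.cos(x)) / x**2)        # 0.0: every digit is wrong

    # Algebraically equivalent, numerically stable: 1 - cos(x) = 2*sin(x/2)^2
    print(2 * math.sin(x / 2)**2 / x**2)   # ~0.5: correct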

This is why:

  • linear regressions sometimes produce nonsense coefficients,
  • neural networks diverge even with “correct” hyperparameters,
  • eigenvalue solvers explode out of nowhere,
  • LLM embedding pipelines yield inconsistent rankings,
  • PCA results flip signs or misalign under minimal noise,
  • simulation trajectories wander off course.

These aren’t bugs in your code. They’re consequences of the world your code lives in.


What We Will Explore in This Chapter

This chapter is your guided tour through the machinery that makes (and breaks) numeric computation:

  • Floating-point numbers (IEEE 754) — the most important standard in numerical computing.
  • Machine epsilon, rounding, ULPs — the invisible boundaries of precision (a quick preview follows this list).
  • Overflow and underflow — what happens when numbers exceed their habitat.
  • Loss of significance — the silent destroyer of meaningful digits.
  • Vector and matrix memory layout — how arrangement influences speed and stability.
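As a quick preview of machine epsilon (a NumPy sketch, not the chapter's full treatment):

    import numpy as np

    # Machine epsilon: the gap between 1.0 and the next representable float64.
    eps = np.finfo(np.float64).eps
    print(eps)                    # 2.220446049250313e-16
    print(1.0 + eps / 4 == 1.0)   # True: the addition vanished entirely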

These topics may sound technical, but they shape everything we do in linear algebra:

  • why LU sometimes fails without pivoting,
  • why QR is more stable for least squares,
  • why SVD is the gold standard,
  • why conditioning determines the fate of algorithms,
  • why iterative methods sometimes succeed where direct solvers fail.

Understanding these mechanics will not only make you a better engineer — it will make you a better thinker. You will see the world behind the world.


The Real Goal of This Chapter

By the time you finish this chapter, floating-point arithmetic will no longer feel mysterious or unpredictable. Instead, it will feel like a familiar ecosystem, governed by consistent rules:

  • what operations are safe,
  • what operations are dangerous,
  • how errors accumulate,
  • how to build stable solutions from unstable primitives.

This clarity transforms how you design algorithms, debug numerical issues, choose factorizations, reason about machine learning behavior, and structure computations at scale.


Now, Let’s Step Inside the Machine

To understand numerical linear algebra as it truly is — not as it appears on paper — we must begin at the smallest possible unit:

How a computer stores a single number.

In the next section, we dive into 2.1 Floating-point numbers (IEEE 754), the universal standard that defines what “number” means to a computer.
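As a small teaser of what that means in practice (a Python sketch, not part of the IEEE 754 text itself), you can already inspect the bits of a double today:

    import struct

    # A 64-bit float is literally 64 bits: 1 sign bit, 11 exponent bits,
    # and 52 fraction bits. Reinterpret those bits as an integer to see them.
    def bits_of(x: float) -> str:
        (as_int,) = struct.unpack(">Q", struct.pack(">d", x))
        return format(as_int, "064b")

    b = bits_of(0.1)
    print(b[0], b[1:12], b[12:], sep=" | ")
    # 0 | 01111111011 | 1001100110011001100110011001100110011001100110011010
    # 0.1 has no exact binary representation; the stored bits are a rounded approximation.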

2025-09-07

Shohei Shimoda

Here, I have organized and written up what I have learned and know.