Chapter 3 — Computation & Mathematical Systems
By this point in the book, we’ve explored the fragile world beneath numerical computation: floating-point behavior, rounding, machine epsilon, overflow, underflow, and the way vectors and matrices are stored in memory. These are the rules of the universe in which numerical algorithms must live.
Now we shift our attention from the world itself to the objects we manipulate inside it: vectors, matrices, and the mathematical systems built from them. These are the structures that define modern computation—every optimization problem, every neural network, every recommendation engine, every physical simulation, every control system, every graphics pipeline, every search algorithm.
Yet for many people, vectors and matrices remain abstract symbols sitting in notebooks or slide decks. They appear clean and perfect. A matrix is simply a collection of coefficients; a vector is a list of values. A computation is a sequence of algebraic steps.
Real systems tell a different story.
In real code, vectors drift. Matrices distort. Errors accumulate slowly or explode suddenly. Two mathematically identical operations produce different results depending on the order of steps, the shape of the memory layout, or a small rounding event that happened ten thousand iterations earlier. And algorithms that behave well on paper collapse the moment they meet a poorly conditioned input or a numerically unstable implementation.
Why We Need This Chapter
Chapter 3 is where we begin to build a bridge between pure mathematics and actual computation. We examine the core questions that determine whether a system behaves predictably or fails without warning:
- How do we measure the size of a vector or matrix? (Norms)
- How do we measure errors? (Absolute vs relative error)
- When is a problem itself unstable? (Conditioning)
- When does an algorithm introduce instability? (Stability)
- Why do textbook algorithms sometimes fail when implemented?
These are not academic questions. They are the difference between:
- a model that converges vs a model that diverges,
- a simulation that behaves vs one that breaks,
- a solver that finds answers vs one that produces NaNs,
- a pipeline that scales vs one that collapses at larger sizes.
Engineers who understand norms, error propagation, conditioning, and stability can diagnose numerical issues by reasoning about the mathematics, often before opening a debugger. These ideas become an instinct: an intuition that guides design choices, algorithm selection, data preprocessing, and debugging strategy.
3.1 Norms, Errors, and the Shape of Computation
We begin with norms. Many people treat norms as simple measuring tools: the length of a vector, the magnitude of a matrix. But in computation, norms do far more than measure size—they define what “size” means for a system. They control how we detect anomalies, how we evaluate error, how we quantify near-singularity, and how we judge the effect of tiny perturbations.
Choose the wrong norm, and your system may appear well-behaved when it is dangerously unstable. Choose the right norm, and hidden patterns and failure modes become visible immediately.
Once norms are understood, we can ask a deeper question: What does it mean for a computation to be right or wrong? That leads directly to error measurement—absolute error, relative error, forward error, backward error—and why numerical analysts think about error in ways that differ from data scientists or engineers.
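As a minimal sketch of both ideas, the snippet below (using NumPy; the vector and error values are illustrative, not taken from the text) shows how three standard norms assign three different "sizes" to the same vector, and why relative error, unlike absolute error, accounts for the scale of the quantity being approximated:

```python
import numpy as np

# The same vector measured three ways: which norm you choose changes
# which "size" you see, and therefore which errors look large.
y = np.array([0.5, 0.5, 0.5, 0.5])

l1 = np.linalg.norm(y, 1)          # sum of magnitudes: 2.0
l2 = np.linalg.norm(y, 2)          # Euclidean length: 1.0
linf = np.linalg.norm(y, np.inf)   # largest component: 0.5

# Absolute vs relative error: the same absolute error of 1e-8 is
# negligible for a value near 1 but catastrophic for a value near 1e-10.
def rel_err(approx, exact):
    return abs(approx - exact) / abs(exact)

good = rel_err(1.0 + 1e-8, 1.0)      # ~1e-8: about eight correct digits
bad = rel_err(1e-10 + 1e-8, 1e-10)   # ~100: no correct digits at all
```

This is why numerical analysts report relative error by default: an absolute error only means something once you know the size of the true answer.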
3.2 Conditioning vs Stability
The next part of the chapter contrasts two concepts that are often confused: conditioning and stability.
Conditioning answers a structural question:
How sensitive is the problem itself to small input changes?
Even the most perfect algorithm cannot fix an ill-conditioned problem. Trying to solve such a problem without understanding its structure is the numerical equivalent of building a skyscraper on mud.
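To make ill-conditioning concrete, here is a small illustrative example (the matrix is invented for demonstration): a 2×2 system whose rows are nearly parallel. A change in the fifth decimal place of the right-hand side moves the solution by order one, and the condition number predicts roughly that amplification factor.

```python
import numpy as np

# An ill-conditioned system: the two rows are nearly parallel.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)            # exact solution: [1, 1]

# Perturb one entry of b by 1e-4 (a ~0.005% relative change).
b_pert = np.array([2.0, 2.0002])
x_pert = np.linalg.solve(A, b_pert)  # exact solution: [0, 2]

# The condition number bounds how much relative input error can be
# amplified into relative output error; here it is ~4e4.
kappa = np.linalg.cond(A)
```

No algorithm, however carefully implemented, can recover the digits that the problem itself does not preserve.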
Stability answers an algorithmic question:
How much error does the algorithm introduce as it runs?
An unstable algorithm can destroy even a perfectly conditioned problem through amplification of rounding errors, poor pivoting choices, or operations that magnify small perturbations.
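The distinction shows up even in a formula everyone knows. The quadratic below is a stock illustration (the coefficients are chosen for effect): its roots are perfectly well determined by the coefficients, yet the textbook formula destroys the small root through catastrophic cancellation, while an algebraically equivalent rearrangement recovers it to full precision.

```python
import math

# A well-conditioned problem: the small root of x^2 - 1e8*x + 1 = 0.
# The roots are ~1e8 and ~1e-8, and both depend smoothly on the
# coefficients. Any accuracy loss below is the algorithm's fault.
b, c = 1e8, 1.0
disc = math.sqrt(b * b - 4 * c)   # very close to b

# Unstable: b - disc subtracts two nearly equal numbers, cancelling
# almost all significant digits (catastrophic cancellation).
naive = (b - disc) / 2

# Stable: compute the large root first, then use the fact that the
# product of the two roots equals c.
big = (b + disc) / 2
stable = c / big

true_small = 1e-8   # correct to ~1e-16 relative error
```

Same problem, same conditioning, two implementations: one loses roughly a quarter of the answer, the other essentially none of it.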
The interaction between problem and algorithm determines whether a computation will succeed or fail. This chapter is where we learn to diagnose that interaction accurately.
3.3 Exact Algorithms vs Implemented Algorithms
Finally, we explore one of the deepest—and most surprising—insights in numerical computation:
Exact algorithms are fictional.
On paper, algorithms operate on real numbers with infinite precision. In reality, we implement them using floating-point arithmetic with finite precision, strict memory layout, and non-deterministic ordering in parallel hardware.
This gap between “exact” and “implemented” explains many mysteries:
- Why Gaussian elimination is reliable only with pivoting.
- Why certain matrix inversions should never be computed directly.
- Why eigenvalue algorithms differ drastically between textbooks and libraries.
- Why SVD is stable but PCA can be unstable.
- Why two mathematically identical formulas produce different numerical results.
In this chapter, we confront the reality that implemented algorithms are not their mathematical definitions—and understanding this gap is what gives engineers the power to prevent subtle numerical disasters.
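The last bullet above, two identical formulas giving different results, can be demonstrated in a few lines (the magnitudes are chosen to make the effect stark): in exact arithmetic, the order in which terms are added never matters, but in floating point it can swallow a sum entirely.

```python
# Two mathematically identical ways to compute 1e16 + 1,000,000.
big = 1e16
tiny = [1.0] * 1_000_000

# Left-to-right: near 1e16 the spacing between adjacent doubles is 2.0,
# so each individual +1.0 rounds back to the same value. The loop adds
# a million terms and changes nothing.
s1 = big
for t in tiny:
    s1 += t          # s1 remains exactly 1e16

# Grouped: sum the small terms first (exact, since they stay below 2^53),
# then add once. The result 1e16 + 1e6 is exactly representable.
s2 = big + sum(tiny)
```

In exact arithmetic s1 and s2 are equal; in floating point they differ by a million. Parallel reductions reorder sums exactly like this, which is one reason the same code can give different answers on different hardware.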
The Road Ahead
Chapter 3 is the heart of the book. It’s where mathematical ideas meet computational constraints, and where we learn to see numerical systems as living structures shaped by perturbations, layout, and precision.
We will not just learn definitions—we will understand the behavior of vectors and matrices inside real machines. By the end of this chapter, you’ll be able to look at any numerical system and immediately identify where its weak points are, why errors grow, and how to choose methods that remain stable under pressure.
Let’s begin with the foundation shared by all numerical computation: how we measure size and shape. This brings us naturally into the next section—an exploration of norms, why they matter, and how they secretly govern nearly everything that happens inside numerical algorithms.
Next: 3.1 Norms and Why They Matter
Shohei Shimoda
I organized and wrote up what I have learned here.