4.1 Gaussian Elimination Revisited
Gaussian elimination is one of those algorithms everyone learns early—long before most people understand why it matters. On paper, it’s clean: eliminate entries below the diagonal, turn the matrix into an upper-triangular form, then back-substitute to recover the solution. Students often leave with the impression that elimination is a solved problem, a straightforward procedure that always behaves predictably.
But in numerical computing, Gaussian elimination is not a sterile classroom exercise. It’s a delicate process performed inside floating-point arithmetic, where every subtraction, pivot selection, and intermediate step has consequences. The symbolic method you learned in school and the version executed on a real machine are fundamentally different creatures.
The Hidden Fragility of Elimination
Let’s start with the version most people remember: forward elimination. You take a pivot, eliminate the entries beneath it, move to the next column, and repeat. Conceptually simple. But if your pivot—the value you divide by—is tiny, everything downstream becomes numerically unstable. A division by a small number amplifies floating-point noise. In a symbolic world, this barely registers. In floating-point arithmetic, it is catastrophic.
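To see this concretely, here is a minimal NumPy sketch. The 2×2 system and the value 1e-20 are illustrative choices, not taken from any particular reference: eliminating with the tiny pivot destroys the answer, while a library solver that pivots internally recovers it.

```python
import numpy as np

# A 2x2 system whose exact solution is approximately x = y = 1.
# The (1,1) entry is a tiny pivot; eliminating without swapping rows
# forces a division by 1e-20 and wipes out the useful digits.
eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Naive elimination: use the tiny pivot as-is.
m = A[1, 0] / A[0, 0]          # enormous multiplier, about 1e20
u22 = A[1, 1] - m * A[0, 1]    # catastrophic cancellation: 1 - 1e20
c2 = b[1] - m * b[0]
y = c2 / u22
x = (b[0] - A[0, 1] * y) / A[0, 0]
print("naive elimination:", x, y)            # x comes out badly wrong (0.0)

# A library solver, which pivots internally, recovers the right answer.
print("np.linalg.solve:  ", np.linalg.solve(A, b))   # approx [1, 1]
```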
This is why elimination often performs beautifully on well-behaved matrices and collapses on badly scaled or nearly singular ones. Even matrices that look innocent on paper can cause numerical disaster when executed in finite precision.
For example, consider the classic Hilbert matrix. Every textbook mentions its ill-conditioning. But the real surprise is this: even though Gaussian elimination symbolically solves it perfectly, the floating-point version loses accuracy almost instantly. The algorithm is correct—but the arithmetic environment is hostile.
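A short experiment makes this tangible. The sketch below uses scipy.linalg.hilbert and an arbitrary size n = 12 as illustrative choices: it builds a right-hand side whose exact solution is all ones, solves in double precision, and shows that the residual stays tiny while the computed solution loses most of its digits.

```python
import numpy as np
from scipy.linalg import hilbert

# Build a modest-sized Hilbert matrix and a right-hand side whose exact
# solution is the vector of ones, then see how many digits survive.
n = 12
H = hilbert(n)
x_true = np.ones(n)
b = H @ x_true

x = np.linalg.solve(H, b)          # LU with partial pivoting under the hood
print("condition number:", np.linalg.cond(H))
print("max error in x:  ", np.max(np.abs(x - x_true)))   # large
print("max residual:    ", np.max(np.abs(H @ x - b)))    # tiny
# The residual is at rounding level, but the error in x is large:
# the algorithm is fine, the problem itself is ill-conditioned.
```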
This is the recurring theme of numerical linear algebra:
An algorithm can be mathematically correct yet numerically useless.
Gaussian elimination, without protection, sits exactly in that danger zone.
Elimination as a Series of Transformations
To truly understand elimination numerically, we have to think of it not as a mechanical table of operations, but as a sequence of matrix transformations. Each elimination step corresponds to multiplying by a lower-triangular matrix that zeros out a column below the pivot. These transformations accumulate. Their structure determines how rounding errors propagate.
This is why some elimination paths are stable and others unstable. It’s not the arithmetic itself—it’s the sequence of transformations you force the matrix through. Multiplying by a poorly conditioned transformation magnifies errors. Multiplying by a well-behaved one keeps errors contained.
In other words, Gaussian elimination is fundamentally a question of how the matrix is transformed, not just whether the steps are correct.
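The following NumPy sketch makes this explicit for an arbitrary 3×3 example: each elimination step is a multiplication by a unit lower-triangular "elementary" matrix, and the inverses of those matrices assemble into the L of an LU factorization.

```python
import numpy as np

# An arbitrary illustrative matrix.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

# Step 1: zero out column 1 below the pivot.
m21 = A[1, 0] / A[0, 0]
m31 = A[2, 0] / A[0, 0]
M1 = np.eye(3)
M1[1, 0] = -m21          # encodes R2 <- R2 - m21*R1
M1[2, 0] = -m31          # encodes R3 <- R3 - m31*R1
A1 = M1 @ A              # first column below the pivot is now zero

# Step 2: zero out column 2 below the pivot.
m32 = A1[2, 1] / A1[1, 1]
M2 = np.eye(3)
M2[2, 1] = -m32
U = M2 @ A1              # upper triangular

# The inverses of the elementary matrices assemble into L, so A = L @ U.
L = np.linalg.inv(M1) @ np.linalg.inv(M2)
print(np.allclose(A, L @ U))   # True
```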
The Cost of a Bad Pivot
The most infamous failure mode is selecting a pivot that is too small. Here’s what happens numerically:
- You divide by a small number ⇒ rounding error grows.
- You subtract nearly equal values ⇒ catastrophic cancellation occurs.
- You eliminate based on contaminated numbers ⇒ the entire factorization becomes inaccurate.
Symbolically, nothing is wrong. Numerically, everything is. This is why no serious implementation of Gaussian elimination uses the naïve classroom variant.
Instead, practical implementations use pivoting: swapping rows to ensure the pivot is “large enough” to avoid instability. We will explore pivoting in detail later, but the important point here is this:
Gaussian elimination without pivoting is almost never used in real software.
It is mathematically elegant but numerically brittle.
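For illustration, here is a compact teaching sketch of elimination with partial pivoting in NumPy. It is not how production code does it (libraries call LAPACK), but it shows the row swap that keeps each pivot as large as possible, and it handles the tiny-pivot system from earlier gracefully.

```python
import numpy as np

def solve_with_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting; a teaching sketch,
    not a replacement for np.linalg.solve / LAPACK."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    for k in range(n - 1):
        # Partial pivoting: pick the largest entry (in magnitude) in
        # column k at or below the diagonal, and swap that row up.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]

        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back-substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The tiny-pivot system from earlier is now handled without drama.
A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(solve_with_partial_pivoting(A, b))   # approx [1, 1]
```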
Growth Factor and Numerical Blow-Up
As elimination proceeds, intermediate values can grow far larger than the original entries of the matrix. This “growth factor” determines how much rounding error is amplified. Even if the final solution is modest, the computation along the way may involve very large intermediate numbers, and a rounding error that is negligible relative to those intermediates can be enormous relative to the final answer.
A stable elimination path keeps the growth factor small. An unstable one lets numbers explode. This is another reason pivoting is essential: it controls growth.
Understanding this helps explain why some matrices produce wildly inaccurate solutions, while others—seemingly similar—cause no problems at all. The issue is not the final solution but the internal journey the algorithm takes to get there.
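A rough way to observe growth is to compare the largest entry of U, from LU with partial pivoting, with the largest entry of A. The sketch below does this with scipy.linalg.lu; the random matrix and the classic worst-case matrix (which makes entries double at every step, reaching roughly 2^(n-1)) are standard illustrative examples, and the sizes chosen here are arbitrary.

```python
import numpy as np
from scipy.linalg import lu

def growth_factor(A):
    """Rough growth estimate: largest entry of U versus largest entry of A."""
    _, _, U = lu(A)                   # P, L, U from LU with partial pivoting
    return np.abs(U).max() / np.abs(A).max()

rng = np.random.default_rng(0)
A_random = rng.standard_normal((100, 100))
print("random matrix:", growth_factor(A_random))   # typically small

# Classic worst case for partial pivoting: 1 on the diagonal, -1 below it,
# 1 in the last column. Entries in the last column double at every step.
n = 30
A_worst = np.tril(-np.ones((n, n)), -1) + np.eye(n)
A_worst[:, -1] = 1.0
print("worst case:   ", growth_factor(A_worst))     # about 2**(n-1)
```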
Back-Substitution: Reliable but Not Innocent
Back-substitution is typically considered numerically safe. And generally, it is—compared to forward elimination, it introduces far less rounding error. But even back-substitution can go wrong if earlier steps create an upper-triangular matrix with poor scaling. If the diagonal entries vary dramatically in magnitude, some unknowns are computed by dividing by tiny pivots, magnifying error yet again.
The moral is straightforward: back-substitution inherits whatever trouble forward elimination leaves behind. If the first half of elimination is unstable, the second half cannot rescue the computation.
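A tiny experiment shows this inheritance. In the sketch below, the triangular system and the size of the perturbation are illustrative assumptions: a rounding-sized error of about 1e-16 left behind by elimination becomes a visible error in the solution once back-substitution divides by a tiny diagonal entry.

```python
import numpy as np

def back_substitute(U, c):
    """Plain back-substitution on an upper-triangular system U x = c."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[1.0, 1.0],
              [0.0, 1e-14]])        # badly scaled diagonal left by elimination
c = U @ np.array([1.0, 1.0])        # exact solution is [1, 1]
c[1] += 1e-16                       # a rounding-sized error from earlier steps
print(back_substitute(U, c))        # roughly [0.99, 1.01]: the 1e-16 error
                                    # is amplified by about 1/1e-14
```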
Why Gaussian Elimination Still Matters
Given all this fragility, you might wonder why Gaussian elimination is still foundational. Shouldn’t we avoid it entirely?
The answer is no—because with the right protections, elimination becomes one of the fastest, most powerful tools in numerical computing. The variant used in modern libraries (LAPACK, NumPy, MATLAB, cuSOLVER) is:
LU decomposition with partial pivoting (LU + PP)
This protected form is stable for most real-world problems and fast enough for large-scale computing. It is the default solver in many machine learning and scientific computing libraries. What breaks elimination is not the algorithm itself, but the naïve way we often learn it.
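In practice this means you rarely write elimination yourself: you ask the library for the pivoted LU factorization once and reuse it for as many right-hand sides as needed. A minimal SciPy sketch, using an arbitrary random system for illustration, looks like this:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

lu_piv = lu_factor(A)               # LAPACK getrf: LU with partial pivoting
x = lu_solve(lu_piv, b)             # forward/back substitution with the factors
print(np.max(np.abs(A @ x - b)))    # residual is tiny, near rounding level
```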
To master numerical linear algebra, we must bridge these two views:
- the classroom algorithm (symbolic, clean, exact)
- the computational algorithm (approximate, floating-point, error-aware)
This chapter is about understanding why these views diverge—and how to use the real computational version safely and confidently.
Transition to 4.2 Row Operations and Elementary Matrices
To move forward, we need to shift perspective. Instead of thinking about elimination as a table of arithmetic steps, we must view it as:
a series of structured matrix transformations.
Those transformations—row operations—can be expressed as elementary matrices. And these matrices reveal why pivoting works, why elimination produces LU, and how errors propagate through the factorization.
So before we go deeper into pivoting strategies or decomposition, let’s rebuild our foundation and explore the true structure beneath the algorithm:
4.2 Row Operations and Elementary Matrices