4.4 When Elimination Fails

Gaussian elimination is one of the most important and reliable numerical tools we have. Yet, despite its elegance and ubiquity, there are times when it breaks: not because the method is flawed, but because the matrix itself resists the process of elimination.

In this section, we explore the circumstances under which elimination fails or becomes dangerously unreliable. More importantly, we look at why these failures happen—not just algebraically, but numerically, structurally, and even philosophically.


1. Zero Pivots and Near-Zero Pivots

The most obvious form of failure is encountering a pivot equal to zero. In exact arithmetic, this is not a crisis: we simply swap rows, an operation that preserves the solution.

But in floating-point arithmetic, problems rarely present themselves so cleanly. More common is the near-zero pivot:

pivot ≈ 1e-16

This value might not be literally zero, but dividing by it is equivalent to dividing by noise. The resulting multipliers explode, errors accumulate, and soon the matrix becomes a numerical minefield.

Even pivoting cannot always save us. Sometimes an entire column is filled with tiny numbers, and partial pivoting simply chooses the “least bad” option—still tiny, still unstable.
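
To see the damage concretely, here is a small sketch in NumPy (a standard textbook example, not taken from this section): a pivot of about 1e-20 produces a multiplier of about 1e20, and the rest of the matrix is wiped out by rounding.

    import numpy as np

    eps = 1e-20
    A = np.array([[eps, 1.0],
                  [1.0, 1.0]])
    b = np.array([1.0, 2.0])        # true solution is approximately [1, 1]

    # Naive elimination with no row swap: divide by the tiny pivot.
    m = A[1, 0] / A[0, 0]           # multiplier ~ 1e20
    a22 = A[1, 1] - m * A[0, 1]     # 1 - 1e20: the "1" is lost to rounding
    b2  = b[1] - m * b[0]           # 2 - 1e20: the "2" is lost as well
    x1 = b2 / a22                   # comes out as exactly 1.0
    x0 = (b[0] - A[0, 1] * x1) / A[0, 0]
    print(x0, x1)                   # 0.0 1.0 -- x0 is completely wrong

    # A single row swap (partial pivoting) fixes this particular system.
    print(np.linalg.solve(A, b))    # ~ [1.0, 1.0]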


2. Ill-Conditioned Matrices

Some matrices fail elimination not because of any single pivot, but because the entire problem is ill-conditioned.

An ill-conditioned matrix is one in which:

  • a tiny perturbation in the input causes a huge change in the output
  • rows or columns are nearly dependent
  • information is smeared, redundant, or fragile

These matrices are mathematically solvable, but numerically treacherous. Even if elimination completes, the answer may not resemble the true solution.
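
The Hilbert matrix is the standard illustration (my example, not part of the original text). A rough rule of thumb: a condition number near 10^k puts roughly k of double precision's ~16 significant digits at risk.

    import numpy as np

    # Hilbert matrix: H[i, j] = 1 / (i + j + 1), famously ill-conditioned.
    n = 12
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

    x_true = np.ones(n)
    b = H @ x_true                     # system with a known exact solution

    x = np.linalg.solve(H, b)          # elimination completes without complaint
    print(np.linalg.cond(H))           # ~1e16: nearly every digit is at risk
    print(np.max(np.abs(x - x_true)))  # error vastly above machine epsilon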

Elimination didn’t fail—the problem did.


3. Catastrophic Cancellation

Sometimes elimination produces intermediate numbers that look legitimate—but are actually the difference of two nearly equal quantities. This is catastrophic cancellation:

1.00000000000012 - 1.00000000000008 = 0.00000000000004

The true result is small, but the way we computed it has discarded many digits of precision. Downstream operations amplify this error, and the system quietly falls apart.
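
Reproducing the subtraction above in double precision shows the loss directly:

    a = 1.00000000000012
    b = 1.00000000000008
    print(a - b)    # 3.9968028886505635e-14, not the exact 4e-14

    # Each operand carried ~16 significant digits; after cancellation the
    # computed difference agrees with the true value to only ~3 digits.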

Cancellation doesn’t stop elimination from continuing—but it corrupts the intermediate values so deeply that the final solution may be unusable.


4. Round-Off Error Accumulation

Gaussian elimination performs a large sequence of arithmetic operations—subtractions, multiplications, and divisions. Each step introduces tiny round-off errors.

On well-behaved matrices, these errors remain harmless. But when multipliers grow large, the errors compound from step to step; in the worst case, the growth factor of elimination increases exponentially with the size of the matrix, even under partial pivoting.

The algorithm finishes successfully, yet the answer wobbles far from reality.

This quiet failure is the most dangerous: the algorithm reports success, the code returns a result, but the result is wrong—sometimes catastrophically wrong.
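
Wilkinson's worst-case matrix (a classic demonstration; my example, not from the original text) shows this quiet failure: LU with partial pivoting finishes without any warning, yet the entries in the last column double at every elimination step, and the computed solution drifts far from the true one.

    import numpy as np

    # Wilkinson's matrix: 1 on the diagonal, -1 below it, 1 in the last
    # column. Partial pivoting performs no row swaps here, and the growth
    # factor reaches 2^(n-1).
    n = 60
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:, -1] = 1.0

    x_true = np.ones(n)
    b = A @ x_true
    x = np.linalg.solve(A, b)          # completes "successfully"
    print(np.max(np.abs(x - x_true)))  # typically enormous next to the
                                       # ~1e-15 a stable solve would give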


5. Structural Failure: Singular and Nearly Singular Matrices

Some matrices cannot be solved by elimination because they are structurally singular:

  • one row is a linear combination of others
  • one column is all zeros, or merely repeats information already in the others
  • rank deficiency makes Ax = b unsolvable or non-unique

At some step, elimination attempts to divide by zero, because the matrix simply is not invertible. In floating-point arithmetic, however, these structural defects rarely announce themselves as exact zeros; instead they appear as:

  • highly unstable pivots
  • exploding multipliers
  • large condition numbers

When elimination fails in this way, it’s not numerical—it’s mathematical.
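
A minimal NumPy sketch of the two faces of this failure (exact singularity is detected; near-singularity quietly is not):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])         # second row = 2 x first row: rank 1
    print(np.linalg.matrix_rank(A))    # 1

    try:
        np.linalg.solve(A, np.array([1.0, 1.0]))
    except np.linalg.LinAlgError as err:
        print(err)                     # "Singular matrix"

    # Perturb one entry by 1e-14 and the solver "succeeds" -- returning a
    # huge answer dominated entirely by the perturbation.
    A[1, 1] += 1e-14
    print(np.linalg.solve(A, np.array([1.0, 1.0])))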


6. Sparsity Destruction

In large-scale computation, the structure of the matrix is often as important as the numbers themselves. Sparse matrices—matrices with many zeros—are common in:

  • graphs
  • differential equations
  • machine learning

Elimination tends to destroy sparsity by creating fill-in: entries that were zero become nonzero as rows are combined. With a poor elimination ordering, the entire matrix can become dense, dramatically increasing computational cost and memory usage (see the sketch below).
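
The classic "arrowhead" matrix makes fill-in visible (my illustration, not from the original text): one dense row and column, diagonal elsewhere. Eliminating the dense corner first fills the whole matrix; reordering it to the end creates no fill at all.

    import numpy as np

    n = 6
    A = 2.0 * np.eye(n)                # arrowhead: dense first row and column
    A[0, :] = 1.0
    A[:, 0] = 1.0
    A[0, 0] = 2.0

    def eliminate_first_column(M):
        # One elimination step is enough to reveal the fill-in pattern.
        M = M.copy()
        for i in range(1, len(M)):
            M[i] -= (M[i, 0] / M[0, 0]) * M[0]
        return M

    print(np.count_nonzero(A), "->",
          np.count_nonzero(eliminate_first_column(A)))   # 16 -> 31: fill-in

    B = A[::-1, ::-1].copy()           # same system, dense part moved last
    print(np.count_nonzero(B), "->",
          np.count_nonzero(eliminate_first_column(B)))   # 16 -> 15: no fill

Sparse direct solvers spend much of their effort on exactly this reordering problem.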

This is not a numerical failure but a practical one. An algorithm that theoretically works may become unusable at scale.


7. When the Right Method Is the Wrong Method

Sometimes elimination “fails” because it shouldn’t have been used in the first place. Iterative methods such as CG, GMRES, or BiCGSTAB are far better suited for:

  • very large sparse systems
  • matrices with special structure
  • problems arising from PDEs or optimization

Direct elimination is a powerful tool, but not an all-purpose one.
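
As a sketch of the alternative, here is conjugate gradients via scipy.sparse.linalg.cg (my choice of implementation; the text names the methods, not a library) on a sparse symmetric positive-definite system with a million unknowns, solved without forming any dense factor:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # Tridiagonal, diagonally dominant SPD system: three nonzeros per row,
    # so any fill-in from direct elimination would be pure waste.
    n = 1_000_000
    A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1],
              shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = cg(A, b)                 # info == 0 means the iteration converged
    print(info, np.linalg.norm(A @ x - b))   # 0 and a small residual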


Elimination Fails — But Learning From Failure Leads to Better Tools

Understanding the failure modes of elimination is not an academic exercise. Every major advancement in numerical linear algebra—from pivoting to iterative methods to factorization strategies—was born from understanding why elimination breaks.

These failures tell us:

  • how algorithms behave inside floating-point arithmetic
  • why stability matters as much as correctness
  • when we must switch to more reliable tools

And this leads naturally to the next topic: LU decomposition.


Transition to Chapter 5: LU Decomposition

Gaussian elimination is powerful, but in practice we almost never use it directly. Instead, we reorganize elimination into a structured, reusable form known as LU decomposition.

LU turns elimination into a factorization that can:

  • solve many right-hand sides efficiently
  • integrate naturally with pivoting
  • serve as a foundation for advanced solvers

Now that we understand how elimination works—and how it fails—we’re ready to explore how LU decomposition builds on its strengths while avoiding many of its pitfalls.
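
As a small preview, here is the factor-once, solve-many pattern using SciPy's lu_factor and lu_solve (my choice of API for illustration):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    A = rng.random((500, 500))

    lu, piv = lu_factor(A)             # O(n^3) elimination work, done once
    for _ in range(3):                 # each right-hand side costs only O(n^2)
        b = rng.random(500)
        x = lu_solve((lu, piv), b)
        print(np.linalg.norm(A @ x - b))   # small residuals, no refactoring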

Next: 5.0 LU Decomposition — Why Factorization Matters

2025-09-21

Shohei Shimoda

I have organized and written down here what I have learned and know.