5.2 Numerical Pitfalls

LU decomposition is powerful and elegant, but beneath its polished exterior lies a fragile numerical ecosystem. The algorithm is not merely a sequence of row operations—it is a dance with floating-point arithmetic, and the slightest misstep can send errors cascading through the entire computation.

Many engineers learn LU in its symbolic form: rows eliminated cleanly, multipliers stored neatly, all guided by the assumption of perfect arithmetic. But real computers do not operate on exact numbers. They operate on finite representations, rounding at every step. And the combination of LU with floating-point arithmetic can produce errors that are subtle, explosive, or simply baffling.

To understand LU in practice, we must examine where things go wrong—and more importantly, why.


1. Growth Factors: The Silent Error Amplifier

One of the most important—and least discussed—concepts in LU decomposition is the growth factor: the ratio of the largest entry that appears anywhere during elimination to the largest entry of the original matrix. It describes how much the elements can grow as elimination proceeds.

In exact arithmetic, such growth is harmless. In floating-point arithmetic, it can be catastrophic.

Here’s the danger: even if A starts with small numbers, elimination may create large intermediate values inside U. Large numbers amplify rounding errors when they interact with tiny pivots or small values elsewhere.

A simple example is a matrix whose leading pivot is tiny, say ε ≈ 10⁻²⁰ in the top-left corner. Even though the matrix is perfectly well-defined and invertible, LU without pivoting produces intermediate values on the order of 1/ε, and the computed factorization becomes nearly useless.

Pivoting helps reduce growth, but cannot always eliminate it: even with partial pivoting, the worst-case growth factor is 2^(n−1). Some matrices are intrinsically hostile.
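To make this concrete, here is a small pure-Python sketch (function and variable names are my own) that runs Gaussian elimination with partial pivoting and measures the growth factor. It is applied to the classic worst-case family—ones on the diagonal, −1 below it, ones in the last column—for which the last column doubles at every elimination step, so the growth factor hits 2^(n−1) even with pivoting:

```python
def growth_factor(A):
    """Eliminate with partial pivoting; return max|U entry| / max|A entry|."""
    n = len(A)
    U = [row[:] for row in A]              # work on a copy
    max_a = max(abs(x) for row in A for x in row)
    for k in range(n):
        # Partial pivoting: move the largest entry in column k into the pivot slot.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # multiplier (stored in L in a full LU)
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    max_u = max(abs(x) for row in U for x in row)
    return max_u / max_a

# Worst-case matrix: 1 on the diagonal, -1 below it, 1 in the last column.
n = 8
A = [[1.0 if i == j or j == n - 1 else (-1.0 if i > j else 0.0)
      for j in range(n)] for i in range(n)]

print(growth_factor(A))   # 2^(n-1) = 128.0
```

For typical matrices the growth factor stays close to 1, which is why LU with partial pivoting works well in practice despite this exponential worst case.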


2. Tiny Pivots: The Multiplier Bomb

A pivot close to zero produces an extremely large multiplier. These multipliers become the entries of L, and they are reused in every subsequent triangular solve.

The result?

  • small rounding errors get magnified
  • solutions become unstable
  • you may get wildly incorrect results even though the algorithm “worked”

Pivoting replaces tiny pivots with more appropriate ones, but even with pivoting, numerical issues can remain—especially when the matrix is ill-conditioned.
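The following sketch (the 2×2 system and ε value are my own illustration) shows the multiplier bomb in float64 arithmetic. With a pivot of ε = 10⁻²⁰, the multiplier 1/ε wipes out the other entries during elimination, and the naive solve returns x₁ = 0 for a system whose true solution is essentially (1, 1). A single row swap fixes it:

```python
eps = 1e-20

# System:  eps*x1 + x2 = 1
#            x1   + x2 = 2        exact solution ~ (1, 1)

# Naive elimination: pivot on eps, multiplier m = 1/eps = 1e20.
m = 1.0 / eps
u22 = 1.0 - m * 1.0        # 1 - 1e20 rounds to -1e20: the "1" is lost
b2  = 2.0 - m * 1.0        # 2 - 1e20 rounds to -1e20: the "2" is lost
x2 = b2 / u22              # 1.0 (accidentally fine)
x1 = (1.0 - x2) / eps      # 0.0 -- catastrophically wrong

# Swapping the rows first (partial pivoting) makes the multiplier eps, not 1/eps:
x2p = (1.0 - eps * 2.0) / (1.0 - eps * 1.0)
x1p = 2.0 - x2p

print(x1, x2)     # 0.0 1.0   (naive: wrong)
print(x1p, x2p)   # 1.0 1.0   (pivoted: correct)
```

Note that both runs finish without any error or warning—the wrong answer looks exactly as plausible as the right one.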


3. Ill-Conditioning: When the Matrix Itself Is the Problem

Even a perfect LU algorithm cannot rescue a fundamentally bad matrix. If A is ill-conditioned, then:

  • small perturbations in A produce huge changes in x
  • floating-point rounding becomes magnified
  • no amount of pivoting guarantees accuracy

This is not a failure of LU. It is the nature of the underlying mathematical problem. LU merely reveals the instability that was already present.

Engineers sometimes blame the solver when the matrix itself is the issue. But when A has a large condition number, the correct interpretation is this:

The problem is unstable, not the method.
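The Hilbert matrix is the textbook example of this situation, and a few lines of NumPy make the point. In the sketch below (the size n = 12 is my choice), the residual of the LU-based solve is tiny—the solver did its job—yet the computed solution is far from the known exact answer, because the condition number has eaten nearly all sixteen decimal digits of float64:

```python
import numpy as np

n = 12
# Hilbert matrix: H[i, j] = 1 / (i + j + 1) -- the textbook ill-conditioned matrix.
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)

print(np.linalg.cond(H))            # enormous: near the limits of float64

# Build b so that the exact solution is the vector of ones, then solve.
x_true = np.ones(n)
b = H @ x_true
x = np.linalg.solve(H, b)           # LAPACK LU with partial pivoting

print(np.linalg.norm(H @ x - b))    # residual: tiny (the solve is backward stable)
print(np.max(np.abs(x - x_true)))   # forward error: large (the problem is unstable)
```

A rough rule of thumb: with condition number κ, expect to lose about log₁₀(κ) decimal digits of accuracy in the solution.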


4. Loss of Significance in Forward/Backward Substitution

Even after we compute L and U, solving the system can introduce new numerical errors.

Forward substitution issues:

  • subtractions between nearly equal numbers cause catastrophic cancellation
  • large multipliers in L amplify rounding errors

Backward substitution issues:

  • dividing by small pivots adds instability
  • error propagation increases as you move up the matrix

LU is efficient, but it can be fragile when used blindly.
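For reference, here is what the two substitution passes look like in plain Python (a sketch with my own naming; production code would use the BLAS routines behind NumPy instead). The comments mark exactly where the pitfalls above enter:

```python
def forward_sub(L, b):
    """Solve L y = b for lower-triangular L, top row downward."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # This subtraction can cancel catastrophically when the terms are
        # nearly equal, and large entries of L amplify errors already in y.
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper-triangular U, bottom row upward."""
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]   # a tiny U[i][i] magnifies the error in s
    return x

# L U = A for A = [[2, 1], [4, 5]]; solve A x = [3, 9], whose solution is (1, 1).
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[2.0, 1.0], [0.0, 3.0]]
y = forward_sub(L, [3.0, 9.0])   # [3.0, 3.0]
x = backward_sub(U, y)           # [1.0, 1.0]
```

Each x[i] depends on every x[j] computed after it, which is why errors compound as backward substitution moves up the matrix.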


5. The “Works But Wrong” Problem

One of the most dangerous things about numerical LU is that it rarely crashes. It produces a result even when the result is meaningless.

The red flags:

  • residual ||A x − b|| is unexpectedly large
  • solution changes drastically with tiny perturbations
  • intermediate values inside L or U explode in magnitude

A solver that runs without error messages does not guarantee correct computation. Silent failure is the hardest to detect—and the most damaging.
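The cheapest defense is to always check the scaled residual after a solve. The sketch below (the scaling formula is a common choice, not the only one) compares it against machine precision; for a backward-stable solver it should land within a modest multiple of n·ε regardless of conditioning:

```python
import numpy as np

def relative_residual(A, x, b):
    """Scaled residual; a backward-stable solve keeps this near n * eps."""
    return (np.linalg.norm(A @ x - b)
            / (np.linalg.norm(A) * np.linalg.norm(x) + np.linalg.norm(b)))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = np.linalg.solve(A, b)

r = relative_residual(A, x, b)
print(r, 100 * np.finfo(float).eps)   # r should be comparable to n * eps
```

Keep in mind the earlier caveat: a small residual certifies backward stability, not forward accuracy—on an ill-conditioned matrix the residual can be tiny while x is far from the true solution.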


6. Stability vs Speed: LU Is Fast, but Not Always Safe

In many scientific and machine-learning workloads, LU is chosen for speed. But QR or SVD may be more stable. For example:

  • QR is more robust for least squares
  • SVD handles rank-deficiency gracefully
  • iterative solvers can outperform LU on large sparse systems

Understanding LU’s numerical traps tells you when not to use it—even if it is available and fast.
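One concrete reason QR and SVD win for least squares: solving the normal equations AᵀA x = Aᵀb with LU squares the condition number, while QR- or SVD-based methods work on A directly. The sketch below (the Vandermonde setup is my own illustration) makes the squaring visible:

```python
import numpy as np

# An ill-conditioned least-squares design matrix: a high-degree Vandermonde.
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 12)

print(np.linalg.cond(A))          # already large
print(np.linalg.cond(A.T @ A))    # roughly squared: this is what LU on the
                                  # normal equations has to contend with

# SVD-based lstsq works on A directly and avoids the squaring entirely.
y = np.sin(2 * np.pi * t)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

If cond(A) is around 10⁸, the normal equations sit near 10¹⁶—at which point LU in float64 has essentially no correct digits left.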


7. Putting It All Together

LU decomposition is a balancing act between:

  • the matrix’s inherent conditioning
  • the stability offered by pivoting
  • the precision limits of floating-point arithmetic

This interplay determines whether the final solution is stable or unreliable.

In practice, engineers must learn not only how to perform LU, but how to interpret it:

  • Is the pivot sequence reasonable?
  • Did the multipliers stay small?
  • Is the residual consistent with the expected precision?
  • Should we consider QR or SVD instead?
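Such a checklist can be folded into a small post-solve diagnostic. The sketch below is one possible shape—the function name and the thresholds are my own rough choices, not a standard recipe:

```python
import numpy as np

def lu_solve_diagnostics(A, b):
    """Solve A x = b and report rough health indicators (ad hoc thresholds)."""
    x = np.linalg.solve(A, b)
    n = len(b)
    eps = np.finfo(float).eps
    rel_res = (np.linalg.norm(A @ x - b)
               / (np.linalg.norm(A) * np.linalg.norm(x) + np.linalg.norm(b)))
    cond = np.linalg.cond(A)
    return {
        "x": x,
        # Is the residual consistent with backward stability at this precision?
        "residual_ok": rel_res < 100 * n * eps,
        # Does the condition number leave any correct digits in the answer?
        "well_conditioned": cond < 1.0 / (n * eps),
        "est_correct_digits": max(0.0, -np.log10(cond * eps)),
    }

report = lu_solve_diagnostics(np.array([[4.0, 1.0], [1.0, 3.0]]),
                              np.array([1.0, 2.0]))
```

Note that `np.linalg.cond` costs far more than the solve itself; on large systems one would substitute a cheap condition estimate, as LAPACK's expert drivers do.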

Numerical pitfalls are not merely technical details; they define whether an algorithm produces trustworthy results.


Where We Go Next

Now that we’ve examined the traps lurking inside LU decomposition, the natural question becomes: How do real numerical libraries handle all this?

Modern systems like NumPy and LAPACK implement LU expertly, with pivoting logic, optimized memory access, and carefully engineered routines that avoid the worst numerical disasters.

To understand LU as it exists in practice—not in theory—we now turn to these industrial-strength implementations.

Let’s continue with 5.3 LU in NumPy and LAPACK.

2025-09-24

Shohei Shimoda

This is my attempt to organize and write down what I have learned on this topic.