5.4 Practical Examples
Theory becomes intuition only when you see it run—when you watch the numbers shift, when pivots appear, when rounding errors accumulate, and when an algorithm either stabilizes a system or collapses it. In this final section of the LU chapter, we will walk through practical, concrete examples that demonstrate LU decomposition in action using NumPy and SciPy.
Each example reveals a different side of LU: stability, pivoting, performance, and diagnostic patterns that every engineer should recognize. By the end of this section, LU decomposition will no longer be an abstract transformation, but a tool you can reason about and wield confidently.
Example 1 — A Simple LU Decomposition
We begin with a clean, well-behaved matrix. No instability. No surprises. Just LU doing exactly what it should.
import numpy as np
from scipy.linalg import lu
A = np.array([
[4, 3],
[6, 3]
], dtype=float)
P, L, U = lu(A)
print("P =\n", P)
print("L =\n", L)
print("U =\n", U)
SciPy’s lu function exposes the full P, L, U matrices for clarity.
- P shows how rows were swapped.
- L stores the multipliers used during elimination.
- U contains the upper-triangular structure.
If you multiply P @ L @ U, you recover A exactly (up to floating-point noise). This confirms the decomposition and also demonstrates how pivoting shapes the factorization.
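You can confirm this in one line; a quick check using np.allclose:
# The permuted product of the factors should reproduce A
print(np.allclose(P @ L @ U, A))  # True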
Example 2 — Solving Ax = b Using LU
The true power of LU is not the factorization itself—it’s that it allows you to solve many right-hand sides efficiently. Once LU is computed, each new b costs only two triangular solves.
from scipy.linalg import lu_factor, lu_solve
A = np.array([
[2., 1., 1.],
[4., -6., 0.],
[-2., 7., 2.]
])
b = np.array([5., -2., 9.])
lu_piv = lu_factor(A)    # factor once: returns (combined LU matrix, pivot indices)
x = lu_solve(lu_piv, b)  # two triangular solves reuse the factorization
print("Solution x =", x)
Compared to computing np.linalg.inv(A) @ b, this approach is both:
- More accurate (no explicit matrix inverse)
- Much faster for repeated solves
In production systems—simulation pipelines, optimization solvers, Kalman filtering, ML model fitting—LU-based triangular solves are the workhorse.
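To make that concrete, here is a minimal sketch (reusing A and the imports above) that factors once and then solves several right-hand sides, each at triangular-solve cost:
# Factor once (O(n^3)); each subsequent solve is only O(n^2)
lu_piv = lu_factor(A)

for b in [np.array([5., -2., 9.]),
          np.array([1., 0., 0.]),
          np.array([0., 1., 0.])]:
    x = lu_solve(lu_piv, b)  # two triangular solves, no refactorization
    print(x)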
Example 3 — LU on a Nearly Singular Matrix
Now we push LU to its limit. Consider a matrix with rows that are almost linearly dependent:
A = np.array([
[1, 1, 1],
[1, 1.0000001, 1],
[1, 1, 1.0000001]
], dtype=float)
lu_piv = lu_factor(A)
Here the factorization itself often completes without complaint: the pivots are tiny but not exactly zero, so LAPACK has nothing to reject. The trouble surfaces when you use the factors. Solving with a matrix this close to singular, routines such as scipy.linalg.solve typically emit a LinAlgWarning along the lines of:
“Ill-conditioned matrix: result may not be accurate.”
This message is not decorative. It is the boundary marker where numerical linear algebra warns you:
“We can continue, but the results will be unreliable.”
LU decomposition does not fail because the matrix is exactly singular, but because the pivots become so small that they cannot resist rounding errors. This example highlights the importance of conditioning and pivoting, which you explored earlier in the chapter.
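Two quick diagnostics make the problem visible before it bites; a minimal sketch (reusing A, np, and lu_factor from above) that inspects the condition number and the pivots themselves:
# A huge condition number signals near-singularity
print("cond(A) =", np.linalg.cond(A))  # very large, on the order of 1e7 here

# The diagonal of the combined LU factor exposes the tiny pivots
lu_mat, piv = lu_factor(A)
print("pivots =", np.diag(lu_mat))     # entries near 1e-7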
Example 4 — Comparing LU with and Without Pivoting
To appreciate why partial pivoting is the default everywhere, consider the famous pathological matrix:
A = np.array([
[1e-20, 1],
[1, 1]
], dtype=float)
If you attempt elimination without pivoting:
# Manual elimination (no pivot)
m = A[1, 0] / A[0, 0] # catastrophic
The multiplier becomes:
m = 1 / 1e-20 = 1e20
This creates enormous rounding errors and completely destroys numerical stability.
With pivoting, LAPACK instead swaps the two rows first. This completely avoids the problem:
- The pivot becomes 1.0 instead of 1e-20.
- The elimination multiplier becomes 1e-20 instead of 1e20.
This single pivot radically improves accuracy, demonstrating why partial pivoting is the default in every real numerical library.
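The effect is easy to demonstrate end to end. Below is a minimal sketch (with b chosen so the true solution is close to [1, 1]) comparing manual no-pivot elimination against SciPy's pivoted solver:
import numpy as np
from scipy.linalg import solve

A = np.array([[1e-20, 1.],
              [1., 1.]])
b = np.array([1., 2.])

# No pivoting: eliminate using the tiny pivot 1e-20
m = A[1, 0] / A[0, 0]                 # m = 1e20
u22 = A[1, 1] - m * A[0, 1]           # 1 - 1e20 rounds to -1e20
y2 = b[1] - m * b[0]                  # 2 - 1e20 rounds to -1e20
x2 = y2 / u22                         # = 1.0
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]  # (1 - 1) / 1e-20 = 0.0, wrong!
print("no pivot:", [x1, x2])          # [0.0, 1.0]

# Partial pivoting (LAPACK) swaps the rows first and gets it right
print("pivoted :", solve(A, b))       # approximately [1.0, 1.0]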
Example 5 — Timings and Performance
Blocked LU in LAPACK is astonishingly fast because it uses Level-3 BLAS operations that exploit cache locality and vectorization.
import numpy as np
from scipy.linalg import lu_factor
import time
for n in [200, 500, 1000, 2000]:
    A = np.random.randn(n, n)
    start = time.perf_counter()  # perf_counter is preferred for timing
    lu_factor(A)
    print(f"n={n}, time={time.perf_counter() - start:.3f}s")
The results typically show near-cubic scaling, but with extremely efficient constants. Even a 2000×2000 matrix can factor in well under a second on modern hardware.
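You can estimate the scaling exponent yourself; a minimal sketch that collects the timings and fits t ~ c * n^p between successive sizes:
import math

sizes = [500, 1000, 2000]
times = []
for n in sizes:
    A = np.random.randn(n, n)
    start = time.perf_counter()
    lu_factor(A)
    times.append(time.perf_counter() - start)

# For t ~ c * n^p, the exponent is log(t2/t1) / log(n2/n1)
for (n1, t1), (n2, t2) in zip(zip(sizes, times), zip(sizes[1:], times[1:])):
    print(f"{n1}->{n2}: p = {math.log(t2 / t1) / math.log(n2 / n1):.2f}")
In practice the measured exponent often comes out somewhat below 3 at these sizes, because larger matrices keep the BLAS kernels better fed.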
This is why LU remains foundational for dense linear algebra despite its age—it maps perfectly onto the strengths of modern computer architecture.
What These Examples Teach Us
From these demonstrations, we can extract several recurring lessons:
- LU is deeply tied to stability. Pivoting is not optional—it is essential.
- LU is an algorithmic foundation. Solvers, optimizers, ML systems, and simulations rely on it.
- LU is a performance showcase. BLAS-optimized LU is one of the fastest factorizations available.
- LU is a diagnostic tool. Singular or ill-conditioned matrices immediately reveal themselves.
And perhaps most importantly:
LU decomposition is where mathematics, hardware, and engineering converge.
LU decomposition is powerful because it works for any non-singular matrix. But what if you know that your matrix is not arbitrary? What if it is symmetric or even symmetric positive definite (SPD)?
In that special case, a new world opens: faster algorithms, better stability, lower memory usage, and beautiful geometric interpretations. This leads us naturally to the next topic.
Continue to Chapter 6 — Cholesky Decomposition.