Numerical Linear Algebra: Understanding Matrices and Vectors Through Computation

Introduction

The first time I encountered LU decomposition, I stared at the formulas and wondered what any of it had to do with real work. It looked elegant—beautiful, even—but also distant. Something created for mathematicians, not for people building systems under deadlines.

For years, that impression stayed with me. Linear algebra felt like a subject you were supposed to learn, but never something you would actually use.

That illusion ended abruptly one night. I was deep in the middle of building an AI system—tight schedule, too much code, too little sleep. The model behaved unpredictably. Loss values exploded. Predictions drifted into nonsense. Every fix made the problem worse.

After hours of frustration, the truth surfaced: A single numerically unstable matrix operation had broken the entire system.

It wasn’t a bug in the business logic. It wasn’t a typo. It wasn’t even a conceptual mistake.

It was linear algebra—not the clean version from textbooks, but the messy, fragile version that lives inside real computers.

That night completely changed the way I looked at computation. I realized something important:

  • Linear algebra on paper is one thing.
  • Linear algebra inside a machine is something else entirely.

And understanding the second one is what separates models that work from models that fail.
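To make that gap concrete, here is a minimal NumPy sketch (my own illustration, using only standard numpy.linalg calls): on paper, a Hilbert matrix system has the exact solution x = (1, 1, ..., 1), but in float64 arithmetic the computed answer drifts far from it, because round-off error is amplified by the condition number.

```python
import numpy as np

# On paper: H x = b has the exact solution x = [1, 1, ..., 1].
# In the machine: float64 round-off is amplified by the condition number.
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix

x_true = np.ones(n)
b = H @ x_true

x_computed = np.linalg.solve(H, b)

print("condition number:", np.linalg.cond(H))              # roughly 1e16 for n = 12
print("max error:", np.max(np.abs(x_computed - x_true)))   # nowhere near machine epsilon
```

Nothing in that code is wrong. The mathematics on paper and the mathematics in the machine simply part ways once the matrix is ill-conditioned.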

From that point on, every system I touched—LLMs, optimization algorithms, simulations, recommendation models, numerical pipelines—was held together by the same foundation: numerical linear algebra. Yet most engineers never learn it.

We learn definitions, not conditioning.
We learn formulas, not stability.
We learn solutions, not solvers.

We’re taught what an algorithm is, but not how it survives inside floating-point arithmetic.
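Catastrophic cancellation, which we will meet properly in Chapter 10, is the simplest example of what "surviving floating-point arithmetic" means. A small sketch of my own (plain Python, nothing beyond the math module): the textbook quadratic formula is algebraically exact, yet when b² is much larger than 4ac, one root loses most of its digits, while an equivalent rearrangement keeps them.

```python
import math

# Roots of x^2 + b x + c = 0 with b^2 >> 4c.
b, c = 1e8, 1.0

# Textbook formula: -b + sqrt(b^2 - 4c) subtracts two nearly equal
# numbers, destroying almost all significant digits of the small root.
naive = (-b + math.sqrt(b * b - 4 * c)) / 2

# Stable rearrangement: compute the large-magnitude root first, then
# recover the small one from the product of the roots (x1 * x2 = c).
x1 = (-b - math.sqrt(b * b - 4 * c)) / 2
stable = c / x1

print("naive: ", naive)   # about -7.45e-09: the digits are garbage
print("stable:", stable)  # -1e-08: correct to full double precision
```

Same formula, same inputs, wildly different reliability.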

This book is my attempt to fix that. It is not a theoretical text, although we will respect the theory. It is not a pure coding book, although we will write plenty of code.

It is a book about how computation really works — about what happens when matrices meet hardware, when algorithms meet precision, when mathematics meets the limits of the real world.

You will not need to love mathematics to read this book. But you will come to appreciate how deeply and beautifully it shapes the systems we build.

By the end, my goal is simple:

  • You will understand why algorithms succeed—or fail.
  • You will know what tools to use when real systems break.
  • You will see linear algebra as a living engine powering AI, optimization, simulation, and modern computation.

If you’ve ever stared at a model wondering why it behaves irrationally, or watched a simulation spiral out of control, or debugged an inexplicable numerical error…
Then this book was written for you.

Let’s step into the world where mathematics becomes machinery — and where understanding matrices and vectors through computation unlocks the ability to build things that truly work.

Table of Contents

Part I — Foundations of Numerical Linear Algebra

1. Why Numerical Linear Algebra Matters

2. The Computational Model

3. Vectors and Matrices in Practice

Part II — Linear Systems & Factorizations

4. Solving Ax = b

5. LU Decomposition

6. Cholesky Decomposition

7. QR Decomposition

8. Eigenvalues and Eigenvectors

Part III — Advanced Topics for Real Systems

9. Singular Value Decomposition (SVD)

  • Geometric interpretation
  • Low-rank approximations
  • Noise reduction
  • SVD in embeddings and vector search

10. Numerical Stability & Conditioning

  • Stable vs unstable algorithms
  • Catastrophic cancellation
  • Condition number and its meaning
  • Failure case studies

11. Iterative Methods

  • Jacobi, Gauss–Seidel, SOR
  • Krylov methods (CG, GMRES)
  • Preconditioning
  • When iterative beats direct

12. Sparse Matrices

  • CSR, CSC, COO formats
  • Graph interpretations
  • Large-scale ML applications
  • The power of sparsity

Part IV — Applications in AI, ML, and Optimization

13. Linear Algebra Behind Machine Learning

  • Gradient descent as matrix operations
  • Normal equations vs QR vs SGD
  • Conditioning in ML problems

14. Linear Algebra in Deep Learning

  • Matrix multiplications everywhere
  • Initialization & normalization
  • Backprop as linear algebra

15. Linear Algebra in Large-Scale Systems

  • Vector search & embeddings
  • Solvers inside optimization libraries
  • Numerical aspects of RAG and LLMs

Part V — Building Reliable Numerical Software

16. Performance Considerations

  • BLAS levels
  • Cache-aware algorithms
  • GPU vs CPU
  • Parallelization basics

17. Testing Numerical Code

  • Unit testing for floating-point algorithms
  • Reproducibility
  • Randomness and seeding

18. Practical Numerical Recipes

  • When to use LU, QR, SVD
  • Handling unstable matrices
  • Common numerical error patterns
  • Debugging tips

Appendices

  • A. Linear Algebra Refresher
  • B. Python/NumPy Reference
  • C. LAPACK/BLAS Quick Guide
  • D. Glossary of Numerical Terms
  • E. Recommended Reading

2025-09-01

Shohei Shimoda

Here I have organized and written down what I have learned and what I know.