R Development Notes


A total of 17 articles found.

8.1 Power Method and Inverse Iteration

A clear, practical, and intuitive explanation of the power method and inverse iteration for computing eigenvalues. Covers eigenvalue dominance, repeated matrix–vector multiplication, shifted inverse iteration, and real applications in ML, PCA, and large-scale systems. Smoothly introduces the Rayleigh quotient.
2025-10-07

7.4 Why QR Is Often Preferred

An in-depth, accessible explanation of why QR decomposition is often the preferred method for solving least squares problems and ensuring numerical stability. Covers orthogonality, rank deficiency, Householder reflections, and the broader role of QR in scientific computing, with a smooth transition into eigenvalues and eigenvectors.
2025-10-05

7.3 Least Squares Problems

A clear, intuitive, book-length explanation of least squares problems, including the geometry, normal equations, QR decomposition, and SVD. Learn why least-squares solutions are central to ML and data science, and why QR provides a stable foundation for practical algorithms.
2025-10-04

7.2 Householder Reflections

A clear, intuitive, book-length explanation of Householder reflections and why they form the foundation of modern QR decomposition. Learn how reflections overcome the numerical instability of Gram–Schmidt and enable stable least-squares solutions across ML, statistics, and scientific computing.
2025-10-03

7.1 Gram–Schmidt and Modified GS

A clear, practical, book-length explanation of Gram–Schmidt and Modified Gram–Schmidt, why classical GS fails in floating-point arithmetic, how MGS improves stability, and why real numerical systems eventually rely on Householder reflections. Ideal for ML engineers, data scientists, and numerical computing practitioners.
2025-10-02

Chapter 7 — QR Decomposition

A deep, intuitive introduction to QR decomposition, explaining why orthogonality and numerical stability make QR essential for least squares, regression, kernel methods, and large-scale computation. Covers Gram–Schmidt, Modified GS, Householder reflections, and why QR is often preferred over LU and normal equations.
2025-10-01

6.3 Applications in ML, Statistics, and Kernel Methods

A deep, intuitive explanation of how Cholesky decomposition powers real machine learning and statistical systems—from Gaussian processes and Bayesian inference to kernel methods, Kalman filters, covariance modeling, and quadratic optimization. Understand why Cholesky is essential for stability, speed, and large-scale computation.
2025-09-30

5.2 Numerical Pitfalls

A deep, accessible explanation of the numerical pitfalls in LU decomposition. Learn about growth factors, tiny pivots, rounding errors, catastrophic cancellation, ill-conditioning, and why LU may silently produce incorrect results without proper pivoting and numerical care.
2025-09-24

4.4 When Elimination Fails

An in-depth, practical explanation of why Gaussian elimination fails in real numerical systems—covering zero pivots, instability, ill-conditioning, catastrophic cancellation, and singular matrices—and how these failures motivate the move to LU decomposition.
2025-09-21

4.0 Solving Ax = b

A deep, accessible introduction to solving linear systems in numerical computing. Learn why Ax = b sits at the center of AI, ML, optimization, and simulation, and explore Gaussian elimination, pivoting, row operations, and failure modes through intuitive explanations.
2025-09-17

3.4 Exact Algorithms vs Implemented Algorithms

Learn why textbook algorithms differ from the versions that actually run on computers. This chapter explains rounding, floating-point errors, instability, algorithmic reformulation, and why mathematically equivalent methods behave differently in AI, ML, and scientific computing.
2025-09-16

3.3 Conditioning of Problems vs Stability of Algorithms

Learn the critical difference between problem conditioning and algorithmic stability in numerical computing. Understand why some systems fail even with correct code, and how sensitivity, condition numbers, and numerical stability determine the reliability of AI, ML, and scientific algorithms.
2025-09-15

2.3 Overflow, Underflow, Loss of Significance

A clear and practical guide to overflow, underflow, and loss of significance in floating-point arithmetic. Learn how numerical computations break, why these failures occur, and how they impact AI, optimization, and scientific computing.
2025-09-10

Chapter 2 — The Computational Model

An introduction to the computational model behind numerical linear algebra. Explains why mathematical algorithms fail inside real computers, how floating-point arithmetic shapes computation, and why understanding precision, rounding, overflow, and memory layout is essential for AI, ML, and scientific computing.
2025-09-07

1.1 What Breaks Real AI Systems

Many AI failures come from numerical instability, not from the algorithms themselves. This guide explains what actually breaks AI systems and why numerical linear algebra matters.
2025-09-03

1.0 Why Numerical Linear Algebra Matters

A deep, practical introduction to why numerical linear algebra matters in real AI, ML, and optimization systems. Learn how stability, conditioning, and floating-point behavior impact models.
2025-09-02

Numerical Linear Algebra: Understanding Matrices and Vectors Through Computation

Learn how linear algebra actually works inside real computers. A practical guide to LU, QR, SVD, stability, conditioning, and the numerical foundations behind modern AI and machine learning.
2025-09-01