R 開発ノート


A total of 10 articles were found.

8.3 The QR Algorithm (High-Level Intuition)

A clear, intuitive, and comprehensive explanation of the QR algorithm—how repeated QR factorizations reveal eigenvalues, why orthogonal transformations provide stability, and how shifts and Hessenberg reductions make the method efficient. Ends with a smooth bridge to PCA and spectral methods.
2025-10-09
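
A minimal sketch of the idea in base R, using only qr(): factor, multiply the factors back in reverse order, repeat. The 3×3 symmetric matrix is an arbitrary example; a real implementation would first reduce to Hessenberg form and apply shifts, as the article explains.

```r
# Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k is similar
# to A_k, so eigenvalues are preserved; for this symmetric example the
# iterates converge to a diagonal matrix of eigenvalues.
qr_step <- function(A) {
  f <- qr(A)            # factor A = Q R
  qr.R(f) %*% qr.Q(f)   # reassemble in reverse order: R Q
}

A0 <- matrix(c(4, 1, 0,
               1, 3, 1,
               0, 1, 2), nrow = 3, byrow = TRUE)
A <- A0
for (k in 1:200) A <- qr_step(A)

round(diag(A), 6)   # approximate eigenvalues on the diagonal
eigen(A0)$values    # LAPACK's answer, for comparison
```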

8.2 Rayleigh Quotient

An intuitive and comprehensive explanation of the Rayleigh quotient, why it estimates eigenvalues so accurately, how it connects to the power method and inverse iteration, and why it forms the foundation of modern eigenvalue algorithms. Ends with a natural transition to the QR algorithm.
2025-10-08
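
The accuracy claim is easy to see numerically. A small sketch in base R (the matrix and perturbation size are arbitrary choices):

```r
# Rayleigh quotient r(x) = (x' A x) / (x' x). For symmetric A, an
# O(eps) error in the eigenvector costs only O(eps^2) in the estimate.
rayleigh <- function(A, x) drop(crossprod(x, A %*% x) / crossprod(x))

A <- matrix(c(2, 1,
              1, 3), nrow = 2, byrow = TRUE)
v <- eigen(A)$vectors[, 1]           # exact eigenvector
x <- v + 1e-4 * rnorm(2)             # perturb by about 1e-4
rayleigh(A, x) - eigen(A)$values[1]  # error around 1e-8, not 1e-4
```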

8.1 Power Method and Inverse Iteration

A clear, practical, and intuitive explanation of the power method and inverse iteration for computing eigenvalues. Covers dominance, repeated multiplication, shifted inverse iteration, and real applications in ML, PCA, and large-scale systems. Smoothly introduces the Rayleigh quotient.
2025-10-07
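
Both iterations fit in a few lines of base R. A hedged sketch with a random start vector and fixed iteration counts (a real implementation would test for convergence instead):

```r
# Power method: repeated multiplication by A amplifies the dominant
# eigenvector; renormalising each step avoids overflow.
power_method <- function(A, iters = 100) {
  x <- rnorm(nrow(A))
  for (k in seq_len(iters)) {
    x <- A %*% x
    x <- x / sqrt(sum(x^2))
  }
  drop(crossprod(x, A %*% x))   # Rayleigh quotient of the iterate
}

# Shifted inverse iteration: powers of (A - mu I)^{-1} amplify the
# eigenvector whose eigenvalue lies closest to the shift mu.
inverse_iteration <- function(A, mu, iters = 25) {
  x <- rnorm(nrow(A))
  for (k in seq_len(iters)) {
    x <- solve(A - mu * diag(nrow(A)), x)
    x <- x / sqrt(sum(x^2))
  }
  drop(crossprod(x, A %*% x))
}

A <- matrix(c(4, 1, 0,
              1, 3, 1,
              0, 1, 2), nrow = 3, byrow = TRUE)
power_method(A)            # approaches the largest eigenvalue, 3 + sqrt(3)
inverse_iteration(A, 1.0)  # approaches the eigenvalue nearest 1.0, 3 - sqrt(3)
```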

Chapter 8 — Eigenvalues and Eigenvectors

A deep, intuitive introduction to eigenvalues and eigenvectors for engineers and practitioners. Explains why spectral methods matter, where they appear in real systems, and how modern numerical algorithms compute eigenvalues efficiently. Leads naturally into the power method and inverse iteration.
2025-10-06
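
For readers who want to touch the definition before diving in, base R's eigen() (a LAPACK wrapper) makes the defining relation A v = λ v checkable in a few lines:

```r
# Verify the defining relation A v = lambda v for one eigenpair.
A <- matrix(c(2, 1,
              1, 3), nrow = 2, byrow = TRUE)
e <- eigen(A)
lambda <- e$values[1]
v <- e$vectors[, 1]
A %*% v - lambda * v   # essentially the zero vector, up to rounding
```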

6.3 Applications in ML, Statistics, and Kernel Methods

A deep, intuitive explanation of how Cholesky decomposition powers real machine learning and statistical systems—from Gaussian processes and Bayesian inference to kernel methods, Kalman filters, covariance modeling, and quadratic optimization. Understand why Cholesky is essential for stability, speed, and large-scale computation.
2025-09-30
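
To make the Gaussian-process connection concrete, here is a small sketch: an RBF kernel matrix is factored once with chol(), and the system K α = y is then solved with two cheap triangular solves. The kernel, lengthscale, and jitter below are illustrative choices, not prescriptions from the article.

```r
# Solve K alpha = y via Cholesky: the core step of GP regression.
rbf <- function(x, y, ell = 0.2) exp(-outer(x, y, "-")^2 / (2 * ell^2))

x <- seq(0, 1, length.out = 50)
K <- rbf(x, x) + 1e-6 * diag(length(x))  # jitter keeps K positive definite
y <- sin(2 * pi * x)

U <- chol(K)                                  # K = U'U, U upper triangular
alpha <- backsolve(U, forwardsolve(t(U), y))  # two O(n^2) triangular solves
max(abs(K %*% alpha - y))                     # tiny residual

2 * sum(log(diag(U)))  # log det K, essentially free once U is known
```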

6.2 Memory Advantages

A detailed, intuitive explanation of why Cholesky decomposition uses half the memory of LU decomposition, how memory locality accelerates computation, and why this efficiency makes Cholesky essential for large-scale machine learning, kernel methods, and statistical modeling.
2025-09-29
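
The headline numbers are easy to reproduce with back-of-the-envelope arithmetic; the matrix size below is an illustrative assumption, and 8-byte doubles are assumed throughout:

```r
# LU effectively stores n^2 entries (L and U overwrite A), while
# Cholesky needs a single triangular factor: n(n+1)/2 entries.
n <- 10000
bytes_lu   <- n^2 * 8
bytes_chol <- n * (n + 1) / 2 * 8
c(LU_GiB = bytes_lu, Cholesky_GiB = bytes_chol) / 2^30
# ~0.75 GiB vs ~0.37 GiB; the flop count is halved too (n^3/3 vs 2n^3/3)
```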

Chapter 6 — Cholesky Decomposition

A deep, narrative-driven introduction to Cholesky decomposition, explaining why symmetric positive definite matrices dominate real computation. Covers structure, stability, performance, and the role of Cholesky in ML, statistics, and optimization.
2025-09-27
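
The factorization itself is a single call in base R (the 2×2 matrix is an arbitrary symmetric positive definite example):

```r
# For symmetric positive definite A, chol() returns the upper
# triangular U with A = U'U (equivalently A = L L' with L = t(U)).
A <- matrix(c(4, 2,
              2, 3), nrow = 2, byrow = TRUE)
U <- chol(A)
all.equal(t(U) %*% U, A)   # TRUE, up to rounding
```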

3.4 Exact Algorithms vs Implemented Algorithms

Learn why textbook algorithms differ from the versions that actually run on computers. This article explains rounding, floating-point errors, instability, algorithmic reformulation, and why mathematically equivalent methods behave differently in AI, ML, and scientific computing.
2025-09-16
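
A classic instance of the gap between textbook and implemented algorithms, sketched in R with synthetic data: two algebraically identical variance formulas, only one of which survives floating point.

```r
# Mathematically identical, numerically very different: the one-pass
# "sum of squares" formula cancels catastrophically when the mean is
# large relative to the spread.
set.seed(1)
x <- 1e8 + rnorm(1000)   # huge mean, unit spread
n <- length(x)
naive  <- (sum(x^2) - n * mean(x)^2) / (n - 1)  # textbook one-pass form
stable <- sum((x - mean(x))^2) / (n - 1)        # shifted two-pass form
c(naive = naive, stable = stable, builtin = var(x))
# naive is wildly wrong (it can even come out negative); stable matches var()
```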

Chapter 2 — The Computational Model

An introduction to the computational model behind numerical linear algebra. Explains why mathematical algorithms fail inside real computers, how floating-point arithmetic shapes computation, and why understanding precision, rounding, overflow, and memory layout is essential for AI, ML, and scientific computing.
2025-09-07
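
A few one-liners preview the territory; all of the behavior below is standard IEEE 754 double precision, as exposed by R:

```r
0.1 + 0.2 == 0.3                    # FALSE: 0.1 and 0.2 are not exact in binary
(0.1 + 0.2) - 0.3                   # about 5.6e-17 of rounding residue
.Machine$double.eps                 # about 2.22e-16: spacing of doubles near 1
(1 + .Machine$double.eps / 2) == 1  # TRUE: the increment is too small to register
1e308 * 10                          # Inf: overflow
1e-320 / 1e10                       # 0: underflow past the subnormal range
```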

1.2 Floating-Point Reality vs. Textbook Math

Floating-point numbers don’t behave like real numbers. This article explains how rounding, cancellation, and finite machine precision break AI systems, and why it matters.
2025-09-04
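
As a preview of the cancellation story, a short sketch in R: the two expressions below are identical in exact arithmetic, yet only the reformulated one survives for small x.

```r
# (1 - cos(x)) / x^2 tends to 1/2 as x -> 0, but for tiny x the
# computed cos(x) rounds to exactly 1 and the numerator cancels away.
x <- 1e-8
(1 - cos(x)) / x^2       # 0: catastrophic cancellation
2 * sin(x / 2)^2 / x^2   # 0.5: algebraically equivalent, numerically stable
```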