{R}R Development Notes

A total of 12 articles found.

8.1 Power Method and Inverse Iteration

A clear, practical, and intuitive explanation of the power method and inverse iteration for computing eigenvalues. Covers eigenvalue dominance, repeated matrix-vector multiplication, shifted inverse iteration, and real applications in ML, PCA, and large-scale systems, and introduces the Rayleigh quotient along the way.
2025-10-07

4.3 Pivoting Strategies

A practical and intuitive guide to pivoting strategies in numerical linear algebra, explaining partial, complete, and scaled pivoting and why pivoting is essential for stable Gaussian elimination and reliable LU decomposition.
2025-09-20

Chapter 3 — Computation & Mathematical Systems

A clear, insightful introduction to numerical computation—covering norms, error measurement, conditioning vs. stability, and the gap between mathematical algorithms and real implementations. Essential reading for anyone building AI, optimization, or scientific computing systems.
2025-09-12

2.3 Overflow, Underflow, Loss of Significance

A clear and practical guide to overflow, underflow, and loss of significance in floating-point arithmetic. Learn how numerical computations break, why these failures occur, and how they impact AI, optimization, and scientific computing.
2025-09-10

2.2 Machine Epsilon, Rounding, ULPs

A comprehensive, intuitive guide to machine epsilon, rounding behavior, and ULPs in floating-point arithmetic. Learn how precision limits shape numerical accuracy, how rounding errors arise, and why these concepts matter for AI, ML, and scientific computing.
2025-09-09

2.1 Floating-Point Numbers (IEEE 754)

A detailed, intuitive guide to floating-point numbers and the IEEE 754 standard. Learn how computers represent real numbers, why precision is limited, and how rounding, overflow, subnormals, and special values affect numerical algorithms in AI, ML, and scientific computing.
2025-09-08

Chapter 2 — The Computational Model

An introduction to the computational model behind numerical linear algebra. Explains why mathematical algorithms fail inside real computers, how floating-point arithmetic shapes computation, and why understanding precision, rounding, overflow, and memory layout is essential for AI, ML, and scientific computing.
2025-09-07

1.3 Computation & Mathematical Systems

A clear explanation of how mathematical systems behave differently inside real computers. Learn why stability, conditioning, precision limits, and computational constraints matter for AI, ML, and numerical software.
2025-09-05

1.2 Floating-Point Reality vs. Textbook Math

Floating-point numbers don’t behave like real numbers. This article explains how rounding, cancellation, and machine precision break AI systems—and why it matters.
2025-09-04

1.1 What Breaks Real AI Systems

Many AI failures come from numerical instability, not from the algorithms themselves. This guide explains what actually breaks AI systems and why numerical linear algebra matters.
2025-09-03

1.0 Why Numerical Linear Algebra Matters

A deep, practical introduction to why numerical linear algebra matters in real AI, ML, and optimization systems. Learn how stability, conditioning, and floating-point behavior impact models.
2025-09-02

Numerical Linear Algebra: Understanding Matrices and Vectors Through Computation

Learn how linear algebra actually works inside real computers. A practical guide to LU, QR, SVD, stability, conditioning, and the numerical foundations behind modern AI and machine learning.
2025-09-01