Numerical Analysis Study Guide
📖 Core Concepts
Numerical analysis – the study of algorithms that solve problems of continuous mathematics (involving real or complex numbers) by approximation, as opposed to exact symbolic manipulation.
Direct vs. iterative methods – direct methods give the exact answer (in exact arithmetic) in a finite number of steps (e.g., Gaussian elimination); iterative methods produce a sequence of approximations that converge to the exact solution only in the limit (e.g., Newton’s method).
Conditioning – measures how input perturbations affect the output; ill‑conditioned problems amplify small errors (e.g., \(f(x)=\frac{1}{x-1}\) near \(x=1\)).
Discretization – replaces a continuous problem by a finite‑dimensional analogue (e.g., sampling a function at grid points for ODEs).
Error types – round‑off (finite‑precision representation), truncation (approximating an infinite process by a finite one, e.g., stopping an iteration or series early), discretization (difference between discrete and continuous solutions).
Numerical stability – an algorithm is stable if errors introduced during computation do not grow dramatically.
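The ill‑conditioning of \(f(x)=\frac{1}{x-1}\) near \(x=1\) can be seen numerically. A minimal sketch (the specific values \(x=1.0001\) and perturbation \(10^{-6}\) are illustrative choices, not from the source):

```python
def f(x):
    """Ill-conditioned near x = 1: tiny input changes are hugely amplified."""
    return 1.0 / (x - 1.0)

x = 1.0001
dx = 1e-6                                # small input perturbation
rel_in = dx / x                          # relative change in the input
rel_out = abs(f(x + dx) - f(x)) / abs(f(x))  # relative change in the output
amplification = rel_out / rel_in
# Matches the relative condition number |x f'(x)/f(x)| = |x/(x-1)| ≈ 10^4 here
print(amplification)
```

A relative input error of about \(10^{-6}\) becomes a relative output error of about \(10^{-2}\): the problem itself, not the algorithm, amplifies the error by roughly four orders of magnitude.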
---
📌 Must Remember
Direct methods → exact in infinite precision; iterative methods → converge, usually needed for large sparse systems.
Ill‑conditioned ⇒ small input change → large output change; watch the condition number of matrices.
Round‑off error is inevitable; mitigate with stable algorithms and proper scaling.
Truncation error ∝ step size (e.g., Euler’s method error \(O(h)\)).
Discretization error disappears as mesh/grid is refined (convergence order).
Stable algorithm + well‑posed problem = reliable numerical result.
---
🔄 Key Processes
Gaussian elimination (direct)
Form augmented matrix → forward elimination → back substitution.
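The three steps above can be sketched as a small NumPy routine (partial pivoting is added for stability; see the edge cases section):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b: forward elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))      # pivot row: largest entry in column k
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):                # eliminate entries below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution, bottom row up
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For example, `gauss_solve(np.array([[2., 1.], [1., 3.]]), np.array([3., 5.]))` returns the same solution as `np.linalg.solve`.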
Jacobi iteration (iterative)
Decompose \(A = D + R\) (\(D\) = diagonal part, \(R\) = off‑diagonal remainder); update \(x^{(k+1)} = D^{-1}(b - Rx^{(k)})\).
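The update rule above translates directly into code. A minimal sketch (the tolerance and iteration cap are illustrative defaults):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: split A = D + R, iterate x <- D^{-1}(b - R x)."""
    D = np.diag(A)                   # diagonal entries of A as a vector
    R = A - np.diag(D)               # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # element-wise divide = multiply by D^{-1}
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Convergence is guaranteed for strictly diagonally dominant matrices, e.g. \(A=\begin{pmatrix}4&1\\2&5\end{pmatrix}\).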
Newton’s method (root‑finding)
\(x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}\); repeat until \(|f(x_n)|\) is below tolerance.
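The iteration can be sketched in a few lines; computing \(\sqrt{2}\) as the positive root of \(x^2-2\) is a standard illustration:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n), stop when |f(x_n)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Starting from \(x_0=1\), only a handful of iterations reach machine precision, illustrating quadratic convergence near the root.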
Euler method (ODE)
\(y_{n+1}=y_n + h\,f(t_n,y_n)\) with step size \(h\).
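A minimal sketch of the update rule, tested on \(y'=y,\ y(0)=1\), whose exact solution is \(e^t\):

```python
def euler(f, t0, y0, h, n_steps):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = y, y(0) = 1; exact value at t = 1 is e ≈ 2.71828
approx = euler(lambda t, y: y, 0.0, 1.0, h=0.001, n_steps=1000)
```

With \(h=10^{-3}\) the result is accurate to about \(10^{-3}\), consistent with the \(O(h)\) global error noted above.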
Simpson’s rule (numerical integration)
For interval \([a,b]\) split into an even number \(n\) of subintervals of width \(h=(b-a)/n\): \(\displaystyle \int_a^b f(x)\,dx \approx \frac{h}{3}\big[f_0+4\sum_{\text{odd }i}f_i+2\sum_{\text{even }i}f_i+f_n\big]\).
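A minimal sketch of the composite rule, including the even‑\(n\) check that the exam traps section warns about:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    total = f(a) + f(b)                     # endpoints f_0 and f_n
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # weight 4 on odd, 2 on even
    return h / 3 * total

# integral of sin(x) over [0, pi] is exactly 2
approx = simpson(math.sin, 0.0, math.pi, n=100)
```

With \(n=100\) the error is far below \(10^{-6}\), reflecting the rule’s \(O(h^4)\) accuracy for smooth integrands.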
Simplex method (LP)
Move from vertex to adjacent vertex that improves objective until no improvement possible.
---
🔍 Key Comparisons
Direct vs. Iterative
Direct: finite steps, exact (in infinite precision), high memory cost for large matrices.
Iterative: many steps, approximate until tolerance, low memory, preferred for huge sparse systems.
Round‑off vs. Truncation Error
Round‑off: arises from finite binary representation of numbers.
Truncation: arises from stopping an infinite process early (e.g., finite Taylor series).
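Both error types can be shown in a couple of lines. A minimal sketch (the five‑term Taylor series for \(e^x\) is an illustrative choice):

```python
import math

# Round-off: 0.1 and 0.2 have no exact binary representation
roundoff_demo = (0.1 + 0.2 == 0.3)         # False, despite being "obviously" true

# Truncation: approximate e = e^1 by a finite Taylor series
def exp_taylor(x, n_terms):
    """Sum the first n_terms of the Taylor series for e^x."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

trunc_err = abs(math.e - exp_taylor(1.0, 5))   # error from stopping after 5 terms
```

The round‑off error is tiny (about \(10^{-16}\)) but nonzero; the truncation error here is about \(10^{-2}\) and shrinks as more terms are kept.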
Interpolation vs. Extrapolation
Interpolation: estimate inside known data range → generally reliable.
Extrapolation: predict outside known range → risk of large error.
---
⚠️ Common Misunderstandings
“Iterative = inaccurate.” – With a proper convergence criterion, iterative methods can achieve machine‑precision accuracy.
“More grid points always improve a solution.” – If the algorithm is unstable, refining the mesh can amplify round‑off errors.
“Condition number only matters for linear systems.” – Conditioning affects any numerical problem, including root‑finding and integration.
---
🧠 Mental Models / Intuition
Error amplification – picture a shaky camera: a small shake (input perturbation) makes a blurry picture (output) if the lens (problem) is “ill‑conditioned”.
Iterative convergence – think of climbing a hill with diminishing steps; each iteration gets you closer, but you stop when the step size is “tiny enough”.
Discretization – like converting a smooth curve to a polyline: the finer the points, the closer the shape, but you still need a good algorithm to join the dots without wobbling.
---
🚩 Exceptions & Edge Cases
Newton’s method may diverge if the initial guess is far from the root or if \(f'(x)\) is zero or changes sign.
Gaussian elimination without partial/complete pivoting can fail on poorly scaled or nearly singular matrices (growth of round‑off error).
Simpson’s rule requires an even number of subintervals; using an odd number forces fallback to a lower‑order rule.
---
📍 When to Use Which
Large sparse linear system → choose an iterative method (Jacobi, Gauss–Seidel, CG).
Small dense system → direct method (Gaussian elimination, LU, QR).
Non‑linear root with known derivative → Newton’s method (fast quadratic convergence).
Root without derivative → bisection (guaranteed convergence) or secant method.
Smooth integrand, low dimension → Newton‑Cotes (midpoint, Simpson).
High‑dimensional integral → Monte‑Carlo or quasi‑Monte‑Carlo.
ODE with stiff dynamics → implicit methods (e.g., backward Euler) rather than explicit Euler.
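Why implicit methods win on stiff problems can be seen on the standard linear test equation \(y'=\lambda y\) with \(\lambda=-50\) (an illustrative choice, not from the source). For linear \(f\), the backward Euler step can be solved in closed form:

```python
# Stiff test problem y' = -50 y, y(0) = 1; exact solution e^{-50 t} decays to 0.
# With h = 0.1, explicit Euler multiplies by (1 + h*lam) = -4 each step: blow-up.
# Backward Euler multiplies by 1/(1 - h*lam) = 1/6 each step: stable decay.
lam, h, n = -50.0, 0.1, 20
y_explicit = y_implicit = 1.0
for _ in range(n):
    y_explicit = y_explicit * (1 + h * lam)    # explicit Euler update
    y_implicit = y_implicit / (1 - h * lam)    # backward Euler, solved exactly
print(abs(y_explicit))   # astronomically large: unstable
print(abs(y_implicit))   # near zero: tracks the true decay
```

The step size, not the accuracy requirement, forces explicit Euler to take tiny steps on stiff problems; the implicit method stays stable at any \(h>0\) here.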
---
👀 Patterns to Recognize
Matrix symmetry + positive‑definiteness → favor Cholesky decomposition (direct) or Conjugate Gradient (iterative).
Rapid error growth after a few steps → suspect instability or poor conditioning.
Residual plateaus in iterative solvers → may need preconditioning or a different method.
Oscillatory error in ODE integration → step size too large; reduce \(h\).
---
🗂️ Exam Traps
Choosing Simpson’s rule with an odd number of subintervals – answer will be marked wrong; remember the even‑\(n\) requirement.
Confusing ill‑conditioned with unstable – a well‑posed problem can still be ill‑conditioned; stability refers to the algorithm, not the problem.
Selecting direct method for a massive sparse matrix – memory/time blow‑up; the test likely expects an iterative approach.
Assuming Newton’s method always converges quadratically – only true near the root and when \(f'\) is non‑zero.
Mixing up interpolation and extrapolation – extrapolation answers are less reliable; exam may penalize over‑confidence.
---