35006 - Numerical Methods


Fundamentals

Roundoff error

Error due to finite machine precision

Truncation error

Error due to the algorithm's approximation, e.g. truncating an infinite series

Absolute error

Magnitude of the difference between the output and the true value

\(\epsilon = |\xi-x|\)

Machine epsilon

Level of precision to which a machine's floating-point arithmetic can distinguish two numbers

\(\epsilon_{M} = \min\{\epsilon > 0 : 1 + \epsilon \neq 1 \text{ in floating point}\} \)
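The definition above can be checked directly; a minimal Python sketch (the halving loop is one common way to find \(\epsilon_M\), assuming IEEE 754 doubles):

```python
def machine_epsilon():
    """Halve eps until 1 + eps/2 rounds to exactly 1; the last
    distinguishable eps is the machine epsilon."""
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    return eps

print(machine_epsilon())  # 2**-52 for IEEE 754 double precision
```

For doubles this returns \(2^{-52} \approx 2.22 \times 10^{-16}\), which matches Python's `sys.float_info.epsilon`.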

Bracketing methods

Class of methods that provide an upper and lower bound as an output

Differentiation

Forward difference

\( \Delta_{h}[f](x) = f(x+h)-f(x) \)

Backward difference

\( \nabla_{h}[f](x) = f(x)-f(x-h) \)

Central difference

\( \delta_{h}[f](x) = f(x+h/2)-f(x-h/2) \)

Numerical differentiation

\( f'(x) \approx \frac{\Delta_{h}[f](x)}{h} \)

\( f'(x) \approx \frac{\nabla_{h}[f](x)}{h} \)

\( f'(x) \approx \frac{\delta_{h}[f](x)}{h} \)
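The three difference quotients above can be compared on a known derivative; a minimal Python sketch (the test function \(\sin\) and step size are illustrative):

```python
import math

f = math.sin
x = 1.0
h = 1e-5

forward  = (f(x + h) - f(x)) / h              # Delta_h[f](x) / h, O(h) error
backward = (f(x) - f(x - h)) / h              # nabla_h[f](x) / h, O(h) error
central  = (f(x + h/2) - f(x - h/2)) / h      # delta_h[f](x) / h, O(h^2) error

print(forward, backward, central, math.cos(x))
```

The central difference is noticeably closer to the true value \(\cos(1)\), reflecting its second-order truncation error.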

Zero finding and optimization

Newton-Raphson method

Zero-finding method that updates an initial guess by evaluating the zero of the linear (tangent-line) approximation of the function at the current guess: \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\)
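A minimal Python sketch of this update (the function names, tolerance, and example \(f(x) = x^2 - 2\) are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly jump to the zero of the tangent line,
    x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Find sqrt(2) as the positive zero of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ≈ 1.41421356...
```

Convergence is quadratic near a simple root, so only a handful of iterations are needed here.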

Secant method

Zero-finding method that updates a pair of guesses by evaluating the zero of the secant line between them: \(x_{n+1} = x_n - f(x_n)\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\)

Zero bracketing method

Bisection method

Bracketed algorithm reminiscent of a binary search
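A minimal Python sketch of the bisection idea (the helper name, tolerance, and example \(f(x) = x^2 - 2\) are illustrative):

```python
def bisect(f, a, b, tol=1e-12):
    """Halve the bracket [a, b], where f(a) and f(b) have opposite signs,
    keeping the half that still brackets the zero -- a binary search on sign(f)."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                # zero lies in [a, m]
        else:
            a, fa = m, f(m)      # zero lies in [m, b]
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)  # ≈ 1.41421356...
```

Each iteration halves the bracket, so the error bound shrinks by a factor of 2 per step regardless of how badly behaved \(f\) is.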

False position method

Bracketed variant of secant method

Dekker's method

Method that simultaneously employs the bisection method and the secant method, taking the fast secant step when it behaves well and falling back to bisection to guarantee convergence

Brent's method

Variant of Dekker's method that uses inverse quadratic interpolation rather than a secant interpolation

Minima bracketing method

First derivative test

Golden section search

Jarratt's method

Brent's minimization method

Simple coordinate search

Powell's method

Downhill simplex method

Interpolation

Interpolation

Estimating new datapoints within the range of observation of existing datapoints. A datapoint set is represented as \(N =\{(x_0,y_0),(x_1,y_1),\ldots,(x_n,y_n)\} \)

Vandermonde polynomial interpolation

Given a datapoint set \(N\), there exists a unique polynomial \(P\) with \(\deg P \leq |N|-1\) through these points. Therefore, one can set up a linear system to solve for the coefficients of this polynomial as such

\(P(x) = \sum^{n}_{k=0} c_k x^k\)

\( \mathbf{V} = \begin{pmatrix} 1 & x_0 & x_{0}^2 & \ldots & x_{0}^n \\ 1 & x_1 & x_{1}^2 & \ldots & x_{1}^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_{n}^2 & \ldots & x_{n}^n \end{pmatrix}\)

\( \mathbf{c} = \mathbf{V}^{-1}\mathbf{y}\)
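A minimal NumPy sketch of this linear system (the sample points, drawn from \(1 + x + x^2\), are illustrative; NumPy itself is an assumption, as the notes name no language):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])      # samples of p(t) = 1 + t + t^2

V = np.vander(x, increasing=True)  # columns 1, x, x^2
c = np.linalg.solve(V, y)          # coefficients c_0, c_1, c_2
print(c)                           # ≈ [1. 1. 1.]
```

Note that `np.linalg.solve` is used rather than forming \(\mathbf{V}^{-1}\) explicitly; Vandermonde matrices become badly conditioned as \(n\) grows, so solving the system directly is the safer choice in practice.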

Lagrange polynomial interpolation

Given a datapoint set \(N\), polynomial CRT implies the existence of the following

\( P(x) = \sum^{n}_{k=0} \ell_k (x) y_k\)

Subject to \( (x_k,y_k),(x_j,y_j) \in N,\ j \neq k \implies \ell_k (x_k) = 1,\ \ell_{k}(x_j)=0\)

One can calculate the \(\ell_k\) as \( \ell_k (x) = \frac{\prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x-x_j)}{ \prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x_k-x_j) } \)
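A minimal Python sketch of this basis construction (the helper name and the sample points, drawn from \(1 + x + x^2\), are illustrative):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    n = len(xs)
    total = 0.0
    for k in range(n):
        # Basis polynomial l_k: equals 1 at xs[k] and 0 at every other node.
        lk = 1.0
        for j in range(n):
            if j != k:
                lk *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * lk
    return total

# Points of p(t) = 1 + t + t^2, evaluated at t = 3: p(3) = 1 + 3 + 9 = 13.
print(lagrange_interp([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 3.0))  # 13.0
```

Unlike the Vandermonde approach, no linear system is solved; the interpolant is assembled directly from the basis polynomials \(\ell_k\).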

Spline interpolation

Integration

Riemann sum

Trapezoidal rule

Simpson's rule

Refined trapezoidal rule

Richardson extrapolation

Romberg integration

Nested integration

Monte Carlo integration

Linear Algebra

Power method

Method that converges to the eigenvector of the dominant (largest-magnitude) eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the dominant eigenvalue has multiplicity 1
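A minimal NumPy sketch (the iteration count and the \(2 \times 2\) test matrix, with eigenvalues 3 and 1, are illustrative):

```python
import numpy as np

def power_method(A, v0, iters=200):
    """Repeatedly apply A and renormalize; converges to the dominant
    eigenvector when v0 has a component along it."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v          # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 3 (eigenvector [1,1]) and 1
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam)                   # ≈ 3.0
```

The convergence rate is governed by the eigenvalue ratio \(|\lambda_2 / \lambda_1|\) per iteration, here \(1/3\).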

Inverse power method

Method that converges to the eigenvector of the smallest-magnitude eigenvalue by applying the power method to \(\mathbf{A}^{-1}\), given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the smallest eigenvalue has multiplicity 1

QR iteration

Method that uses repeated QR decompositions to converge to a triangular matrix similar to \(\mathbf{A}\) (diagonal when \(\mathbf{A}\) is symmetric), revealing its eigenvalues

Sparse matrix

Arnoldi iteration

Differential equations

Euler's method

Method to solve \(y'=f(x,y(x)),\ y(x_0)=y_0 \) by incrementing from \(x_0\) using the first-order Taylor expansion

Start at \(x=x_0 ,y=y_0\)

\(y(x+h) \approx y(x) + hf(x,y) \)

Repeat the update with \(x \to x+h\) until the desired domain element is reached
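The steps above can be sketched in Python (the step size, step count, and the test problem \(y' = y,\ y(0)=1\) are illustrative):

```python
def euler(f, x0, y0, h, n):
    """Take n forward-Euler steps for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # first-order Taylor step: y(x+h) ≈ y(x) + h f(x, y)
        x += h
    return y

# y' = y with y(0) = 1 has exact solution e^x, so y(1) = e ≈ 2.71828.
approx = euler(lambda x, y: y, 0.0, 1.0, 1e-4, 10_000)
print(approx)
```

With \(h = 10^{-4}\) the answer is accurate to about four digits; the global error of Euler's method shrinks only linearly in \(h\), and for this growing solution it consistently undershoots.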

Midpoint method

Variant of Euler's method that evolves by taking the tangent from the midpoint between domain increments

Start at \(x=x_0 ,y=y_0\)

\(k_1 = f(x,y)\)

\(k_2 = f(x+h/2 , y+hk_1 / 2) \)

\(y(x+h) \approx y(x) + hk_2 \)

Repeat the updates with \(x \to x+h\) until the desired domain element is reached

Runge-Kutta method

Variant of Euler's method that considers the weighted sum of tangents taken from various places with respect to the increment
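The classical fourth-order variant (RK4) is the standard example of such a weighted sum; a minimal Python sketch (the step size and the test problem \(y' = y,\ y(0)=1\) are illustrative):

```python
def rk4_step(f, x, y, h):
    """One classical Runge-Kutta (RK4) step: advance y' = f(x, y) by h
    using a weighted average of four slope samples."""
    k1 = f(x, y)                         # slope at the left endpoint
    k2 = f(x + h / 2, y + h * k1 / 2)    # slope at the midpoint, using k1
    k3 = f(x + h / 2, y + h * k2 / 2)    # slope at the midpoint, using k2
    k4 = f(x + h, y + h * k3)            # slope at the right endpoint
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y, y(0) = 1: ten steps of h = 0.1 already approximate y(1) = e well.
y = 1.0
for i in range(10):
    y = rk4_step(lambda x, y: y, i * 0.1, y, 0.1)
print(y)
```

With only ten steps the result matches \(e\) to about six digits, versus roughly four digits for Euler's method with ten thousand steps, because RK4's global error shrinks as \(O(h^4)\).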

Linear systems of ODEs

Shooting method

Fourier transform