35006 - Numerical Methods


Fundamentals

Error

Roundoff error

Error due to finite machine precision

Truncation error

Error due to the algorithm's approximation, e.g. truncating an infinite series after finitely many terms

Absolute error

Absolute difference between the approximation \(\xi\) and the true value \(x\)

\(\epsilon = |\xi-x|\)

Relative error

Absolute error relative to the magnitude of the true value

\(\epsilon = \left|\frac{\xi-x}{x}\right|\)

Machine epsilon

Level of precision to which a machine's floating-point arithmetic can distinguish two numbers; commonly the smallest \(\epsilon_{M} > 0\) such that \(1 + \epsilon_{M}\) is representably greater than \(1\)

\(\epsilon_{M} \)

Bracketing methods

Class of methods whose output is an interval (a lower and upper bound) guaranteed to contain the solution

Differentiation

Forward difference

\( \Delta_{h}[f](x) = f(x+h)-f(x) \)

Backward difference

\( \nabla_{h}[f](x) = f(x)-f(x-h) \)

Central difference

\( \delta_{h}[f](x) = f(x+h/2)-f(x-h/2) \)

Numerical differentiation

\( f'(x) \approx \frac{\Delta_{h}[f](x)}{h} \)

\( f'(x) \approx \frac{\nabla_{h}[f](x)}{h} \)

\( f'(x) \approx \frac{\delta_{h}[f](x)}{h} \)
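
A minimal Python sketch of the three difference quotients (the function names and the default step \(h\) are illustrative; too small an \(h\) amplifies roundoff error):

    def forward_diff(f, x, h=1e-6):
        # f'(x) ~ (f(x+h) - f(x)) / h
        return (f(x + h) - f(x)) / h

    def backward_diff(f, x, h=1e-6):
        # f'(x) ~ (f(x) - f(x-h)) / h
        return (f(x) - f(x - h)) / h

    def central_diff(f, x, h=1e-6):
        # f'(x) ~ (f(x+h/2) - f(x-h/2)) / h
        return (f(x + h/2) - f(x - h/2)) / h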

Zero finding and optimization

Newton-Raphson method

Zero-finding method that updates an initial guess by moving to the zero of the function's tangent line (linear approximation) at the current guess.
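
A minimal sketch, assuming the derivative df is available in closed form (the tolerance and iteration cap are illustrative):

    def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
        # Move to the zero of the tangent line at the current guess
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x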

Secant method

Zero-finding method that updates a pair of guesses by moving to the zero of the secant line through the function values at those guesses.
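
A minimal sketch along the same lines; unlike Newton-Raphson, no derivative is needed:

    def secant(f, x0, x1, tol=1e-10, max_iter=100):
        # Replace the tangent slope with the slope of the secant line
        for _ in range(max_iter):
            fx0, fx1 = f(x0), f(x1)
            x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
            x0, x1 = x1, x2
            if abs(x1 - x0) < tol:
                break
        return x1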

Zero bracketing method

Bisection method

Bracketing method that halves the interval containing the zero at each iteration, reminiscent of binary search
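
A minimal sketch, assuming \(f(a)\) and \(f(b)\) have opposite signs:

    def bisection(f, a, b, tol=1e-10):
        # Keep the half-interval on which f changes sign
        if f(a) * f(b) > 0:
            raise ValueError("zero is not bracketed by [a, b]")
        while b - a > tol:
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2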

False position method

Bracketing variant of the secant method

Dekker's method

Method that combines the bisection and secant methods, taking the secant step when it makes sufficient progress and falling back to bisection to guarantee convergence

Brent's method

Variant of Dekker's method that uses inverse quadratic interpolation rather than secant (linear) interpolation

Minima bracketing method

First derivative test

Golden section search

Method that splits the bracketing interval into two equal, overlapping sections; the ratio of section length to interval length is held constant across iterations, which forces it to be \(1/\varphi \approx 0.618\). Such a section is dubbed a 'golden section' due to its relation to the golden ratio \(\varphi\).
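
A minimal sketch for a unimodal \(f\) on \([a,b]\) (variable names are illustrative; a more careful version would reuse the one function value that carries over between iterations):

    import math

    def golden_section(f, a, b, tol=1e-8):
        invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):                      # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi * (b - a)
            else:                                # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi * (b - a)
        return (a + b) / 2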

Jarratt's method

Brent's minimization method

Simple coordinate search

Powell's method

Downhill simplex method (Nelder-Mead method)

Interpolation

Interpolation

Estimating new datapoints within the range of observation of existing datapoints. A datapoint set is represented as \(N =\{(x_0,y_0),(x_1,y_1),\ldots,(x_n,y_n)\} \)

Vandermonde polynomial interpolation

Given a datapoint set \(N\), there exists a unique polynomial \(P\) with \(\deg P \le |N|-1 \) through these points. Therefore, one can set up a linear system to solve for the coefficients of this polynomial as follows

\(P(x) = \sum^{n}_{k=0} c_k x^k\)

\( \mathbf{V} = \begin{pmatrix} 1 & x_0 & x_{0}^2 & \ldots & x_{0}^n \\ 1 & x_1 & x_{1}^2 & \ldots & x_{1}^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_{n}^2 & \ldots & x_{n}^n \end{pmatrix}\)

\( \mathbf{c} = \mathbf{V}^{-1}\mathbf{y}\)
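
A minimal NumPy sketch (solving the system is preferred over forming \(\mathbf{V}^{-1}\) explicitly; the sample points are illustrative):

    import numpy as np

    def vandermonde_coeffs(xs, ys):
        V = np.vander(xs, increasing=True)   # columns 1, x, x^2, ..., x^n
        return np.linalg.solve(V, ys)        # c such that V c = y

    # Example: (0,1), (1,3), (2,7) gives P(x) = 1 + x + x^2
    c = vandermonde_coeffs(np.array([0., 1., 2.]), np.array([1., 3., 7.]))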

Lagrange polynomial interpolation

Given a datapoint set \(N\), the Chinese remainder theorem for polynomials implies the existence of the following

\( P(x) = \sum^{n}_{k=0} \ell_k (x) y_k\)

Subject to \( \ell_k (x_k) = 1 \) and \( \ell_{k}(x_j)=0 \) for \( j \neq k \)

One can calculate the \(\ell_k\) as \( \ell_k (x) = \frac{\prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x-x_j)}{ \prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x_k-x_j) } \)
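
A minimal sketch evaluating \(P(x)\) directly from the basis formula:

    def lagrange_eval(xs, ys, x):
        # P(x) = sum_k l_k(x) y_k, with l_k built as the product above
        total = 0.0
        for k in range(len(xs)):
            lk = 1.0
            for j in range(len(xs)):
                if j != k:
                    lk *= (x - xs[j]) / (xs[k] - xs[j])
            total += lk * ys[k]
        return total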

Cubic spline interpolation

Integration

Riemann sum

Method that approximates an integral by creating an interval partition and forming rectangles whose heights are the function values at a chosen sample point of each subinterval (the midpoint, for the midpoint rule)

Trapezoidal rule

Method that approximates an integral by creating an interval partition and forming trapezoids whose parallel sides match the function values at the endpoints of each subinterval
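
A minimal sketch of the composite rule on a uniform partition (the default n is illustrative):

    def trapezoid(f, a, b, n=100):
        # h * [f(a)/2 + f(a+h) + ... + f(b-h) + f(b)/2]
        h = (b - a) / n
        total = (f(a) + f(b)) / 2
        for i in range(1, n):
            total += f(a + i * h)
        return total * h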

Simpson's rule

Method that approximates an integral by creating an interval partition, interpolating a quadratic through each subinterval's endpoints and midpoint (Lagrange), and integrating each quadratic analytically
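
A minimal sketch of the composite rule; each pair of subintervals carries one interpolated quadratic, so n must be even:

    def simpson(f, a, b, n=100):
        # (h/3) * [f0 + 4 f1 + 2 f2 + ... + 4 f_{n-1} + fn]
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += (4 if i % 2 else 2) * f(a + i * h)
        return total * h / 3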

Refined trapezoidal rule

Gaussian quadrature

\( \int^{b}_{a} f(x)dx \approx \frac{b-a}{2} \sum^{n}_{i=0} w_i f(\frac{b-a}{2}u_i + \frac{a+b}{2}) \)

where the nodes \(u_i\) and weights \(w_i\) on \([-1,1]\) are chosen (e.g. as Legendre roots and their weights) so that polynomials up to a given degree are integrated exactly
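
A minimal sketch using NumPy's Gauss-Legendre nodes and weights (assumes \(f\) accepts array input; the default n is illustrative):

    import numpy as np

    def gauss_legendre(f, a, b, n=5):
        u, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        return (b - a) / 2 * np.sum(w * f((b - a) / 2 * u + (a + b) / 2))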

Nested integration

As a consequence of Fubini's theorem, one can represent multidimensional integrals as nested one-dimensional integrals.

Monte Carlo integration

\( \int^{b}_{a} f(x)dx \approx \frac{b-a}{n}\sum^{n}_{i=1} f(x_i)\)

\( \iint_{D} f(x,y)d(x,y) \approx \frac{\lambda(D)}{n}\sum^{n}_{i=1} f(x_i, y_i)\)
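
A minimal sketch of the one-dimensional estimator (the sample count is illustrative; the error decreases like \(1/\sqrt{n}\)):

    import random

    def monte_carlo(f, a, b, n=100_000):
        # Mean of f at uniform samples, scaled by the interval length
        total = sum(f(random.uniform(a, b)) for _ in range(n))
        return (b - a) * total / n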

Linear Algebra

Power method

Method that converges to the eigenvector of the largest-magnitude (dominant) eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the dominant eigenvalue has multiplicity 1
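
A minimal NumPy sketch (a fixed iteration count stands in for a proper convergence test):

    import numpy as np

    def power_method(A, v0, iters=1000):
        v = v0 / np.linalg.norm(v0)
        for _ in range(iters):
            v = A @ v                 # amplify the dominant eigencomponent
            v /= np.linalg.norm(v)    # renormalize to avoid overflow
        return v @ A @ v, v           # Rayleigh quotient, eigenvector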

Inverse power method

Method that applies the power method to \(\mathbf{A}^{-1}\), converging to the eigenvector of the smallest-magnitude eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the smallest eigenvalue has multiplicity 1

QR iteration

Method that repeatedly applies the QR decomposition (\(\mathbf{A}_{k+1} = \mathbf{R}_k\mathbf{Q}_k\)) to converge toward an upper-triangular matrix, similar to \(\mathbf{A}\), whose diagonal holds the eigenvalues; for symmetric \(\mathbf{A}\) this yields the diagonalization
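
A minimal sketch of the unshifted iteration (practical versions add shifts and deflation):

    import numpy as np

    def qr_iteration(A, iters=500):
        Ak = A.copy()
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q                # similar to Ak, tends toward triangular
        return np.diag(Ak)            # approximate eigenvalues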

Sparse matrix

Matrix in which a 'large' proportion of entries are zero. Such matrices may have a pattern to where nonzero elements appear, and hence may be stored in a more memory-efficient data structure than a two-dimensional array. Matrices that are not sparse are called dense

Arnoldi iteration

Differential equations

Euler's method

Method to approximate IVPs \(y'=f(x,y(x)),\ y(x_0)=y_0 \) by stepping from \(x_0\) using the first-order Taylor expansion
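
A minimal sketch (step size h and step count are illustrative):

    def euler(f, x0, y0, h, steps):
        # y_{k+1} = y_k + h * f(x_k, y_k)
        x, y = x0, y0
        path = [(x, y)]
        for _ in range(steps):
            y += h * f(x, y)
            x += h
            path.append((x, y))
        return path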

Midpoint method

Variant of Euler's method that steps using the slope evaluated at the midpoint of each increment

Runge-Kutta method

Variant of Euler's method that steps using a weighted average of slopes sampled at several points within each increment
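
A minimal sketch of one step of the classical fourth-order scheme (RK4), one common choice of weights:

    def rk4_step(f, x, y, h):
        # Weighted average of four slope samples across the increment
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)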

Linear systems of ODEs

Shooting method

Method to approximate BVPs by recasting them as IVPs: guess the unknown initial condition, integrate forward, and adjust the guess (via a zero-finding method) until the boundary condition at the far end is satisfied

Fourier transform