Absolute difference of approximation from true value
\(\epsilon = |\xi-x|\)
\(\xi\) is the true value
\(x\) is the numerically approximated value
Relative error
Absolute error relative to the magnitude of the true value
\(\epsilon = \left|\frac{\xi-x}{\xi}\right|\)
\(\xi\) is the true value
\(x\) is the numerically approximated value
Machine epsilon
Level of precision to which a machine's floating-point arithmetic can distinguish two numbers; the smallest \(\epsilon_{M}\) such that \(1+\epsilon_{M}\) is representable as distinct from \(1\)
\(\epsilon_{M} \)
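A minimal sketch of estimating double-precision machine epsilon by repeated halving (Python, for illustration only):
```python
# Estimate double-precision machine epsilon by halving until 1 + eps/2 rounds to 1
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)  # about 2.22e-16 for IEEE 754 double precision
```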
Bracketing methods
Class of methods that maintain an upper and lower bound on the solution at every iteration
Differentiation
Forward difference
\( \Delta_{h}[f](x) = f(x+h)-f(x) \)
Backward difference
\( \nabla_{h}[f](x) = f(x)-f(x-h) \)
Central difference
\( \delta_{h}[f](x) = f(x+h/2)-f(x-h/2) \)
Numerical differentiation
\( f'(x) \approx \frac{\Delta_{h}[f](x)}{h} \)
\( f'(x) \approx \frac{\nabla_{h}[f](x)}{h} \)
\( f'(x) \approx \frac{\delta_{h}[f](x)}{h} \)
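A minimal sketch of the three difference quotients (function names and the step size are illustrative):
```python
def forward_diff(f, x, h=1e-6):
    # Forward difference quotient: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h=1e-6):
    # Backward difference quotient: (f(x) - f(x-h)) / h
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h=1e-6):
    # Central difference quotient: (f(x+h/2) - f(x-h/2)) / h
    return (f(x + h / 2) - f(x - h / 2)) / h

# Example: derivative of x^2 at x = 3 (exact value 6)
print(forward_diff(lambda x: x * x, 3.0), central_diff(lambda x: x * x, 3.0))
```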
Zero finding and optimization
Newton-Raphson method
Zero-finding method that updates an initial guess by evaluating the zero of the linear approximation of the function at the initial guess.
Guess the zero to be at \(x_0\)
Update the guess according to \( x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_n)}\)
Repeat step 2 to desired accuracy
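A minimal Newton-Raphson sketch, assuming the derivative is available in closed form (names and tolerances are illustrative):
```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration; f is the function, df its derivative."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # zero of the local linear approximation
        x -= step
        if abs(step) < tol:   # stop once updates are below tolerance
            return x
    return x

# Example: square root of 2 as the positive zero of f(x) = x^2 - 2
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```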
Secant method
Zero-finding method that updates a pair of initial guesses by evaluating the zero of the secant line between them.
Guess the zero to be bound by \(x_0, x_1\)
Update the bound according to \( x_{n+1}=\frac{x_{n-1}f(x_n) - x_{n}f(x_{n-1})}{f(x_n) -f(x_{n-1})}\)
Repeat step 2 to desired accuracy
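A minimal secant-method sketch (names and tolerances are illustrative):
```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration starting from two initial guesses x0, x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        # Zero of the secant line through (x0, f0) and (x1, f1)
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            return x1
    return x1

print(secant(lambda x: x * x - 2, 1.0, 2.0))
```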
Zero bracketing method
Split the initial bracket into \(n\) subintervals
If a subinterval's endpoints map to two numbers of opposite signs, that subinterval contains a zero and becomes the updated bracket
Bisection method
Bracketed algorithm reminiscent of a binary search
Form a bracket around the zero
Consider the two bisections of this bracket; the bisection containing the zero (endpoints map to numbers of opposite signs) is the updated bracket.
Repeat step 2 to desired accuracy
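A minimal bisection sketch, assuming the initial interval brackets a sign change (names and tolerance are illustrative):
```python
def bisect(f, a, b, tol=1e-12):
    """Bisection on a bracket [a, b] with f(a) and f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial interval must bracket a zero"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # zero lies in the left half
            b, fb = m, fm
        else:              # zero lies in the right half
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))
```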
False position method
Bracketed variant of secant method
Guess the zero to be bound by \( \{ x_0, x_1 \}\)
Update the bound according to \( x_{n+1}=\frac{x_{n-1}f(x_n) - x_{n}f(x_{n-1})}{f(x_n) -f(x_{n-1})}\)
Of the pairs \( \{ x_{n-1} ,x_{n+1} \} \) and \( \{ x_{n} ,x_{n+1} \} \), keep the one that brackets the zero
Repeat step 2 on this pair to desired accuracy
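A minimal false-position sketch, assuming the initial interval brackets a sign change (names and tolerances are illustrative):
```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: secant update, but the bracket is always preserved."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial interval must bracket a zero"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant step inside [a, b]
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:   # keep the half that still brackets the zero
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(false_position(lambda x: x * x - 2, 0.0, 2.0))
```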
Dekker's method
Method that simultaneously employs the bisection method and the secant method, taking the secant step when it stays within the bracket and makes sufficient progress, and the bisection step otherwise
Brent's method
Variant of Dekker's method that uses inverse quadratic interpolation rather than a secant interpolation
Interpolate an inverse quadratic with 3 points of the function
Check if zero of interpolation is in bracket
If so, take the zero and the two closest previous points as the new set of 3 points and repeat from step 1 to desired accuracy
If not, bisect the set of 3 points and repeat from step 1
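Brent's method is usually taken from a library rather than hand-coded; a minimal usage sketch with SciPy's brentq (the example function and bracket are illustrative):
```python
from scipy.optimize import brentq

# Find the zero of x^3 - 2x - 5 bracketed by [2, 3]
root = brentq(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
print(root)  # about 2.0945515
```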
Minima bracketing method
Split interval into \(n\) subintervals \([x_k , x_{k+1}]\)
If \(f(x_k) \gt f(x_{k+1}) \land f(x_{k+1}) \lt f(x_{k+2})\), then a local minimum is bracketed by \([x_k,x_{k+2}]\)
Repeat from step 1 with new bracket to desired accuracy
First derivative test
Perform a root-finding algorithm on the first derivative
Golden section search
Method that creates two equally sized, overlapping sections of an interval, chosen so that the part unique to each section keeps the same ratio to the interval from one iteration to the next. Such a section is dubbed a 'golden section' due to its relation to the golden ratio \(\varphi\).
For interval \(I=[a,b]\), create subintervals \(I_1=[a,a+\frac{b-a}{\varphi}],I_2=[b-\frac{b-a}{\varphi},b]\)
If \(f(a+\frac{b-a}{\varphi}) \lt f(b-\frac{b-a}{\varphi}) \), choose \(I_2\), else choose \(I_1\)
Repeat from step 1 with the chosen subinterval to desired accuracy
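A minimal golden-section sketch for a unimodal function (it re-evaluates both probe points each pass for clarity; a tuned version reuses one of them):
```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    phi = (1 + math.sqrt(5)) / 2
    while b - a > tol:
        c = b - (b - a) / phi   # lower probe point
        d = a + (b - a) / phi   # upper probe point
        if f(c) < f(d):
            b = d               # keep I_1 = [a, d]
        else:
            a = c               # keep I_2 = [c, b]
    return (a + b) / 2

print(golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0))  # about 1.5
```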
Jarratt's method
Brent's minimization method
Simple coordinate search
Let \(\mathbf{x}_0\) be the initial vector
For each basis element \(\mathbf{e}_i\)
\( s =\operatorname{arg\,min}_{t \in \mathbb{R}} f( \mathbf{x}_n+ t\mathbf{e}_i )\)
\(\mathbf{x}_{n+1}= \mathbf{x}_n + s \mathbf{e}_i \)
Repeat the method with the initial vector replaced by the new vector until convergence is reached
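A minimal coordinate-search sketch, using SciPy's minimize_scalar for the 1-D line minimizations (the test function and sweep count are illustrative):
```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_search(f, x0, n_sweeps=20):
    """Minimize f by repeated line searches along the coordinate axes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = 1.0
            # 1-D minimization along the i-th basis direction
            s = minimize_scalar(lambda t: f(x + t * e)).x
            x = x + s * e
    return x

# Example: quadratic with minimum at (1, -2)
print(coordinate_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))
```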
Powell's method
Let \(\mathbf{x}_0\) be the initial vector
Initialize the directions \(\mathbf{p}_i\) to the standard basis vectors; for each direction \(\mathbf{p}_i\)
\( s =\operatorname{arg\,min}_{t \in \mathbb{R}} f( \mathbf{x}_n+ t\mathbf{p}_i )\)
\(\mathbf{x}_{n+1}= \mathbf{x}_n + s \mathbf{p}_i \)
Using the new vector \(\mathbf{x}_{N}\), minimize along the net displacement direction
\( z =\operatorname{arg\,min}_{t \in \mathbb{R}} f( \mathbf{x}_0+ t(\mathbf{x}_{N}-\mathbf{x}_0) )\)
\(\mathbf{y}= \mathbf{x}_0 + z (\mathbf{x}_N-\mathbf{x}_0) \)
Repeat the method with the initial vector replaced with \(\mathbf{y}\) and the \(\mathbf{p}_i\) along which \(f\) decreased the most replaced with \(\mathbf{x}_N-\mathbf{x}_0\), until convergence is reached (a usage sketch follows)
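Hand-coding the direction updates is lengthy; a minimal usage sketch with SciPy's Powell minimizer (the test function and starting point are illustrative):
```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock-style test function; Powell's method needs no derivatives
f = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
res = minimize(f, x0=np.array([-1.0, 2.0]), method="Powell")
print(res.x)  # approximately [1, 1]
```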
Downhill simplex method (Nelder-Mead method)
Interpolation
Interpolation
Estimating new datapoints within the observed range of existing datapoints. A datapoint set is represented as \(N =\{(x_0,y_0),(x_1,y_1),\ldots,(x_n,y_n)\} \)
Vandermonde polynomial interpolation
Given a datapoint set \(N\), there exists a unique polynomial \(P\) of degree at most \(|N|-1\) through these points. Therefore, one can set up a linear system \(\mathbf{V}\mathbf{c}=\mathbf{y}\) for the coefficients of this polynomial, where \(\mathbf{V}\) is the Vandermonde matrix with entries \(V_{ij}=x_i^{\,j}\)
\(\mathbf{c}\) is the vector of the Vandermonde polynomial coefficients
\(\mathbf{y}\) is the vector of the codomain mappings at the nodes
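A minimal Vandermonde-interpolation sketch (the datapoints are illustrative):
```python
import numpy as np

# Interpolate through (0, 1), (1, 3), (2, 2) with a degree-2 polynomial
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 2.0])

V = np.vander(x, increasing=True)   # Vandermonde matrix, V[i, j] = x_i ** j
c = np.linalg.solve(V, y)           # coefficients of 1, x, x^2

print(np.polyval(c[::-1], 1.0))     # evaluates to y_1 = 3 at the node x = 1
```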
Lagrange polynomial interpolation
Given a datapoint set \(N\), polynomial CRT implies the existence of the following
\( P(x) = \sum^{n}_{k=0} \ell_k (x) y_k\)
Subject to \( \ell_k (x_k) = 1\) and \( \ell_{k}(x_j)=0\) for every other node \(x_j\) with \((x_j,y_j) \in N,\ j \neq k\)
One can calculate the \(\ell_k\) as \( \ell_k (x) = \frac{\prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x-x_j)}{ \prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x_k-x_j) } \)
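A minimal Lagrange-interpolation sketch that evaluates \(P(x)\) directly from the basis formula (datapoints are illustrative):
```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for k in range(n):
        # Basis polynomial ell_k: 1 at xs[k], 0 at every other node
        ell = 1.0
        for j in range(n):
            if j != k:
                ell *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * ell
    return total

print(lagrange_interp([0, 1, 2], [1, 3, 2], 1.0))  # 3.0 at the node x = 1
```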
Cubic spline interpolation
Consider the segments between consecutive discrete points
Linearly interpolate each segment (Lagrange interpolation on its two endpoints)
Add 2 cubic functions to each segment such that each one
equals 0 at both endpoints
has a second derivative equal to 0 at one of the endpoints
Equate the first derivatives of the two segments meeting at each interior point
Solve the resulting linear system
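Cubic splines are usually taken from a library; a minimal usage sketch with SciPy's CubicSpline (datapoints are illustrative; its default 'not-a-knot' boundary conditions differ slightly from the construction above):
```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 4.0])

spline = CubicSpline(x, y)   # piecewise-cubic interpolant through the nodes
print(spline(1.5))           # interpolated value between the nodes at x = 1.5
```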
Integration
Riemann sum
Method that approximates an integral by creating an interval partition and forming rectangles whose heights are the function values at each subinterval's midpoint
Trapezoidal rule
Method that approximates an integral by creating an interval partition and forming trapezoids whose parallel sides match the function values at the endpoints of each subinterval
Simpson's rule
Method that approximates an integral by creating an interval partition, interpolating a quadratic through each subinterval's endpoints and midpoint (Lagrange) and integrating each subinterval analytically
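Minimal composite trapezoidal and Simpson sketches (the subinterval count is illustrative):
```python
import numpy as np

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule on n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

print(trapezoid(np.sin, 0, np.pi), simpson(np.sin, 0, np.pi))  # both close to 2
```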
Power iteration
Method that converges to the eigenvector of the largest-magnitude eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the largest eigenvalue has multiplicity 1
Inverse power iteration
Method that converges to the eigenvector of the smallest-magnitude eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the smallest eigenvalue has multiplicity 1
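A minimal power-iteration sketch (the matrix and iteration count are illustrative); the inverse variant would apply the same loop with \(\mathbf{A}^{-1}\), typically by solving a linear system each step rather than inverting:
```python
import numpy as np

def power_iteration(A, v0, n_iter=1000):
    """Power iteration: converges toward the dominant eigenvector of A."""
    v = np.asarray(v0, dtype=float)
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)   # renormalize to avoid overflow
    eigenvalue = v @ A @ v       # Rayleigh quotient (v has unit length)
    return eigenvalue, v

A = np.array([[2.0, 0.0], [0.0, 1.0]])
print(power_iteration(A, [1.0, 1.0]))  # dominant eigenvalue 2, eigenvector near [1, 0]
```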
QR algorithm
Method that uses QR decomposition to converge to the diagonalization of \(\mathbf{A}\)
Initialize \(\mathbf{A}_{0} = \mathbf{A}\)
Update the diagonalization estimate by applying \(\mathbf{A}_{n+1} = \mathbf{Q}^{\top}_{n}\mathbf{A}_{n}\mathbf{Q}_n\)
\(\mathbf{A}_n = \mathbf{Q}_n \mathbf{R}_n\) is the QR decomposition
Repeat step 2 to desired accuracy
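A minimal unshifted QR-iteration sketch (the matrix and iteration count are illustrative; practical implementations add shifts and a Hessenberg reduction):
```python
import numpy as np

def qr_algorithm(A, n_iter=500):
    """Unshifted QR iteration; the eigenvalues appear on the diagonal of A_n."""
    A_n = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(A_n)
        A_n = R @ Q              # equivalent to Q.T @ A_n @ Q
    return np.diag(A_n)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(qr_algorithm(A))           # approximately [5, 2], the eigenvalues of A
```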
Sparse matrix
Matrix with a 'large' amount of zeroes. Such matrices may have a pattern in which nonzero elements appear, and hence may be stored with a more memory-efficient data structure than a two-dimensional array. Matrices that aren't sparse are dense
List of lists (LIL)
Coordinate list (COO)
Compressed sparse column (CSC)
Compressed sparse row (CSR)
Dictionary of keys (DOK)
Arnoldi iteration
Differential equations
Euler's method
Method to approximate IVPs \(y'=f(x,y(x)), y(x_0)=y_0 \) by incrementing from \(x_0\) using the first order Taylor series
Start at \(x=x_0 ,y=y_0\)
\(k_1 = f(x,y)\)
\(y(x+h) \approx y(x) + hk_1 \)
Repeat steps 2-3 using \(x \to x+h\) until the desired domain element is reached
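A minimal explicit-Euler sketch (the step size and example IVP are illustrative):
```python
def euler(f, x0, y0, x_end, h=0.01):
    """Explicit Euler steps for the IVP y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    while x < x_end:
        y += h * f(x, y)   # first-order Taylor step
        x += h
    return y

# Example: y' = y, y(0) = 1, so y(1) = e; first-order error remains
print(euler(lambda x, y: y, 0.0, 1.0, 1.0))
```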
Midpoint method
Variant of Euler's method that advances using the slope evaluated at the midpoint of each increment
Start at \(x=x_0 ,y=y_0\)
\(k_1 = f(x,y)\)
\(k_2 = f(x+\frac{h}{2} , y+k_1 \frac{h}{2}) \)
\(y(x+h) \approx y(x) + hk_2 \)
Repeat steps 2-4 using \(x \to x+h\) until the desired domain element is reached
Runge-Kutta method
Variant of Euler's method that advances using a weighted sum of slopes evaluated at several points within the increment; the classical fourth-order variant is sketched below
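A minimal sketch of the classical fourth-order variant (RK4), shown as one common choice of weights (the step size and example IVP are illustrative):
```python
def rk4(f, x0, y0, x_end, h=0.01):
    """Classical fourth-order Runge-Kutta for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    while x < x_end:
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average of slopes
        x += h
    return y

print(rk4(lambda x, y: y, 0.0, 1.0, 1.0))  # close to e
```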