Level of precision to which a machine's floating-point arithmetic can distinguish two numbers
\(\epsilon_{M} \)
Bracketing methods
Class of methods whose output is an upper and lower bound (a bracket) on the quantity sought
Differentiation
Forward difference
\( \Delta_{h}[f](x) = f(x+h)-f(x) \)
Backward difference
\( \nabla_{h}[f](x) = f(x)-f(x-h) \)
Central difference
\( \delta_{h}[f](x) = f(x+h/2)-f(x-h/2) \)
Numerical differentiation
\( f'(x) \approx \frac{\Delta_{h}[f](x)}{h} \)
\( f'(x) \approx \frac{\nabla_{h}[f](x)}{h} \)
\( f'(x) \approx \frac{\delta_{h}[f](x)}{h} \)
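A minimal Python sketch of the three difference-quotient approximations above; the step size h and the test function (sin) are illustrative choices, not part of the notes.

```python
import math

# Finite-difference approximations of f'(x); h is an illustrative step size.
def forward_diff(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h              # Delta_h[f](x) / h

def backward_diff(f, x, h=1e-6):
    return (f(x) - f(x - h)) / h              # nabla_h[f](x) / h

def central_diff(f, x, h=1e-6):
    return (f(x + h / 2) - f(x - h / 2)) / h  # delta_h[f](x) / h

# Compare against the exact derivative of sin at x = 1, which is cos(1).
for approx in (forward_diff, backward_diff, central_diff):
    print(approx.__name__, approx(math.sin, 1.0), "exact:", math.cos(1.0))
```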
Zero finding and optimization
Newton-Raphson method
Zero-finding method that updates an initial guess by evaluating the zero of the linear approximation (tangent line) of the function at the current guess.
Guess the zero to be at \(x_0\)
Update the guess according to \( x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_n)}\)
Repeat step 2 to desired accuracy
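A minimal Python sketch of the three steps above, assuming the derivative f' is supplied explicitly; the tolerance and iteration cap are illustrative.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson: repeatedly jump to the zero of the tangent line."""
    x = x0                             # step 1: initial guess
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:              # step 3: stop at desired accuracy
            return x
        x = x - fx / df(x)             # step 2: x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

# Example: the positive zero of f(x) = x^2 - 2 is sqrt(2).
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))
```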
Secant method
Zero-finding method that updates a pair of guesses by evaluating the zero of the secant line through the two most recent guesses.
Start with two initial guesses \(x_0, x_1\)
Update the pair according to \( x_{n+1}=\frac{x_{n-1}f(x_n) - x_{n}f(x_{n-1})}{f(x_n) -f(x_{n-1})}\)
Repeat step 2 to desired accuracy
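A minimal Python sketch of the secant iteration above; the tolerance and iteration cap are illustrative.

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: Newton's method with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:              # step 3: stop at desired accuracy
            return x1
        # step 2: x_{n+1} = (x_{n-1} f(x_n) - x_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))
        x0, x1 = x1, (x0 * f1 - x1 * f0) / (f1 - f0)
    return x1

# Example: the positive zero of f(x) = x^2 - 2 is sqrt(2).
print(secant(lambda x: x * x - 2, 1.0, 2.0))
```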
Zero bracketing method
Split the initial bracket into \(n\) subintervals
If a subinterval's endpoints map to values of opposite signs, that subinterval contains a zero and becomes the updated bracket
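A minimal Python sketch of the subinterval sign-change scan above; the interval, subinterval count, and test function are illustrative.

```python
import math

def bracket_zeros(f, a, b, n):
    """Split [a, b] into n subintervals and keep those whose endpoints change sign."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return [(xs[k], xs[k + 1])
            for k in range(n)
            if f(xs[k]) * f(xs[k + 1]) < 0]   # opposite signs => a zero inside

# Example: the zeros of cos on [0, 10] lie near pi/2, 3pi/2, 5pi/2.
print(bracket_zeros(math.cos, 0.0, 10.0, 50))
```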
Bisection method
Bracketing algorithm reminiscent of binary search
Form a bracket around the zero
Consider the two halves of this bracket; the half containing the zero (endpoints map to values of opposite signs) is the updated bracket.
Repeat step 2 to desired accuracy
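A minimal Python sketch of the bisection loop above, assuming the initial bracket [a, b] already straddles a sign change; the tolerance is illustrative.

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: halve the bracket, keep the half whose endpoints change sign."""
    fa = f(a)
    while (b - a) > tol:               # step 3: repeat to desired accuracy
        m = (a + b) / 2                # step 2: bisect the bracket
        fm = f(m)
        if fa * fm <= 0:               # zero lies in [a, m]
            b = m
        else:                          # zero lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: sqrt(2) as the zero of x^2 - 2 in [0, 2].
print(bisect(lambda x: x * x - 2, 0.0, 2.0))
```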
False position method
Bracketing variant of the secant method
Guess the zero to be bound by \( \{ x_0, x_1 \}\)
Update the bound according to \( x_{n+1}=\frac{x_{n-1}f(x_n) - x_{n}f(x_{n-1})}{f(x_n) -f(x_{n-1})}\)
Of \( \{ x_{n-1} ,x_{n+1} \} \) and \( \{ x_{n} ,x_{n+1} \} \), keep the pair that brackets the zero
Repeat step 2 on this pair to desired accuracy
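A minimal Python sketch of the false position loop above, assuming the initial pair brackets the zero; the tolerance and iteration cap are illustrative.

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: secant-style update, but always keep the pair that brackets the zero."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        # step 2: x_{n+1} from the secant through the bracket endpoints
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:              # step 4: stop at desired accuracy
            return c
        if fa * fc < 0:                # step 3: zero between a and c
            b, fb = c, fc
        else:                          # zero between c and b
            a, fa = c, fc
    return c

# Example: sqrt(2) as the zero of x^2 - 2 in [0, 2].
print(false_position(lambda x: x * x - 2, 0.0, 2.0))
```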
Dekker's method
Method that combines the bisection and secant methods, taking the secant step when it stays within the bracket and makes sufficient progress, and falling back to bisection otherwise
Brent's method
Variant of Dekker's method that uses inverse quadratic interpolation rather than a secant interpolation
Interpolate an inverse quadratic with 3 points of the function
Check if zero of interpolation is in bracket
If so, take the zero and the two closest previous points as the new set of 3 points and repeat from step 1 to desired accuracy
If not, fall back to a bisection step on the bracket and repeat from step 1
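A minimal Python sketch of just the inverse quadratic interpolation step from step 1 above, not the full method (bracket bookkeeping and the bisection fallback are omitted); the three sample points are illustrative.

```python
def inverse_quadratic_step(f, a, b, c):
    """Fit x as a quadratic in y through (a,f(a)), (b,f(b)), (c,f(c)); return its value at y = 0."""
    fa, fb, fc = f(a), f(b), f(c)
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

# One step toward sqrt(2), the zero of x^2 - 2, from three nearby points.
print(inverse_quadratic_step(lambda x: x * x - 2, 1.0, 1.5, 2.0))
```

In practice the full routine is available as scipy.optimize.brentq(f, a, b).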
Minima bracketing method
Split interval into \(n\) subintervals \([x_k , x_{k+1}]\)
If \(f(x_k) > f(x_{k+1}) \land f(x_{k+1}) < f(x_{k+2})\), then a local minimum is bracketed by \([x_k,x_{k+2}]\)
Repeat from step 1 with new bracket to desired accuracy
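A minimal Python sketch of the grid scan above; the interval, subinterval count, and test function are illustrative.

```python
def bracket_minima(f, a, b, n):
    """Split [a, b] into n subintervals; [x_k, x_{k+2}] brackets a minimum if f dips at x_{k+1}."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    ys = [f(x) for x in xs]
    return [(xs[k], xs[k + 2])
            for k in range(n - 1)
            if ys[k] > ys[k + 1] < ys[k + 2]]   # f(x_k) > f(x_{k+1}) and f(x_{k+1}) < f(x_{k+2})

# Example: (x - 1)^2 has its minimum at x = 1.
print(bracket_minima(lambda x: (x - 1) ** 2, -3.0, 3.0, 12))
```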
First derivative test
Golden section search
Jarratt's method
Brent's minimization method
Simple coordinate search
Powell's method
Downhill simplex method
Interpolation
Interpolation
Estimating new datapoints within the range of an existing set of observed datapoints. A datapoint set is represented as \(N =\{(x_0,y_0),(x_1,y_1),\ldots,(x_n,y_n)\} \)
Vandermonde polynomial interpolation
Given a datapoint set \(N\), there exists a unique polynomial \(P\) of degree \(|N|-1\) through these points. Therefore, one can set up the linear system \(V\mathbf{c}=\mathbf{y}\), with \(V\) the Vandermonde matrix whose \(i\)-th row is \((1, x_i, x_i^2, \ldots, x_i^n)\), and solve for the coefficients of this polynomial, where
\(\mathbf{c}\) is the vector of the Vandermonde polynomial coefficients
\(\mathbf{y}\) is the vector of the codomain mappings in the nodes
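A minimal NumPy sketch of solving \(V\mathbf{c}=\mathbf{y}\); the sample points (taken from \(y=x^2\)) are illustrative.

```python
import numpy as np

def vandermonde_interp(xs, ys):
    """Solve V c = y for the coefficients of the unique degree-(|N|-1) polynomial."""
    V = np.vander(xs, increasing=True)   # rows: 1, x_i, x_i^2, ..., x_i^n
    return np.linalg.solve(V, ys)        # c[k] is the coefficient of x^k

# Three points on y = x^2 recover the coefficients (0, 0, 1).
xs = np.array([0.0, 1.0, 2.0])
print(vandermonde_interp(xs, xs ** 2))
```

Note that the Vandermonde matrix becomes ill-conditioned as the number of nodes grows, which is one motivation for the Lagrange form below.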
Lagrange polynomial interpolation
Given a datapoint set \(N\), the polynomial form of the Chinese remainder theorem (CRT) implies the existence of the following
\( P(x) = \sum^{n}_{k=0} \ell_k (x) y_k\)
Subject to \( (x_k,y_k),(x_j,y_j) \in N,\ j \neq k \implies \ell_k (x_k) = 1,\ \ell_{k}(x_j)=0\)
One can calculate the \(\ell_k\) as \( \ell_k (x) = \frac{\prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x-x_j)}{ \prod_{j \in \mathbb{N}\cap[0,n]\setminus \{k\}} (x_k-x_j) } \)
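A minimal Python sketch of evaluating \(P(x)\) directly from the \(\ell_k\) products above; the sample points (again from \(y=x^2\)) are illustrative.

```python
def lagrange_interp(points, x):
    """Evaluate P(x) = sum_k l_k(x) y_k for a list of (x_k, y_k) pairs."""
    total = 0.0
    for k, (xk, yk) in enumerate(points):
        # l_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j)
        lk = 1.0
        for j, (xj, _) in enumerate(points):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += lk * yk
    return total

# Three points on y = x^2; interpolating at x = 1.5 gives 2.25.
print(lagrange_interp([(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)], 1.5))
```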
Spline interpolation
Integration
Riemann sum
Trapezoidal rule
Simpson's rule
Refined trapezoidal rule
Richardson extrapolation
Romberg integration
Nested integration
Monte Carlo integration
Linear Algebra
Power method
Method that converges to the eigenvector of the largest-magnitude eigenvalue (the dominant eigenvector), given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the dominant eigenvalue has multiplicity 1
Applied to \(A^{-1}\) (the inverse power method), it converges to the eigenvector of the smallest-magnitude eigenvalue, given that \(\mathbf{v}_0\) is not orthogonal to that eigenvector and the smallest eigenvalue has multiplicity 1
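A minimal NumPy sketch of power iteration for the dominant eigenvector; the matrix, starting vector, and iteration count are illustrative. For the smallest-magnitude eigenvalue, the same loop is applied to \(A^{-1}\) (in practice by solving \(A\mathbf{w}=\mathbf{v}\) each step).

```python
import numpy as np

def power_method(A, v0, iters=1000):
    """Power iteration: repeated multiplication by A converges to the dominant eigenvector."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)      # renormalize to avoid overflow/underflow
    eigenvalue = v @ A @ v             # Rayleigh quotient estimate of the eigenvalue
    return eigenvalue, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(power_method(A, np.array([1.0, 0.0])))
```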