Euclidean distance is derived from the following geometric property (the Pythagorean theorem):
\(a^{2}+b^{2}=c^{2}\)
\(\text{Angle sum of an } n\text{-gon} = 180^{\circ}(n-2)\)
Congruent triangles have identical side lengths and angles. The following tests are used to prove congruence:
Irrational constant representing the ratio of a circle's circumference to its diameter
\( \pi = \frac{C}{d} = \int_{-1}^{1} \frac{dx}{\sqrt{1 - x^2}}\)
The angle made from a chord and the center of the circle is double the angle made from that chord and a point on the circle
\( \angle AOC = 2 \angle ABC \)
Corollary of the inscribed angle theorem: any triangle made using the diameter of a circle, with all vertices on the circumference, is a right-angled triangle.
\( \angle ABC = \frac{\pi}{2} \)
Line segment \(r\) from a point on a circle to its center
Line segment \(d\) between two points on a circle that passes through the circle's center
\(d=2r\)
Line segment between two points on a circle
\(c=2r \sin (\frac{\theta}{2})\)
\(c=2\sqrt{-d(d+2r)}\)
A chord formed with angle \(\theta\) at the center makes an isosceles triangle with the circle center, with two sides \(r\) and base \(c\). Splitting this into two symmetric right-angled triangles resolves this formula
Smaller area partitioned by a chord
\(A= \frac{r^2}{2} (\theta - \sin(\theta)) \) where:
Area formed between the center and two points on a circle (a sector)
\(A = \frac{\theta}{2} r^2 \) where:
Length along the circumference between two points on a circle (arc length)
\(\ell = r\theta \)
Straight line without endpoints
Straight line that has endpoints
\(\frac{\sin (A)}{a} = \frac{\sin (B)}{b} =\frac{\sin (C)}{c}\)
\(a^2 = b^2 + c^2 -2bc \cos (A)\)
\( \frac{a-b}{a+b} = \frac{\tan ( \frac{\alpha - \beta}{2} ) }{\tan ( \frac{\alpha + \beta}{2} ) }\)
\( \sin \theta = (1- \cos \theta ) \tan ( \frac{ \pi - \theta }{2} ) \)
Set of ordered 2-tuples representing Cartesian coordinates on a horizontal and vertical axis respectively.
\( (x,y)\in \mathbb{R}^2\)
Set of ordered 3-tuples representing Cartesian coordinates on a depth, horizontal, and vertical axis respectively.
\( (x,y,z) \in \mathbb{R}^3\)
Any trigonometric function \( f(x)=a\sin(x)+b\cos(x)\) where \(a,b > 0\) can be written as a single trigonometric function
\( f(x)=R\sin (x+\alpha) \)
Let \(a\sin (x) +b \cos (x) = R\sin (x+\alpha)\); applying the angle addition formula gives \(a\sin (x) +b \cos (x) = R \cos (\alpha) \sin (x) + R\sin (\alpha) \cos (x)\)
By comparing both sides, it is seen that \(a=R \cos (\alpha),b= R \sin (\alpha)\), since \(\alpha\) is fixed.
\(R\) is found via \(\sqrt{a^2 + b^2} = \sqrt{R^2(\cos^2 (\alpha) +\sin^2 (\alpha))} = R\)
Rearranging for \(\alpha \) gives \(\alpha = \cos^{-1} (\frac{a}{R}) = \sin^{-1} (\frac{b}{R})\)
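As a quick numerical check of this harmonic form (a minimal Python sketch; the function name `harmonic_form` is illustrative, not from the source):

```python
import math

def harmonic_form(a, b):
    """Rewrite a*sin(x) + b*cos(x) as R*sin(x + alpha), assuming a, b > 0."""
    R = math.hypot(a, b)         # R = sqrt(a^2 + b^2)
    alpha = math.atan2(b, a)     # satisfies cos(alpha) = a/R and sin(alpha) = b/R
    return R, alpha

a, b = 3.0, 4.0
R, alpha = harmonic_form(a, b)
for x in (0.0, 0.7, 2.1):
    assert abs(a * math.sin(x) + b * math.cos(x) - R * math.sin(x + alpha)) < 1e-9
print(R, alpha)  # 5.0 0.927...
```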
For \( \csc, \sec\), simply reciprocate. For \( \tan\), divide sine and cosine values. As for inverse functions, work backwards.
If the sum of the indices of each trigonometric term is the same, the equation is said to be homogeneous of the degree of that sum. To solve these, divide through by the appropriate power of \( \cos \) so as to rewrite everything in terms of \( \tan \)
\(\pi \text{ rad} = 180^{\circ}\)
Value that exclusively represents magnitude
Array of Cartesian values that forms a mathematical object with direction and magnitude
\( \textbf{v} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \)
Scalar representing the magnitude of a vector
\( \| \textbf{v} \| =\sqrt{\sum^{n}_{i=1} v_{i}^2}\)
\( \| \textbf{v} \| \geq 0\)
\( \| \textbf{v} \| = 0 \iff \textbf{v} =\textbf{0} \)
\( \| \textbf{v} + \textbf{u} \| \leq \| \textbf{v} \| + \| \textbf{u} \| \)
\( \| c\textbf{v} \| = |c| \| \textbf{v} \|\)
Vector with a norm of 1. The following formula turns any vector into a unit vector:
\(\hat{a}=\frac{\textbf{a}}{ \|\textbf{a}\| }\)
A special set of vectors \(\mathcal{E} = \{\hat{i}, \hat{j}, \hat{k}\}\) such that
It is noteworthy that \(\hat{i}, \hat{j},\hat{k}\) each have a magnitude of 1 and are all orthogonal (perpendicular) to each other.
The elementary basis allows any vector \(\textbf{v}\) to be decomposed and rewritten in the following 'position vector' form
\( \textbf{v} = x\hat{i} + y\hat{j} + z\hat{k} \)
Vector operation that takes two vectors and returns a scalar. It represents the product of the two vectors' magnitudes scaled by the cosine of the angle at the common base of the vectors; hence the returned scalar accounts for:
\(\textbf{u} \cdot \textbf{v}= \sum^{n}_{i=1} u_i v_i\)
Vector operation \(\text{proj}_{\textbf{u}}(\textbf{v})\) that returns the component vector of \(\textbf{v}\) along the direction of vector \(\textbf{u}\)
\( \text{proj}_{\textbf{u}}(\textbf{v}) = (\frac{\textbf{v} \cdot \textbf{u}}{\textbf{u} \cdot \textbf{u}})\textbf{u}\)
The component of \(\textbf{v}\) in the direction of \(\textbf{b}\) must have the same direction as \( \textbf{b}\); hence it is merely \(\hat{b}\) dilated by some scalar \(k\), i.e. \(k\hat{b}\). By geometric reasoning this constant is \(k= \|\textbf{v}\| \cos (\theta)\), and rewriting in terms of the dot product gives \(k= \|\textbf{v}\| \cos (\theta) = \frac{\|\textbf{b}\|\|\textbf{v}\|\cos (\theta)}{\|\textbf{b}\|} = \frac{\textbf{v} \cdot \textbf{b}}{\|\textbf{b}\|}\)
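A minimal Python sketch of the projection formula (helper names are illustrative):

```python
def dot(u, v):
    """Dot product: sum of componentwise products."""
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(u, v):
    """Component vector of v along the direction of u: (v.u / u.u) u."""
    k = dot(v, u) / dot(u, u)
    return [k * ui for ui in u]

print(proj([1, 0, 0], [3, 4, 5]))  # [3.0, 0.0, 0.0]
```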
Adding two vectors is geometrically equivalent to placing the starting point of \(\textbf{b}\) at the tip of \(\textbf{a}\) (or vice versa, due to addition's commutative property)
\( ( \textbf{u} + \textbf{v} )_{i} = u_i + v_i \)
\( ( \textbf{u} - \textbf{v} )_{i} = u_i - v_i \)
Vectors can be subtracted from each other to find the vector giving the position of point B relative to point A, which is useful when finding planes
A product of two vectors that returns a vector normal to both input vectors
Vector operation that takes two vectors and returns a vector perpendicular to the input vectors. Its magnitude is the product of the two vectors' magnitudes scaled by the sine of the angle at the common base of the vectors, hence:
\( \textbf{u} \times \textbf{v}=|\textbf{u}||\textbf{v}|\sin (\theta) \hat{n}\)
When taking the cross product \(\textbf{a} \times \textbf{b}\), the direction of the resulting vector is parallel to your right thumb as you curl your fingers from \(\textbf{a}\) to \(\textbf{b}\)
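A small sketch of the componentwise cross product, checking the right-hand rule on the basis vectors (illustrative, not from the source):

```python
def cross(u, v):
    """Cross product of two 3-vectors; the result is perpendicular to both."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

i, j = [1, 0, 0], [0, 1, 0]
print(cross(i, j))  # [0, 0, 1], i.e. i x j = k, matching the right-hand rule
```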
A line can be interpreted as a vector scaled by a parameter (so that it can reach any point on the line), translated away from the origin by another vector, or as an equation:
\(\textbf{r}(t)=\textbf{u}+t\textbf{v}\)
Without using vectors, a line can be represented as the intersection of two planes, or by parametric equations
\( (x - x_0)^{2}+ (y - y_0)^{2}+(z - z_0)^{2}=r^{2}\)
Furthermore, the vector equation
\( r = \| \textbf{x} - \textbf{u} \| \)
\(\textbf{r}(s,t) = \textbf{u}+s\textbf{v}+t\textbf{w}\)
\(\textbf{r}-\textbf{a}\) is parallel to the plane, hence \((\textbf{r} - \textbf{a}) \cdot \textbf{n} = 0\)
\(d=ax+by+cz\)
The difference between any two points on the same plane is a vector parallel to the plane
Array containing \(m\) rows and \(n\) columns of numbers
\(\textbf{A}=\begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{bmatrix}\)
A matrix has a dimension denoted as \( m \times n \)
Previously, a geometry-based perception of vectors was defined. In an algebra-based sense, a vector is a matrix with one column (\(n=1\))
\(\textbf{v}=\begin{bmatrix} v_{1} \\ v_{2} \end{bmatrix}\)
Matrices are equal when:
Matrices can be added together iff they have the same dimensions
\( (\textbf{A}+\textbf{B})_{ij} = \textbf{A}_{ij}+\textbf{B}_{ij}\)
Matrix with all quantities as zero
\(\textbf{0} = \begin{bmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\)
\(\textbf{A}+\textbf{0}=\textbf{A}\)
\(\textbf{A}\textbf{0}=\textbf{0}\)
Matrices can be multiplied by a scalar by applying the scalar to every quantity in the matrix
\(k\begin{bmatrix}2 & 4 \\ 6 & 8 \end{bmatrix} =\begin{bmatrix}2k & 4k \\ 6k & 8k \end{bmatrix} \)
Matrices can only be multiplied together if the number of columns in the first matrix equals the number of rows in the second matrix
\((\textbf{A}\textbf{B})_{ij}=\sum_{k=1}^{n}\textbf{A}_{ik}\textbf{B}_{kj}\)
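A direct Python translation of this definition (a sketch; real code would use a linear-algebra library):

```python
def matmul(A, B):
    """(AB)_ij = sum over k of A_ik * B_kj; needs cols(A) == rows(B)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "cols(A) must equal rows(B)"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```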
A square matrix \(\textbf{I}\) such that:
See Linear Algebra for information on the Kronecker delta function
\(\textbf{I}_{3}=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\)
Square matrices \(\textbf{A} : n \times n\) may have an inverse matrix, which is a unique matrix such that:
\( \textbf{A}^{-1} :\textbf{A}^{-1}\textbf{A}=\textbf{A}\textbf{A}^{-1}=\textbf{I}\)
By definition only square matrices may have an inverse matrix.
Formula to quickly invert \(2 \times 2\) matrices, where \(\Delta = ad - bc\)
\(\begin{bmatrix}a & b \\ c & d\end{bmatrix}^{-1}=\frac{1}{\Delta}\begin{bmatrix}d & -b \\ -c & a\end{bmatrix}\)
The operation of swapping the row index and column index, denoted as \(\textbf{A}^{T}\).
\( \textbf{A}^{T} : a_{ij} \to a^{T}_{ji} \)
\(\textbf{A} \text{ is symmetric} \iff \textbf{A} = \textbf{A}^{T}\)
In situation \(\textbf{A}\textbf{x}=\textbf{b}\) where \(\textbf{A}\) is a square matrix:
Let \(I_j(\textbf{x}) \) be the identity matrix with column \(j\) swapped for \(\textbf{x}\), and \(A_j(\textbf{b})\) be \(A\) with column \(j\) swapped for \(\textbf{b}\). Then by matrix multiplication \(A I_j (\textbf{x}) = A_j (\textbf{b}) \) (since \(A \textbf{x} = \textbf{b}\)). Determinants have a multiplicative property, so \(\text{det} (A) \text{det} (I_{j} (\textbf{x})) = \text{det} (A_j (\textbf{b})) \). Because \(\text{det}(A) \neq 0\) and \(\text{det} (I_j (\textbf{x})) = \textbf{x}_j\), it follows that \(\textbf{x}_j = \frac{\text{det}(A_j(\textbf{b}))}{\text{det}(A)}\), which is Cramer's rule.
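A minimal sketch of Cramer's rule using NumPy for the determinants (assumes \(\det(A) \neq 0\)):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's rule: x_j = det(A_j(b)) / det(A)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                      # A_j(b): column j swapped for b
        x[j] = np.linalg.det(Aj) / d
    return x

print(cramer([[2, 1], [1, 3]], [5, 10]))  # [1. 3.]
```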
In augmented matrices, the following operations are legal:
These operations essentially use the information of simultaneous equations to reform the equations into a desired form
Through row operations, there is an \(O(n^3)\) algorithm (Gaussian elimination) for solving augmented matrices, sketched below:
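A minimal Python sketch of that algorithm (with partial pivoting for numerical stability; names are illustrative):

```python
def solve(A, b):
    """Gaussian elimination on the augmented matrix A|b, then back-substitution: O(n^3)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]       # build the augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]            # row swap (a legal row operation)
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]          # eliminate below the pivot
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                     # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))    # [1.0, 3.0]
```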
Matrix with upper right corner filled with 0s
Matrix with lower left corner filled with 0s
Matrix filled with 0s except at indices where the column number and row number are the same
\( \textbf{M} \text{ is diagonal} \iff ( i \neq j \implies m_{ij} = 0) \)
\( \textbf{M} \text{ is diagonal} \implies \textbf{M}^k = \begin{bmatrix} m^{k}_{11} & 0 & \ldots & 0 \\ 0 & m^{k}_{22} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & m^{k}_{nn} \end{bmatrix} \)
A leading entry is the first non-zero value in a row or column; for instance, you can find the leading entry of the 5th column, the 2nd row, the 3rd column and so forth
Combining two matrices together side by side; the augmentation of A and B is represented as \(\textbf{A}|\textbf{B}\). Its purpose is primarily to be a neat layout when performing Gaussian elimination techniques.
Rectangular matrix satisfying the following criteria:
\(\begin{bmatrix}a & b & c \\ 0 & d & e \\ 0 & 0 & f\end{bmatrix}\)
Along with satisfying echelon form, reduced echelon forms also satisfy the following:
When converting from echelon form to reduced echelon form, the pivot points are the leading entries of each row, all of which are turned into 1s
Scalar function \(\text{det}(\textbf{M}_n)\) that characterises the existence of an inverse matrix for some square matrix \(\textbf{M}_n\), similar to how a polynomial's discriminant characterises the existence of real roots.
\(\text{det} (\textbf{M}_n) = 0 \iff \textbf{M}_{n}^{-1} \text{ does not exist}\)
Geometrically, the determinant \(\text{det}(M_n)\) is the signed area/volume of the image of the unit \(n\)-cube after each of its vertex vectors is multiplied by \(M_n\)
\(\text{det}(M_{2})=\begin{vmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{vmatrix}=m_{11}m_{22}-m_{12}m_{21}\)
\(\text{det}(M_{3})= m_{11}(m_{22}m_{33} - m_{23}m_{32}) -m_{12}(m_{21}m_{33} - m_{23}m_{31}) + m_{13}(m_{21}m_{32} - m_{22}m_{31}) \)
A scalar that is a part of an expansion to return a determinant
\(C_{ij}=(-1)^{i+j}\text{det} (\textbf{M}_{ij})\)
Where the \(\textbf{M}_{ij}\) notation means the submatrix of \(\textbf{M}\) containing every entry, in the same order, that does not share row \(i\) or column \(j\)
\(\text{det} (\textbf{M}_{n}) = \sum_{j=1}^{n}m_{ij}C_{ij} : (i \in \mathbb{N}) \land (i \leq n)\)
\(\text{det} (\textbf{M}_{n}) = \sum_{i=1}^{n}m_{ij}C_{ij} : (j \in \mathbb{N}) \land (j \leq n)\)
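The cofactor expansion translates directly into a recursive determinant (fine for notes, though it runs in \(O(n!)\) time, so it is impractical beyond small matrices):

```python
def minor(M, i, j):
    """Submatrix M_ij with row i and column j removed."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Laplace (cofactor) expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[6, 1, 1], [4, -2, 5], [2, 8, 7]]))  # -306
```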
Sum of all diagonal terms of a square matrix
\( \text{tr}(\textbf{M}) = \sum_{j=1}^{n} m_{jj} \)
\(\textbf{A}\textbf{x}=\textbf{0}\)
\(\textbf{A}\textbf{x}=\textbf{b} : \textbf{b} \neq \textbf{0}\)
\(\textbf{A}\textbf{x}=\textbf{0} \land \textbf{A}\textbf{y}=\textbf{b} \implies \textbf{A} ( \textbf{x} + \textbf{y} ) = \textbf{b}\)
See discrete math
Mathematical condition that may hold between different elements of a set; examples include inequalities, equations, functions, Diophantine equations, etc.
A mathematical object \(f : X \to Y\) that takes an object in set \(X\) (domain) as an input and produces an output as some object in set \(Y\) (codomain)
\(f \text{ is injective } \iff (f(x)=f(y) \implies x=y) \)
For some injective function \(f : X \to Y\), there is an inverse function \( f^{-1} : Y \to X\). When a function is not injective, one can restrict the domain to ensure an injection and then invert
\(f( f^{-1}(x) ) = x\)
Function of the product of all natural numbers up to a specific integer, with \(0!=1\)
\(n!=(n-1)!n\)
\(n!=\prod_{k=1}^{n}k\)
\(\text{exp}(z)=e^{z}\)
\( \text{exp} : \mathbb{C} \to \mathbb{C}\)
\( \text{exp}(x+iy) = e^x \cos y + ie^x \sin y \)
\( y'=y \)
Inverse function of the exponential function for real numbers
\( \ln : (0,\infty) \to \mathbb{R}\)
Geometrically, the trigonometric functions are interpreted as coordinates of a point on the unit circle \( x^2 + y^2 =1 \), which makes them \(2\pi\)-periodic
Analytically, they are defined as the solutions to the following IVP
\( \sin z = \sum^{\infty}_{n = 0} \frac{(-1)^n z^{2n+1}}{(2n+1)!} \)
\( \sin z = \frac{e^{iz} - e^{-iz}}{2i} \)
\( \sin (x+iy) = \sin x \cosh y + i \cos x \sinh y \)
\( \cos z = \sum^{\infty}_{n = 0} \frac{(-1)^n z^{2n}}{(2n)!} \)
\( \cos z = \frac{e^{iz} + e^{-iz}}{2} \)
\( \cos (x+iy) = \cos x \cosh y - i \sin x \sinh y \)
The value that \(f(x)\) approaches as \(x\) approaches \(a\), shown with \(\lim_{x \to a} f(x)=L\). More specifically, you can have:
\(L=\lim_{x \to a} f(x) \iff \lim_{x \to a^{-}} f(x)=\lim_{x \to a^{+}} f(x)\)
See Real Analysis for a more mathematically correct definition of a limit.
Irrational constant that is the result of an 'infinite exponent' to a base which is 'infinitely close' to 1
\( e = \lim_{n \to \infty} (1 + \frac{1}{n})^{n} = \sum_{k=0}^{\infty} \frac{1}{k!}\)
\(bf[a(x-h)]+v\)
\( f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\)
\( (fg)'(x) = f(x)g'(x)+ f'(x)g(x) \)
\( (\frac{f}{g})'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)} \)
\( (f \circ g)'(x) = f'(g(x))g'(x)\)
\(\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx}\)
\( (f^{-1})' \circ f = \frac{1}{f'} \)
\(\frac{d}{dx}(c) = 0\)
\(\frac{d}{dx}(x^{n}) = nx^{n-1}\)
\(\frac{d}{dx}(a^{f(x)}) = f'(x) \ln (a) a^{f(x)}\)
Series that approximates a function and, for analytic functions, converges to it
\( f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)(x-a)^n}{n!}\)
See Real Analysis for more information on Taylor series.
Recursive formula for computing the roots of a function. It is based on the idea that the gradient of the line from a point \((x_n, f(x_n))\) on the curve to the next root estimate \((x_{n+1}, 0)\) should equal the derivative at \(x_n\) (since the derivative is the function that represents a function's gradient). This means:
\(f'(x_{n})=\frac{f(x_{n}) - 0}{x_n - x_{n+1}}\)
\( \implies x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_n)}\)
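A minimal Python sketch of the iteration (tolerance and iteration cap are illustrative choices):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2)
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.4142135623...
```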
Infinitesimal relating to the difference of some variable. This concept is not rigorous; however, it produces accurate results for engineering, physics, etc.
\( \Delta x = x_1 - x_0 \)
\( \Delta y = f(x_1) - f(x_0) \)
\( dy = \frac{dy}{dx} dx\)
\(\Delta y \approx \frac{dy}{dx} \Delta x\)
For implicit equations, one can differentiate the \(x\) terms directly and apply the chain rule when differentiating \(y\)
A linear approximation of a function at a specific point. For instance, at point \(x_{0}\):
\(f(x)-f(x_{0}) \approx f'(x_{0})(x-x_{0})\)
Considering the function \(f\), there may exist a function \(F\) such that when differentiated returns \(f\)
\(F \text{ is an antiderivative of } f \iff F' = f\)
Simply the antiderivative
\(\displaystyle \int f(x)dx = F(x) + C\)
The area bound between function \(f\) and the baseline \(y=0\) on some interval \( (a,b) \)
\(I = \int_{a}^{b} f(x)dx\)
Numerical method of approximating integrals without employing the FTC. It is a consequence of Riemann's definition of the integral; see Real Analysis
\(\int^{b}_{a} f(x) dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_{i}) \Delta x\)
Improved variant of the Riemann sum
\(\int^{b}_{a} f(x) dx = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{f(x_{i-1})+f(x_{i})}{2} \Delta x\)
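Both sums are easy to compare numerically (a sketch; `n` controls the resolution):

```python
def riemann(f, a, b, n):
    """Left Riemann sum over n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

def trapezoid(f, a, b, n):
    """Trapezoidal rule: average adjacent samples instead of taking the left one."""
    dx = (b - a) / n
    return sum((f(a + (i - 1) * dx) + f(a + i * dx)) / 2 for i in range(1, n + 1)) * dx

# The integral of x^2 on [0, 1] is exactly 1/3
print(riemann(lambda x: x * x, 0.0, 1.0, 1000))    # ~0.33283 (undershoots)
print(trapezoid(lambda x: x * x, 0.0, 1.0, 1000))  # ~0.3333335 (much closer)
```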
Theorem asserting that the indefinite integral of a function is its anti-derivative. See Real Analysis
\( \frac{d}{dx}\int_{a}^{x} f(t) dt = f(x) \)
\(F' =f \implies \int_{a}^{b} f(x) dx = F(b)-F(a) \)
\( f(-x)=f(x) \implies \int_{-k}^{k}f(x)dx= 2 \int_{0}^{k}f(x)dx\)
\( f(-x)=-f(x) \implies \int_{-k}^{k}f(x)dx= 0\)
Encapsulating a subfunction as a variable \(u\) and using the chain rule in reverse provides the following
\( \int (f \circ u)(x) u'(x) dx = \int f(u) du \)
Using the product rule in reverse provides the following
\(\int uv'(x) dx = uv-\int vu' dx\)
One generally gets better results by choosing \(u\) to be the factor that is more 'reducible', i.e. that changes more dramatically when differentiated. This acronym (LIATE) gives a good order of preference:
Logarithmic
Inverse trigonometric
Algebraic
Trigonometric
Exponential
For rational functions of polynomials, algebraically decomposing the fraction into partial fractions is optimal for integration
Let \(p(x),q(x)\) be polynomials such that \( \deg (p) \lt \deg (q) \)
\(q(x) = \prod^{\deg (q)}_{n=1} q^{d}_{n}(x) : \deg (q^{d}_{n}) = d\)
\(\frac{p(x)}{q(x)} = \sum_{n=1}^{\deg (q)} \frac{k_n(x)}{q^{d}_{n}(x)} : \deg (k_n) = \deg (q^{d}_{n}) -1\)
To evaluate the coefficients of each \(k_n\), solve the equality at each \(x\) satisfying \(q^{d}_{n}(x) = 0 \); a short sketch follows
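In practice a computer algebra system can perform the decomposition; a minimal SymPy sketch (the example fraction is illustrative):

```python
from sympy import apart, symbols

x = symbols('x')
# Decompose (3x + 5) / ((x - 1)(x + 2)) into partial fractions
print(apart((3 * x + 5) / ((x - 1) * (x + 2)), x))
# -> 8/(3*(x - 1)) + 1/(3*(x + 2))
```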
Recurrence relation relating an integral to other integrals that may be simpler to calculate.
\( \displaystyle W_n = \int^{\frac{\pi}{2}}_{0} \sin^n (x) dx\)
\( W_n = \frac{n-1}{n} W_{n-2}\)
\( \int \sin^m (x) \cos^n (x) dx\)
\( \int \sec^m (x) \tan^n (x) dx\)
For trigonometric functions, the substitution \(t=\tan (\frac{x}{2})\) can be made with the following formulae being applied:
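\( \sin x = \frac{2t}{1+t^{2}}, \quad \cos x = \frac{1-t^{2}}{1+t^{2}}, \quad dx = \frac{2\,dt}{1+t^{2}} \)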
For linear \(f\) (so that \(f'\) is constant):
\(\int e^{f(x)} dx = \frac{e^{f(x)}}{f'(x)} +C\)
\(\int k^{f(x)} dx = \frac{k^{f(x)}}{f'(x) \ln (k)} +C\)
The following functions have the algebraic semblance of trigonometric ratios (via the Pythagorean identities), hence substitution of a trigonometric variable can be performed to prove the following.
To find the volume of the solid formed by rotating a function about the x-axis, you can use the following formula:
\( \int^{b}_{a} \pi f(x)^2 dx\)
This is because when rotating around the x-axis, \( f(x) \) is a radius, and the area of the disc it traces is \(\pi r^2\); integrating all those disc areas gives the volume, as in the sketch below.
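A numerical sketch of the disc method (function and interval are illustrative):

```python
import math

def volume_of_revolution(f, a, b, n=100_000):
    """Disc method: V = integral of pi * f(x)^2 dx, via a midpoint sum."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

# Rotating f(x) = x about the x-axis on [0, 1] gives a cone of volume pi/3
print(volume_of_revolution(lambda x: x, 0.0, 1.0))  # ~1.047197...
```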
A number satisfying the following equations
A number representing the sum of a real number and an imaginary number. The set of these numbers is denoted \(\mathbb{C}\). Technically, this is the field extension of the real numbers by the imaginary unit, \(\mathbb{R}(i)\)
\(\mathbb{C} = \{ z : z = x + iy, x,y \in \mathbb{R} \} \)
\(z = x+yi\)
Unary complex operator that inverts the sign of the imaginary part
\(z = x+yi \implies \bar{z},z^{*} = x-yi\)
\(z = x+yi \implies \bar{z},z^{*} = \Re(z)-i\Im(z)\)
Real function describing the distance of a complex number from 0. It is also called the complex norm
\(|x+iy|=\sqrt{x^{2}+y^{2}}\)
\(|z|=\sqrt{ \Re(z)^2+\Im(z)^{2}}\)
The algebra is the same as for real numbers: \(i\) can be manipulated like an algebraic variable (subject to its unique rule \(i^2 = -1\)). Two complex numbers are equal iff their real and imaginary parts are both identical
\(\arg (z) = \theta \in (-\pi,\pi] : z = |z| e^{i \theta}\)
To represent a complex number in polar coordinates rather than Cartesian, you can use the polar form:
\(z = r(\cos (\theta)+i\sin (\theta)) = r \text{cis} (\theta) \)
\(\forall z \in \mathbb{C} [ e^{i z} = \cos ( z ) + i \sin ( z ) ] \)
\(z = re^{i\theta}\)
You should be able to prove all identities in this entire document
\(||z| - |w|| \leq |z+w| \leq |z| + |w| \)
\(z^n = re^{i \theta}\)
\(z = r^{\frac{1}{n}} e^{\frac{i(\theta + 2\pi k)}{n}}, k \in \mathbb{N}\cap[0,n)\)
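A short sketch with Python's `cmath` enumerating all \(n\) roots:

```python
import cmath

def nth_roots(z, n):
    """All n solutions of w^n = z: w = r^(1/n) * exp(i(theta + 2*pi*k)/n)."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

# Cube roots of 8: 2, and 2*exp(+-2*pi*i/3)
for w in nth_roots(8, 3):
    print(w, w ** 3)  # each w**3 is (approximately) 8
```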
Method to find complex zeroes of \(f(z)=az^2 + bz + c\)
\(z^2+\lambda^2=(z+i\lambda)(z-i\lambda)=0 \implies z=\pm i\lambda\)
Simultaneous equations that the real and imaginary parts of a complex number satisfy in order to be a square root; however, it is probably easier to use Euler's formula.
\(z = x+iy \land w=a+ib \land z^2 = w \implies \)
\( z^n = r^{n} e^{ni\theta}\)
\( z^n = r^{n} (\cos (n\theta) + i\sin (n\theta) ) \)
\( z^{\frac{1}{n}} = r^{\frac{1}{n}} e^{\frac{i(\theta+2\pi k)}{n}}\)
\( k \in \mathbb{N} \cap [0,n)\)
The highest order derivative in a differential equation
The highest power applied to a derivative in a differential equation
Equations relating a function and its differential functions, for instance, \(y'=y\) is a differential equation representing functions where the first derivative of a function equals the function itself. The solution to this equation is \(y=ke^x\)
There are multiple types of differential equations such as:
Specially devised function multiplied through an equation to facilitate integration. It is used in the solution of first-order linear equations
\(y' =f(y)g(x)\)
\(y' + q_1 (x) y = q_2 (x)\)
\( y'' + m y' + n y = 0 \)
Additional condition that a DE's solution must satisfy (commonly the value of the function or its derivatives at a certain point)
\(y(x_0) = y_0\)
\(\begin{cases} y(x_0) = y_0 \\ y(x_1) = y_1 \end{cases} \)
Differential equation provided with an initial condition
Differential equation provided with boundary condition
\( y'' + m y' + n y = q (x) \)
See Real Analysis
\( \sum ka_{n} = k \sum a_{n}\)
\( f(x)=ax^2 + bx + c \land (\exists \alpha,\beta \in \mathbb{R}| \alpha\beta=ac \land \alpha+\beta=b) \implies f(x)=ax^2+\alpha x +\beta x + c=(ax+\alpha)(x+\frac{\beta}{a}) \)
\( \deg(P)=2 \land P(x)=0 \implies x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)
To obtain a single variable, the proof hinges on the fact that in the quadratic case the square can be completed by adding the appropriate term; here, adding \( \frac{b^2}{4a^2} \) to obtain \(x^2 + 2(\frac{b}{2a})x+ \frac{b^2}{4a^2} = (x+\frac{b}{2a})^2 \) is the ideal form, since it adds no terms that include \(x\). A formal proof is offered below.
\(ax^{2}+bx+c=0\)
\(x^{2}+\frac{b}{a}x+\frac{b^2}{4a^2}+\frac{c}{a}=\frac{b^2}{4a^2}\)
\( (x+\frac{b}{2a})^{2}+\frac{c}{a}=\frac{b^2}{4a^2}\)
\( (x+\frac{b}{2a})^{2}=\frac{b^2 -4ac}{4a^2}\)
\( x+\frac{b}{2a}= \pm \sqrt{\frac{b^2 -4ac}{4a^2}}\)
\( x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)
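A direct implementation of the formula; `cmath.sqrt` keeps it valid when the discriminant is negative (a sketch):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of ax^2 + bx + c = 0 via the quadratic formula."""
    sq = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles Delta < 0
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

print(quadratic_roots(1, -3, 2))  # ((2+0j), (1+0j))
print(quadratic_roots(1, 0, 1))   # (1j, -1j) -- negative discriminant
```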
A value that characterises the existence of real roots for a polynomial
\(\Delta = b^2 - 4ac\)
\(\Delta = 0 \iff r_{0} = r_{1} \in \mathbb{R}\)
\(\Delta > 0 \iff r_{0},r_{1} \in \mathbb{R} \)
\(\Delta < 0 \iff r_{0},r_{1} \notin \mathbb{R} \)
Quadratic case of binomial theorem (see Discrete Mathematics)
\( x^2 +2kx + k^2 = (x+k)^2 \)
\( x^2 -2kx + k^2 = (x-k)^2 \)
\( x^n - y^n = (x-y)(\sum_{k=0}^{n-1} x^{n-1-k} y^{k} ) \)
\( x^2 - y^2 = (x+y)(x-y) \)
\(P(x)=\sum_{n=0}^{\deg (P)} a_{n} x^{n}\)
\(P(x)=a\prod_{n=1}^{\deg (P)} (x - c_{n})\)
Polynomial degree refers to the highest power in a polynomial
Domain element of a function that evaluates to zero, also called a root
\(c \text{ is a zero of }f \iff f(c)=0\)
Property of a zero regarding how many times a polynomial has that same root
For instance, the polynomial \(P(x)=(x-6)^2 (x-2)\) has the root 6 with a multiplicity of 2
A polynomial of degree \(n\) has \(n\) complex roots (not all may be real)
\( \deg (p(x)) = n \iff p(x)= a\prod_{k=1}^{n} (x - z_k) : z_k \in \mathbb{C} \)
\(\exists \frac{p}{q} \in \mathbb{Q} : \gcd(p,q) = 1 \land P(\frac{p}{q}) = 0 \implies p|a_0 \land q|a_{\deg(P)}\)
\(P(\frac{p}{q})=0 \implies \)
\(\sum_{n=0}^{\deg (P)} a_{n} (\frac{p}{q})^{n} = 0\)
\(\sum_{n=0}^{\deg (P)} a_{n} p^{n} q^{\deg(P) -n} = 0\)
\(p(\sum_{n=1}^{\deg (P)} a_{n} p^{n-1} q^{\deg(P) -n}) = -a_{0} q^{\deg (P)}\)
By Euclid's lemma (see Discrete Mathematics), \(p | a_{0}\), since \(p\) cannot divide \(q^{\deg (P)}\) due to the assumption that \(\gcd(p,q)=1 \)
\(q(\sum_{n=0}^{\deg (P) -1} a_{n} p^{n} q^{\deg(P) -n-1}) = -a_{\deg (P)} p^{\deg (P)}\)
By similar reasoning, \(q | a_{\deg(P)}\)
To divide polynomials into the form \(\frac{P(x)}{D(x)} = Q(x) + \frac{R(x)}{D(x)} \):
\( P(x) = Q(x)(x-a) + P(a) \)
The remainder of dividing \(P(x)\) by \(x-a\) is \(P(a)\). This is because fundamentally \(P(x) = Q(x)(x-a) + r\) (\(r\) is a constant, since the degree of the remainder is 0 according to the polynomial division theorem), and when \(x = a\), \(P(a) = Q(a)(0) + r\): the term multiplying the quotient and divisor zeroes out
As a natural extension of the remainder theorem, \((x-a)\) is a factor of \(P(x)\) if and only if \(P(a)=0\)
\( (x-a) | P(x) \iff P(a)=0\)
If a polynomial has as many real zeroes as its degree, then it has the form \(P(x) = a \prod^{\deg (P)}_{k=1} (x-a_k)\), where \(a_{k}\) represents each zero. This is because if any factor in the product zeroes out, due to \(x\) being equal to one of the \(a_{k}\), the whole polynomial zeroes out due to multiplication of the rest of the function by zero
For \(ax^2 + bx + c\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta\):
For \(ax^3 + bx^2 + cx + d\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta,\gamma\):
For \(ax^4 + bx^3 + cx^2 + dx + e\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta,\gamma,\delta\):
Let \(d= \deg(P(x))\), let \(r_{ij}\) be the sequence of combinations of \(k\) zeroes multiplied together, let \(a_n\) be the sequence of polynomial coefficients, and let \(k \in \mathbb{N} \cap [1,d]\); a numeric check follows the formula
\( \sum_{i=1}^{\binom{d}{k}} (\prod^{k}_{j=1} r_{ij}) = (-1)^{k} \frac{a_{d-k}}{a_{d}} \)
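A numeric check of these relations on a cubic with known zeroes (a sketch; the example polynomial is illustrative):

```python
import numpy as np
from itertools import combinations
from math import prod

# P(x) = 2x^3 - 12x^2 + 22x - 12 has zeroes 1, 2, 3
coeffs = [2, -12, 22, -12]          # [a_d, ..., a_0], highest degree first
roots = np.roots(coeffs)
d = len(coeffs) - 1
for k in range(1, d + 1):
    lhs = sum(prod(c) for c in combinations(roots, k))
    rhs = (-1) ** k * coeffs[k] / coeffs[0]   # (-1)^k * a_{d-k} / a_d
    print(k, complex(lhs).real, rhs)          # the two columns agree
```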
If a zero has a multiplicity higher than one, it is a stationary point
A zero has a multiplicity of \(m\) if \(P(x) = (x-a)^{m} Q(x)\) with \(Q(a) \neq 0\). It can be shown with calculus that zeroes with multiplicity above 1 are stationary points: differentiating and factoring gives \( P'(x) = (x-a)^{m-1}R(x) \) where \(R(x) = mQ(x)+(x-a)Q'(x)\), and since \(Q(a) \neq 0\), substituting shows \(R(a) \neq 0\). This proves that the factor \((x-a)^{m-1}\) does the zeroing out, so these roots are also stationary points. Intuitively, if the multiplicity is odd then the stationary point is not a turning point (it is a stationary inflection); if it is even, the graph touches the axis and turns
All polynomials with real coefficients can be written as the product of linear factors and irreducible quadratics (the latter arising when a factor has no real zeroes). A basic form of this, as well as a proof, is below:
When all zeroes are real, \( P_{n} (x) = a_{n} \prod_{k=1}^{n} (x-\alpha_{k}) \)
When all zeroes are complex, \( P_{n} (x) = a_{n} \prod_{k=1}^{\frac{n}{2}} (x-\alpha_{k})(x-\overline{\alpha_{k}}) = a_{n} \prod_{k=1}^{\frac{n}{2}} (x^2-2 \Re (\alpha_{k})x + |\alpha_{k}|^2) \)
With both real and complex zeroes, \( P_{n} (x) = a_{n} (\prod_{k=1}^{j} (x-\alpha_{k})(x-\overline{\alpha_{k}})) (\prod_{l=2j+1}^{n} (x-\alpha_{l})) \) (the first product deals with all \(j\) complex-conjugate pairs, which is why the real roots start at index \(2j+1\); from the all-complex case, this too reduces to linear factors and irreducible quadratics)
Sequence of polynomials \(T_n\) relating to cosine relations of an angle with factor \(n\)
\(T_n : T_n ( \cos(\theta) ) = \cos (n \theta) \)
Sequence of polynomials \(U_n\) relating to sine relations of an angle with factor \(n\)
\(U_n : U_{n-1} ( \cos(\theta) ) \sin (\theta) = \sin (n \theta) \)
Sequence of polynomials \(P_n\) forming an orthogonal basis for \(L^2 [-1,1]\) with weight \(1\)