33130 - Mathematics 1


Geometry

Pythagorean theorem

For a right-angled triangle with legs \(a,b\) and hypotenuse \(c\) (this property underlies Euclidean distance):

\(a^{2}+b^{2}=c^{2}\)

Angle sum

\(\text{Angle sum of an n-gon} = 180(n-2)\)

Angle equivalences

Triangle congruences

Congruent triangles have the same shape and size (equal corresponding sides and angles). The following criteria are used to prove congruence:

Pi (\( \pi \))

Irrational constant representing the ratio between a circle's circumference and diameter

\( \pi = \frac{C}{d} = \int_{-1}^{1} \frac{dx}{\sqrt{1 - x^2}}\)

Inscribed angle theorem

The angle made at the centre of a circle from a chord is double the angle made from that chord at a point on the circle

\( \angle AOC = 2 \angle ABC \)

Thales' theorem

A corollary of the inscribed angle theorem: any triangle with the diameter of a circle as one side and all vertices on the circumference is a right-angled triangle.

\( \angle ABC = \frac{\pi}{2} \)

Radius Raggio 半径

Line segment \(r\) from a point on a circle to its centre

Diameter

Line segment \(d\) between two points on a circle that passes through the circle's centre

\(d=2r\)

Chord

Line segment between two points on a circle

\(c=2r \sin (\frac{\theta}{2})\)

\(c=2\sqrt{r^{2}-d^{2}}\) where \(d\) is the perpendicular distance from the centre to the chord

Proof

A chord subtending angle \(\theta\) at the centre forms an isosceles triangle with the circle's centre, with legs \(r\) and base \(c\). Splitting this into two symmetric right-angled triangles resolves the formula

Segment

Smaller area partitioned by a chord

\(A= \frac{r^2}{2} (\theta - \sin(\theta)) \) where \(\theta\) is the central angle in radians

Sector

Area enclosed by two radii of a circle and the arc between them

\(A = \frac{\theta}{2} r^2 \) where \(\theta\) is the central angle in radians

Arc

Length across the circumference between two points on a circle

\(\ell = r\theta \)

Straight line

Straight line without endpoints

Line segment

Straight line that has endpoints

Sine rule

\(\frac{\sin (A)}{a} = \frac{\sin (B)}{b} =\frac{\sin (C)}{c}\)

Cosine rule

\(a^2 = b^2 + c^2 -2bc \cos (A)\)

Tangent rule

\( \frac{a-b}{a+b} = \frac{\tan ( \frac{\alpha - \beta}{2} ) }{\tan ( \frac{\alpha + \beta}{2} ) }\)

\( \sin \theta = (1- \cos \theta ) \tan ( \frac{ \pi - \theta }{2} ) \)

2D Set

Set of ordered 2-tuples representing cartesian coordinates on a horizontal and vertical axis respectively.

\( (x,y)\in \mathbb{R}^2\)

3D Set

Set of ordered 3-tuples representing cartesian coordinates on a depth, horizontal and vertical axis respectively.

\( (x,y,z) \in \mathbb{R}^3\)

Auxiliary angle theorem

Any trigonometric function \( f(x)=a\sin(x)+b\cos(x)\) where \(a,b > 0\) can be written as a single trigonometric function

\( f(x)=R\sin (x+\alpha) \)

Proof

Let \(a\sin (x) +b \cos (x) = R\sin (x+\alpha)\); applying the additive angle formula gives \(a\sin (x) +b \cos (x) = R \cos (\alpha) \sin (x) + R\sin (\alpha) \cos (x)\)

By comparing both sides, it is seen that \(a=R \cos (\alpha),b= R \sin (\alpha)\), since \(\alpha\) is fixed.

\(R\) is found from \(\sqrt{a^2 + b^2} = \sqrt{R^2(\cos^2 (\alpha) +\sin^2 (\alpha))} = R\)

Rearranging for \(\alpha \) proves \(\alpha = \cos^{-1} (\frac{a}{R}) = \sin^{-1} (\frac{b}{R})\)
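The decomposition above is easy to check numerically; a minimal sketch in plain Python (the helper name `auxiliary_angle` is illustrative, not standard):

```python
import math

def auxiliary_angle(a, b):
    """Rewrite a*sin(x) + b*cos(x) as R*sin(x + alpha), assuming a, b > 0."""
    R = math.hypot(a, b)       # R = sqrt(a^2 + b^2)
    alpha = math.atan2(b, a)   # satisfies cos(alpha) = a/R, sin(alpha) = b/R
    return R, alpha

# Check: 3 sin x + 4 cos x == 5 sin(x + alpha) at an arbitrary x
R, alpha = auxiliary_angle(3, 4)
x = 0.7
lhs = 3 * math.sin(x) + 4 * math.cos(x)
rhs = R * math.sin(x + alpha)
```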

Trigonometric values

For \( \csc, \sec\), simply reciprocate. For \( \tan\), divide sine and cosine values. As for inverse functions, work backwards.

Trigonometric identities

Additive angles formulae

Double angle formulae

Homogeneous trigonometric equations

If the sum of the indices of every trigonometric term is the same, the equation is said to be homogeneous of the degree of that sum. To solve one, divide through by the appropriate power of \( \cos \) to convert every term into a tangent function

Radian definition

\(\pi \text{ rad} = 180^{\circ}\)

Scalars Scalari スカラー

Value that exclusively represents magnitude

Vector Vettore ベクター

Array of cartesian values that form a mathematical object with direction and magnitude

\( \textbf{v} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \)

Norm

Scalar representing the magnitude of a vector

\( \| \textbf{v} \| =\sqrt{\sum^{n}_{i=1} v_{i}^2}\)

Properties

\( \| \textbf{v} \| \geq 0\)

\( \| \textbf{v} \| = 0 \iff \textbf{v} =\textbf{0} \)

\( \| \textbf{v} + \textbf{u} \| \leq \| \textbf{v} \| + \| \textbf{u} \| \)

\( \| c\textbf{v} \| = |c| \| \textbf{v} \|\)

Unit vector

Vector with a norm of 1. The following formula normalises any vector into a unit vector:

\(\hat{a}=\frac{\textbf{a}}{ \|\textbf{a}\| }\)
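A small numeric sketch of the norm and normalisation formulas above (the helper names `norm` and `unit` are made up for illustration):

```python
import math

def norm(v):
    """Euclidean norm ||v|| = sqrt(sum of squared components)."""
    return math.sqrt(sum(c * c for c in v))

def unit(v):
    """Scale v by 1/||v|| so the result has norm 1."""
    n = norm(v)
    return [c / n for c in v]

a = [3.0, 4.0]
a_hat = unit(a)   # [0.6, 0.8]
```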

Elementary basis

A special set of vectors \(\mathcal{E} = \{\hat{i}, \hat{j}, \hat{k}\}\) such that

It is noteworthy that \(\hat{i}, \hat{j},\hat{k}\) each have a magnitude of 1 and are all orthogonal (perpendicular) to each other.

Position vector Vettore posizione ベクター位置

The elementary basis allows any vector \(\textbf{v}\) to be decomposed and rewritten in the following 'position vector' form

\( \textbf{v} = x\hat{i} + y\hat{j} + z\hat{k} \)

Dot product Prodotto scalare ドット積

Vector operation that takes two vectors and returns a scalar: the product of the two vectors' magnitudes scaled by the cosine of the angle between them, \(\textbf{u} \cdot \textbf{v} = \|\textbf{u}\| \|\textbf{v}\| \cos (\theta)\). Componentwise it is computed as:

\(\textbf{u} \cdot \textbf{v}= \sum^{n}_{i=1} u_i v_i\)

Properties

Vector projection Proiezione Vettore 射影ベクター

Vector operation \(\text{proj}_{\textbf{u}}(\textbf{v})\) that returns the component vector of \(\textbf{v}\) along the direction of vector \(\textbf{u}\)

Definitions

\( \text{proj}_{\textbf{u}}(\textbf{v}) = (\frac{\textbf{v} \cdot \textbf{u}}{\textbf{u} \cdot \textbf{u}})\textbf{u}\)

Proof of correctness

The component of \(\textbf{v}\) in the direction of \(\textbf{u}\) must have the same direction as \(\textbf{u}\), hence it is merely the unit vector \(\hat{u}\) dilated by some scalar \(k\). By geometric reasoning this scalar is \(k= \|\textbf{v}\| \cos (\theta)\), and rewriting it in terms of the dot product gives \(k= \|\textbf{v}\| \cos (\theta) = \frac{\|\textbf{u}\|\|\textbf{v}\|\cos (\theta)}{\|\textbf{u}\|} = \frac{\textbf{v} \cdot \textbf{u}}{\|\textbf{u}\|}\)
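The projection definition translates directly from the dot-product formula; a minimal sketch in plain Python (illustrative helper names):

```python
def dot(u, v):
    """Componentwise dot product."""
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(u, v):
    """Component of v along u: (v.u / u.u) * u."""
    k = dot(v, u) / dot(u, u)
    return [k * ui for ui in u]

# Projecting onto the x axis keeps only the x component
p = proj([1.0, 0.0, 0.0], [3.0, 4.0, 5.0])
```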

Vector addition

Adding two vectors is geometrically equivalent to placing the starting point of \(\textbf{b}\) at the end point of \(\textbf{a}\) (or vice versa, since addition is commutative)

\( ( \textbf{u} + \textbf{v} )_{i} = u_i + v_i \)

Vector subtraction

\( ( \textbf{u} - \textbf{v} )_{i} = u_i - v_i \)

Subtracting one vector from another gives the vector from the tip of the second to the tip of the first, i.e. the position of one point relative to another, which is useful when finding planes

Cross product Prodotto incrociato クロス積

Vector operation that takes two vectors and returns a vector perpendicular (a normal) to both inputs. Its magnitude is the product of the two vectors' magnitudes scaled by the sine of the angle between them:

\( \textbf{u} \times \textbf{v}=|\textbf{u}||\textbf{v}|\sin (\theta) \hat{n}\)

Properties

Right hand rule Regola del cacciavite 右手の法則

When taking the cross product \(\textbf{a} \times \textbf{b}\), the direction of the result vector is parallel to your right thumb as you curl your fingers from \(\textbf{a}\) to \(\textbf{b}\)
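A componentwise sketch of the cross product, checked against the right-hand-rule example \(\hat{i} \times \hat{j} = \hat{k}\) (plain Python, illustrative names):

```python
def cross(u, v):
    """3D cross product; the result is perpendicular to both inputs."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # i x j = k
```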

Straight line

A line can be interpreted as a direction vector scaled by a parameter (so that it reaches any point on the line) and translated away from the origin by a position vector, or as an equation:

Vector form

\(\textbf{r}(t)=\textbf{u}+t\textbf{v}\)

Without using vectors, a line is represented by the intersection of two planes (a pair of Cartesian equations)

Sphere Sfera 球

Standard form

\( (x - x_0)^{2}+ (y - y_0)^{2}+(z - z_0)^{2}=r^{2}\)

Furthermore, the sphere can be expressed as a vector equation:

Vector form

\( r = \| \textbf{x} - \textbf{u} \| \)

Plane

Vector form

\(\textbf{r}(s,t) = \textbf{u}+s\textbf{v}+t\textbf{w}\)

For any point \(\textbf{r}\) on the plane, \(\textbf{r}-\textbf{u}\) is parallel to the plane, hence \((\textbf{r} - \textbf{u}) \cdot \textbf{n} = 0\) for a normal vector \(\textbf{n}\)

Standard form

\(d=ax+by+cz\)

The difference between any two points on the same plane gives a vector parallel to the plane

Linear Algebra

Matrix Matrice 行列

Array containing \(m\) rows and \(n\) columns of numbers

\(\textbf{A}=\begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{bmatrix}\)

Dimension

A matrix has a dimension denoted as \( m \times n \)

Vector Vettore ベクトル

Previously, a geometry-based perception of vectors was defined. In an algebra-based sense, it is a matrix with one column \(n=1\)

\(\textbf{v}=\begin{bmatrix} v_{1} \\ v_{2} \end{bmatrix}\)

Matrix equality

Matrices are equal when:

Matrix addition Addizione di matrici 行列の総和

Matrices can be added together iff they have the same dimensions

\( (\textbf{A}+\textbf{B})_{ij} = \textbf{A}_{ij}+\textbf{B}_{ij}\)

Zero matrix

Matrix with all quantities as zero

\(\textbf{0} = \begin{bmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\)

\(\textbf{A}+\textbf{0}=\textbf{A}\)

\(\textbf{A}\textbf{0}=\textbf{0}\)

Scalar multiplication Moltiplicazione scalare

Matrices can be multiplied by a scalar by applying the scalar to every entry in the matrix

\(k\begin{bmatrix}2 & 4 \\ 6 & 8 \end{bmatrix} =\begin{bmatrix}2k & 4k \\ 6k & 8k \end{bmatrix} \)

Matrix multiplication Moltiplicazione di matrici

Matrices can only be multiplied together if the number of columns in the first matrix equals the number of rows in the second matrix

\((\textbf{A}\textbf{B})_{ij}=\sum_{k=1}^{n}\textbf{A}_{ik}\textbf{B}_{kj}\) where \(\textbf{A}\) is \(m \times n\)
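The summation definition translates directly into code; a minimal library-free sketch (the `matmul` name is illustrative):

```python
def matmul(A, B):
    """(AB)_ij = sum_k A_ik * B_kj; requires cols(A) == rows(B)."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # [[19, 22], [43, 50]]
```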

Identity matrix

A square matrix \(\textbf{I}\) such that:

See Linear Algebra for information on the Kronecker delta function

Example

\(\textbf{I}_{3}=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\)

Invertibility Invertibilità 反転行列

Square matrices \(\textbf{A} : n \times n\) may have an inverse matrix, which is a unique matrix such that:

\( \textbf{A}^{-1} :\textbf{A}^{-1}\textbf{A}=\textbf{A}\textbf{A}^{-1}=\textbf{I}\)

By definition only square matrices may have an inverse matrix.

Inversion formula Formula d'inversione 反転式

Formula to quickly invert \(2 \times 2\) matrices

\(\begin{bmatrix}a & b \\ c & d\end{bmatrix}^{-1}=\frac{1}{\Delta}\begin{bmatrix}d & -b \\ -c & a\end{bmatrix}\) where \(\Delta = ad - bc\)
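A sketch of the \(2 \times 2\) inversion formula as code (illustrative `inv2` helper; raises on a singular matrix):

```python
def inv2(M):
    """Invert a 2x2 matrix [[a, b], [c, d]] via the adjugate over the determinant."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

Minv = inv2([[4.0, 7.0], [2.0, 6.0]])   # determinant is 10
```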

Transposition

The function of swapping the row index and column index, denoted as \(\textbf{A}^{T}\).

\( \textbf{A}^{T} : a_{ij} \to a^{T}_{ji} \)

Symmetric matrix

\(\textbf{A} \text{ is symmetric} \iff \textbf{A} = \textbf{A}^{T}\)

Matrix properties

Cramer's rule Regola di Cramer クラメルの公式

For the system \(\textbf{A}\textbf{x}=\textbf{b}\) where \(\textbf{A}\) is an invertible square matrix, each unknown is given by \(x_j = \frac{\det(\textbf{A}_j(\textbf{b}))}{\det(\textbf{A})}\), where \(\textbf{A}_j(\textbf{b})\) is \(\textbf{A}\) with column \(j\) replaced by \(\textbf{b}\)

Proof

Let \(I_j(\textbf{x}) \) be the identity matrix with column \(j\) swapped for \(\textbf{x}\). Then by matrix multiplication \(A I_j (\textbf{x}) = A_j (\textbf{b}) \) (since \(A \textbf{x} = \textbf{b}\)). Determinants are multiplicative, so \(\text{det} (A) \text{det} (I_{j} (\textbf{x})) = \text{det} (A_j (\textbf{b})) \). Because \(\text{det}(A) \neq 0\) and \(\text{det} (I_j (\textbf{x})) = x_j\), Cramer's rule follows.
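A small numeric sketch of Cramer's rule for a \(2 \times 2\) system (illustrative helper names):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Solve A x = b for a 2x2 system: x_j = det(A_j(b)) / det(A)."""
    d = det2(A)
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # column 1 replaced by b
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # column 2 replaced by b
    return [det2(A1) / d, det2(A2) / d]

# 2x + y = 3, x + 3y = 5
x = cramer2([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```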

Elementary row operations

In augmented matrices, the following operations are legal:

These operations are essentially using information of simultaneous equations to reform the equations into a desired form

Gaussian elimination Eliminazione gaussiana ガウシアンの消去式

Through row operations, there is an \(O (n^3 ) \) algorithm to solve augmented matrices as such:

  1. Forward phase - echelon form; starting from the top row, for each row's pivot index, use row operations to make all entries directly below that index equal zero (runs in \(O (n^3)\))
  2. Backwards phase - reduced echelon form; starting from the bottom row, for each row's pivot index, use row operations to make all entries directly above that index equal zero (runs in \(O (n^2)\))
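The two phases above can be sketched as a single Gauss-Jordan routine (a minimal illustration with partial pivoting, not production code):

```python
def solve(A, b):
    """Gauss-Jordan elimination on the augmented matrix [A | b]."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # build augmented matrix
    for i in range(n):
        # partial pivoting: swap in the row with the largest pivot for stability
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        M[i] = [x / M[i][i] for x in M[i]]          # scale pivot row so pivot = 1
        for r in range(n):                          # clear the rest of the column
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [row[-1] for row in M]

# 2x + y = 3, x + 3y = 5
x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```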

Upper triangle matrix

Matrix in which every entry below the main diagonal is 0 (the lower left corner is filled with 0s)

Lower triangle matrix

Matrix in which every entry above the main diagonal is 0 (the upper right corner is filled with 0s)

Diagonal matrix

Matrix filled with 0s except for indexes where the column number and row number are the same

\( \textbf{M} \text{ is diagonal} \iff ( i \neq j \implies m_{ij} = 0) \)

\( \textbf{M} \text{ is diagonal} \implies \textbf{M}^k = \begin{bmatrix} m^{k}_{11} & 0 & \ldots & 0 \\ 0 & m^{k}_{22} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & m^{k}_{nn} \end{bmatrix} \)

Leading entry

A leading entry is the first non-zero value in a row or column; for instance, you can find the leading entry of the 5th column, the 2nd row, the 3rd column and so forth

Augmented matrix

Combining two matrices together side by side; the augmentation of A and B is represented as \(\textbf{A}|\textbf{B}\). Its primary purpose is to provide a neat layout when performing Gaussian elimination.

Echelon form

Rectangular matrix satisfying the following criteria:

\(\begin{bmatrix}a & b & c \\ 0 & d & e \\ 0 & 0 & f\end{bmatrix}\)

Reduced echelon form

Along with satisfying echelon form, a reduced echelon form also satisfies the following:

Pivot point

When converting from echelon form to reduced echelon form, the pivot points are the leading entries of each row, which will all be turned into number 1s

Determinant Determinante 行列式

Scalar function \(\text{det}(\textbf{M}_n)\) defined to characterise the existence of an inverse matrix for some square matrix \(\textbf{M}_n\), similar to how a polynomial's discriminant characterises the existence of real roots.

\(\text{det} (\textbf{M}_n) = 0 \iff \textbf{M}_{n}^{-1} = \text{Undefined}\)

Geometrically, the determinant \(\text{det}(M_n)\) also returns the signed area/volume of the image of the unit \(n\)-cube under multiplication by \(M_n\)

\(\text{det}(M_{2})=\begin{vmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{vmatrix}=m_{11}m_{22}-m_{12}m_{21}\)

\(\text{det}(M_{3})= m_{11}(m_{22}m_{33} - m_{23}m_{32}) -m_{12}(m_{21}m_{33} - m_{23}m_{31}) + m_{13}(m_{21}m_{32} - m_{22}m_{31}) \)

Cofactor

A scalar that is a part of an expansion to return a determinant

\(C_{ij}=(-1)^{i+j}\text{det} (\textbf{M}_{ij})\)

Where \(\textbf{M}_{ij}\) denotes the submatrix of \(\textbf{M}\) obtained by deleting row \(i\) and column \(j\)

Laplace expansion

\(\text{det} (\textbf{M}_{n}) = \sum_{j=1}^{n}m_{ij}C_{ij} : (i \in \mathbb{N}) \land (i \leq n)\)

\(\text{det} (\textbf{M}_{n}) = \sum_{i=1}^{n}m_{ij}C_{ij} : (j \in \mathbb{N}) \land (j \leq n)\)
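The Laplace expansion along the first row gives a direct, if slow, recursive determinant; a minimal sketch:

```python
def det(M):
    """Determinant by Laplace expansion along the first row (exponential time)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)   # cofactor sign (-1)^(0+j)
    return total

d = det([[1, 2, 3], [4, 5, 6], [7, 8, 10]])   # -> -3
```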

Determinant properties

Trace

Sum of all diagonal terms of a square matrix

\( \text{tr}(\textbf{M}) = \sum_{j=1}^{n}m_{jj} \)

Homogeneous systems

\(\textbf{A}\textbf{x}=\textbf{0}\)

Inhomogeneous systems

\(\textbf{A}\textbf{x}=\textbf{b} : \textbf{b} \neq \textbf{0}\)

\(\textbf{A}\textbf{x}=\textbf{0} \land A\textbf{y}=\textbf{b} \implies A ( \textbf{x} + \textbf{y} ) = \textbf{b}\)

Functions

Basics of functions

See discrete math

Relation

Mathematical condition that may hold between different elements of a set, examples include inequalities, equations, functions, diophantine equations etc.

Function Funzione 関数

A mathematical object \(f : X \to Y\) that takes an object in set \(X\) (domain) as an input and produces an output as some object in set \(Y\) (range)

Injectivity Iniettività 単射性

\(f \text{ is injective } \iff (f(x)=f(y) \implies x=y) \)

Inverse functions Funzioni inverse 反転機能

For some injective function \(f : X \to Y\), there is an inverse function \( f^{-1} : Y \to X\). When a function is not injective, one can restrict the domain to ensure an injection and then invert

\(f( f^{-1}(x) ) = x\)

Factorial Fattoriale 階乗

Function giving the product of all natural numbers up to a specific integer, with \(0!=1\)

Recursive definition

\(n!=(n-1)!n\)

Series definition

\(n!=\prod_{k=1}^{n}k\)

Exponential function

\(\text{exp}(z)=e^{z}\)

\( \text{exp} : \mathbb{C} \to \mathbb{C}\)

Properties

Natural logarithm

Inverse function of the exponential function for real numbers

\( \ln : (0,\infty) \to \mathbb{R}\)

Properties

Trigonometric functions

Geometrically, the trigonometric functions are interpreted as the \(2\pi\)-periodic coordinate functions of points on the unit circle \( x^2 + y^2 =1 \)

Analytically, they are defined as the solutions to the following IVP

  • \( y''=-y \)
  • Analytical properties

    \( \sin z = \sum^{\infty}_{n = 0} \frac{(-1)^n z^{2n+1}}{(2n+1)!} \)

    \( \sin z = \frac{e^{iz} - e^{-iz}}{2i} \)

    \( \sin (x+iy) = \sin x \cosh y + i \cos x \sinh y \)

    \( \cos z = \sum^{\infty}_{n = 0} \frac{(-1)^n z^{2n}}{(2n)!} \)

    \( \cos z = \frac{e^{iz} + e^{-iz}}{2} \)

    \( \cos (x+iy) = \cos x \cosh y - i \sin x \sinh y \)

    Hyperbolic function

    Complex trigonometric function


    Inverse exponentials

    Inverse trigonometry

    Inverse identities

    Limit Limite 極限

    The value that \(f(x)\) approaches as \(x\) approaches \(a\), shown with \(\lim_{x \to a} f(x)=L\). More specifically, you can have:

    \(L=\lim_{x \to a} f(x) \iff \lim_{x \to a^{-}} f(x)=\lim_{x \to a^{+}} f(x)\)

    See Real Analysis for a more mathematically correct definition of a limit.

    Euler's number (\( e \))

    Irrational constant that is the result of an 'infinite exponent' to a base which is 'infinitely close' to 1

    \( e = \lim_{n \to \infty} (1 + \frac{1}{n})^{n} = \sum_{n=0}^{\infty} \frac{1}{n!}\)

    Function transforms

    \(bf[a(x-h)]+v\)

    Differentiation

    Differentiation

    \( f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\)

    Product rule

    \( (fg)'(x) = f(x)g'(x)+ f'(x)g(x) \)

    Quotient rule

    \( (\frac{f}{g})'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)} \)

    Chain rule Regola di catena 連鎖律

    \( (f \circ g)'(x) = f'(g(x))g'(x)\)

    \(\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx}\)

    Inverse function rule

    \( (f^{-1})' \circ f = \frac{1}{f'} \)

    Constant term rule

    \(\frac{d}{dx}(c) = 0\)

    Power rule

    \(\frac{d}{dx}(x^{n}) = nx^{n-1}\)

    Exponential derivative

    \(\frac{d}{dx}(a^{f(x)}) = f'(x) \ln (a) a^{f(x)}\)

    Taylor series Serie di Taylor テイラー展開

    Series expansion of a function about a point that approximates it, and for analytic functions converges to the function

    \( f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)(x-a)^n}{n!}\)

    See Real Analysis for more information on Taylor series.

    Trigonometric derivatives

    Logarithmic derivatives

    Newton's method

    Recursive formula for computing the roots of a function. It is based on the idea that the gradient of the line from a point on the function to the nearby root should equal the derivative at that point (since the derivative gives the function's gradient). This means:

    \(f'(x_{n})=\frac{f(x_{n}) - 0}{x_n - x_{n+1}}\)

    \( \implies x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_n)}\)
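The recurrence above can be sketched in a few lines (illustrative `newton` helper; assumes \(f'(x_n) \neq 0\) along the way):

```python
def newton(f, fprime, x0, steps=20):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) from the initial guess x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Root of f(x) = x^2 - 2 starting near 1: converges to sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```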

    Differential

    Infinitesimal relating to the difference of some variable. This concept is not rigorous, however produces accurate results for engineering, physics etc.

    \( \Delta x = x_1 - x_0 \)

    \( \Delta y = f(x_1) - f(x_0) \)

    \( dy = \frac{dy}{dx} dx\)

    \(\Delta y \approx \frac{dy}{dx} \Delta x\)

    Implicit derivatives

    For implicit equations, differentiate the \(x\) terms directly and apply the chain rule when differentiating \(y\)

    Tangent Tangente 接線

    A linear approximation of a function at a specific point. For instance at point \(x_{0}\):

    \(f(x)-f(x_{0})=f'(x_{0})(x-x_{0})\)

    Integration

    Antiderivative

    Considering the function \(f\), there may exist a function \(F\) such that when differentiated returns \(f\)

    \(F \text{ is an antiderivative of } f \iff F' = f\)

    Indefinite integral

    The antiderivative together with an arbitrary constant of integration

    \(\displaystyle \int f(x)dx = F(x) + C\)

    Definite integral

    The area bound between function \(f\) and the baseline \(y=0\) on some interval \( (a,b) \)

    \(I = \int_{a}^{b} f(x)dx\)

    Riemann sum

    Numerical method of approximating integrals without employing the FTC. It is a consequence of Riemann's definition of the integral; see Real Analysis

    \(\int^{b}_{a} f(x) dx = \lim_{n \to \infty} \sum_{i=0}^{n} f(x_{i}) \Delta x\)

    Trapezoidal rule

    Improved variant of the Riemann sum

    \(\int^{b}_{a} f(x) dx = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{f(x_{i-1})+f(x_{i})}{2} \Delta x\)
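A minimal numeric sketch of the trapezoidal rule, using a finite \(n\) rather than the limit (illustrative helper name):

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f on [a, b] with n trapezoids."""
    dx = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints are counted with weight 1/2
    for i in range(1, n):
        total += f(a + i * dx)
    return total * dx

# integral of x^2 on [0, 1] is 1/3
approx = trapezoid(lambda x: x * x, 0.0, 1.0)
```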

    Fundamental theorem of calculus (FTC)

    Theorem asserting that the indefinite integral of a function is its anti-derivative. See Real Analysis

    \( \frac{d}{dx}\int_{a}^{x} f(t) dt = f(x) \)

    \(F' =f \implies \int_{a}^{b} f(x) dx = F(b)-F(a) \)

    Integral properties

    Even-odd integrals

    \( f(-x)=f(x) \implies \int_{-k}^{k}f(x)dx= 2 \int_{0}^{k}f(x)dx\)

    \( f(-x)=-f(x) \implies \int_{-k}^{k}f(x)dx= 0\)

    Integration by substitution

    Encapsulating a subfunction as a variable \(u\) and using the chain rule in reverse provides the following

    \( \int (f \circ u)(x) u'(x) dx = \int f(u) du \)

    Integration by parts

    Using the product rule in reverse provides the following

    \(\int u(x)v'(x) dx = u(x)v(x)-\int v(x)u'(x) dx\)

    One generally gets better results by choosing \(u\) to be the factor that is more 'reducible', i.e. that simplifies more dramatically when differentiated. The LIATE acronym gives a good idea of the order of preference:

    Logarithmic
    Inverse trigonometric
    Algebraic
    Trigonometric
    Exponential

    Partial fraction decomposition

    For rational functions of polynomials, algebraically decomposing the fraction is optimal for integration

    Let \(p(x),q(x)\) be polynomials such that \( \deg (p) \lt \deg (q) \)

    Distinct factors

    \(q(x) = \prod^{\deg (q)}_{n=1} q^{d}_{n}(x) : \deg (q^{d}_{n}) = d\)

    \(\frac{p(x)}{q(x)} = \sum_{n=1}^{\deg (q)} \frac{k_n(x)}{q^{d}_{n}(x)} : \deg (k_n) = \deg (q^{d}_n) -1\)

    To evaluate the coefficients of each \(k_n\), solve the equality at each \(x : q^{d}_{n}(x) = 0 \)

    Reduction formulae

    Recurrence relation relating an integral to other integrals that may be simpler to calculate.

    Wallis' integrals

    \( \displaystyle W_n = \int^{\frac{\pi}{2}}_{0} \sin^n (x) dx\)

    \( W_n = \frac{n-1}{n} W_{n-2}\)

    Numerator rationalisation

    Powers of Cosine and Sine

    \( \int \sin^m (x) \cos^n (x) dx\)

    Powers of Secant and Tangent

    \( \int \sec^m (x) \tan^n (x) dx\)

    Weierstrass substitution

    For trigonometric functions, the substitution \(t=\tan (\frac{x}{2})\) can be made with the following formulae being applied:

    Exponential integration

    For linear \(f\) (i.e. constant \(f'\)), the following hold:

    \(\int e^{f(x)} dx = \frac{e^{f(x)}}{f'(x)} +C\)

    \(\int k^{f(x)} dx = \frac{k^{f(x)}}{f'(x) \ln (k)} +C\)

    Trigonometric integration

    Trigonometric substitution

    The following functions have the algebraic semblance of some trigonometric ratios (pythagorean theorem), and hence substitution of a trigonometric variable can be performed to prove the following.

    Volumes of rotation

    To find the volume of a function rotated along the x axis, you can use the following formula to find the volume

    \( \int^{b}_{a} \pi f(x)^2 dx\)

    This is because when rotating around the x axis, \( f(x) \) acts as a radius; the circle it traces out has area \(\pi r^2\), and integrating all those areas gives the volume.

    Complex numbers

    Imaginary unit

    A number satisfying \(i^2 = -1\)

    Complex number

    A number representing the sum of a real number and imaginary number. The set of these numbers is denoted as \(\mathbb{C}\). Technically, this is field extension of the real numbers by the imaginary unit \(\mathbb{R}(i)\)

    \(\mathbb{C} = \{ z : z = x + iy, x,y \in \mathbb{R} \} \)

    \(z = x+yi\)

    Conjugate

    Unary complex operator that inverts the sign of the imaginary part

    \(z = x+yi \implies \bar{z},z^{*} = x-yi\)

    \(z = x+yi \implies \bar{z},z^{*} = \Re(z)-i\Im(z)\)

    Modulus

    Real function describing the distance of a complex number from 0. It is also called the complex norm

    \(|x+iy|=\sqrt{x^{2}+y^{2}}\)

    \(|z|=\sqrt{ \Re(z)^2+\Im(z)^{2}}\)

    Complex algebra

    Complex algebra works the same way as real algebra: \(i\) can be manipulated like an algebraic variable (subject to its unique rule \(i^2 = -1\)). Two complex numbers are equal iff their real and imaginary parts are both equal

    Argument

    \(\arg (z) = \theta \in (-\pi,\pi] : z = |z| e^{i \theta}\)

    Polar form

    To represent a complex number in polar coordinates rather than cartesian, you can use the polar form:

    \(z = r(\cos (\theta)+i\sin (\theta)) = r \text{cis} (\theta) \)

    Euler's formula

    \(\forall z \in \mathbb{C} [ e^{i z} = \cos ( z ) + i \sin ( z ) ] \)

    Exponential form

    \(z = re^{i\theta}\)

    Basic Complex identities

    You should be able to prove all identities in this entire document

    Conjugate Complex identities

    Polar and Exponential Form Complex identities

    Complex triangle inequality

    \(||z| - |w|| \leq |z+w| \leq |z| + |w| \)

    Complex zeroes

    \(z^n = re^{i \theta}\)

    \(z = r^{\frac{1}{n}}e^{\frac{i (\theta + 2\pi k)}{n}}, k \in \mathbb{N}\cap[0,n)\)
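The root formula can be checked numerically with the standard-library `cmath` module (an illustrative sketch):

```python
import cmath
import math

def nth_roots(w, n):
    """All n solutions of z^n = w: z_k = r^(1/n) * e^{i(theta + 2 pi k)/n}."""
    r, theta = cmath.polar(w)   # w = r e^{i theta}
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-1, 2)   # square roots of -1: i and -i
```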

    Quadratic method

    Method to find complex zeroes of \(f(z)=az^2 + bz + c\)

    \(z^2+\lambda^2=(z+i\lambda)(z-i\lambda)=0 \implies z=\pm i\lambda\)

    Complex square roots

    Simultaneous equations that the real and imaginary parts of a complex number satisfy in order to be a square root; however, it is usually easier to use Euler's formula.

    \(z = x+iy \land w=a+ib \land z^2 = w \implies \)

    Argand geometry

    De Moivre's Theorem

    \( z^n = r^{n} e^{ni\theta}\)

    \( z^n = r^{n} (\cos (n\theta) + i\sin (n\theta) ) \)

    \( z^{\frac{1}{n}} = r^{\frac{1}{n}} e^{\frac{i(\theta+2\pi k)}{n}}\)

    \( k \in \mathbb{N} \cap [0,n)\)

    Differential equations

    Order Ordine 順序

    The highest order derivative in a differential equation

    Degree Grado 程度

    The highest power applied to a derivative in a differential equation

    Differential equations

    Equations relating a function and its differential functions, for instance, \(y'=y\) is a differential equation representing functions where the first derivative of a function equals the function itself. The solution to this equation is \(y=ke^x\)

    There are multiple types of differential equations such as:

    Integrating factor

    Specially devised function multiplied to an equation to facilitate integration. It is used in the solution for first order linear equations

    First order separable equation

    \(y' =f(y)g(x)\)

    First order linear equation

    \(y' + q_1 (x) y = q_2 (x)\)

    Second order linear, homogeneous, constant coefficient equation

    \( y'' + m y' + n y = 0 \)

    Proof

    Initial condition Condizione iniziale 初期条件

    Additional condition that a DE's solution must satisfy (commonly the image at a certain domain element of the function or its derivatives)

    \(y(x_0) = y_0\)

    Boundary condition

    \(\begin{cases} y(x_0) = y_0 \\ y(x_1) = y_1 \end{cases} \)

    Initial Value Problem (IVP)

    Differential equation provided with an initial condition

    Boundary Value Problem (BVP)

    Differential equation provided with boundary condition

    Second order linear, constant coefficient equations

    \( y'' + m y' + n y = q (x) \)

    Sequences and series

    See Real Analysis

    Polynomials

    Basic factorising

    \( \sum ka_{n} = k \sum a_{n}\)

    Quadratic factorising

    \( f(x)=ax^2 + bx + c \land (\exists \alpha,\beta \in \mathbb{R}| \alpha\beta=ac \land \alpha+\beta=b) \implies f(x)=ax^2+\alpha x +\beta x + c=\frac{(ax+\alpha)(ax+\beta)}{a} \)

    Quadratic formula

    \( \deg(P)=2 \land P(x)=0 \implies x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)

    Proof

    The proof hinges on the fact that, in the quadratic case, the square can be completed by adding the appropriate term: adding \( \frac{b^2}{4a^2} \) to obtain \(x^2 + 2(\frac{b}{2a})x+ \frac{b^2}{4a^2} = (x+\frac{b}{2a})^2 \) gives the ideal form without introducing any new terms that include \(x\). A formal proof is offered below.

    \(ax^{2}+bx+c=0\)

    \(x^{2}+\frac{b}{a}x+\frac{b^2}{4a^2}+\frac{c}{a}=\frac{b^2}{4a^2}\)

    \( (x+\frac{b}{2a})^{2}+\frac{c}{a}=\frac{b^2}{4a^2}\)

    \( (x+\frac{b}{2a})^{2}=\frac{b^2 -4ac}{4a^2}\)

    \( x+\frac{b}{2a}= \pm \sqrt{\frac{b^2 -4ac}{4a^2}}\)

    \( x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)
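The derived formula as a short sketch; using `cmath.sqrt` keeps the complex roots when \(\Delta < 0\) (the helper name is illustrative):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of ax^2 + bx + c = 0; cmath.sqrt handles a negative discriminant."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)   # roots of x^2 - 3x + 2 are 2 and 1
```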

    Discriminant

    A value that characterises the existence of real roots for a polynomial

    Quadratic discriminant

    \(\Delta = b^2 - 4ac\)

    \(\Delta = 0 \iff r_{0} = r_{1} \in \mathbb{R}\)

    \(\Delta > 0 \iff r_{0},r_{1} \in \mathbb{R} \)

    \(\Delta < 0 \iff r_{0},r_{1} \notin \mathbb{R} \)

    Completing the square

    Quadratic case of binomial theorem (see Discrete Mathematics)

    \( x^2 +2kx + k^2 = (x+k)^2 \)

    \( x^2 -2kx + k^2 = (x-k)^2 \)

    Difference of two powers

    \( x^n - y^n = (x-y)(\sum_{k=0}^{n-1} x^{n-1-k} y^{k} ) \)

    Difference of two squares

    \( x^2 - y^2 = (x+y)(x-y) \)

    Polynomial generalization

    \(P(x)=\sum_{n=0}^{\deg (P)} a_{n} x^{n}\)

    \(P(x)=a\prod_{n=1}^{\deg (P)} (x - c_{n})\)

    Polynomial degree

    Polynomial degree refers to the highest power in a polynomial

    Zero

    Domain element of a function that evaluates to zero, also called a root

    \(c \text{ is a zero of }f \iff f(c)=0\)

    Multiplicity

    Property of a zero regarding how many times a polynomial has that same root

    For instance, the polynomial \(P(x)=(x-6)^2 (x-2)\) has the root 6 with a multiplicity of 2

    Fundamental theorem of algebra

    A polynomial of degree \(n\) has exactly \(n\) complex roots, counted with multiplicity (not all of which may be real)

    \( \deg (p(x)) = n \iff p(x)= a\prod_{k=1}^{n} (x - z_k) : z_k \in \mathbb{C} \)

    Rational root theorem

    \(\exists \frac{p}{q} \in \mathbb{Q} : \gcd(p,q) = 1 \land P(\frac{p}{q}) = 0 \implies p|a_0 \land q|a_{\deg(P)}\)

    Proof

    \(P(\frac{p}{q})=0 \implies \)

    \(\sum_{n=0}^{\deg (P)} a_{n} (\frac{p}{q})^{n} = 0\)

    \(\sum_{n=0}^{\deg (P)} a_{n} p^{n} q^{\deg(P) -n} = 0\)

    \(p(\sum_{n=1}^{\deg (P)} a_{n} p^{n-1} q^{\deg(P) -n}) = -a_{0} q^{\deg (P)}\)

    By Euclid's lemma (see Discrete Mathematics), \(p | a_{0}\) since \(p\) cannot divide \(q^{\deg (P)}\) due to the assumption that \(\gcd(p,q)=1 \)

    \(q(\sum_{n=0}^{\deg (P) -1} a_{n} p^{n} q^{\deg(P) -n-1}) = -a_{\deg (P)} p^{\deg (P)}\)

    By similar reasoning, \(q | a_{\deg(P)}\)

    Polynomial division

    To divide polynomials into the form \(\frac{P(x)}{D(x)} = Q(x) + \frac{R(x)}{D(x)} \):

    Remainder theorem

    \( P(x) = Q(x)(x-a) + P(a) \)

    The remainder of dividing \(P(x)\) by \(x-a\) is \(P(a)\). This is because fundamentally \(P(x) = Q(x)(x-a) + r\) (\(r\) is a constant, since the degree of the remainder is 0 according to the polynomial division theorem), and when \(x = a\), \(P(a) = Q(a)(0) + r\), so the term multiplying the quotient and divisor zeroes out

    Factor theorem

    As a natural extension of the remainder theorem, \((x-a)\) is a factor of \(P(x)\) if and only if \(P(a)=0\)

    \( (x-a) | P(x) \iff P(a)=0\)

    Polynomial conditions

    All zeroes theorem

    If a polynomial has the same number of real zeroes as its degree, then it has the form \(P(x) = a \prod^{\deg (P)}_{k=1} (x-a_k)\), where \(a_{k}\) represents each zero. This is because if even one factor in the product zeroes out (due to \(x\) being equal to one of the \(a_{k}\)), the whole polynomial zeroes out, since the rest of the product is multiplied by zero

    Vieta's formulae

    For \(ax^2 + bx + c\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta\):

    For \(ax^3 + bx^2 + cx + d\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta,\gamma\):

    For \(ax^4 + bx^3 + cx^2 + dx + e\), it can be shown with a bit of manipulation that with zeroes \(\alpha,\beta,\gamma,\delta\):

    Let \(d= \deg(P(x))\), \(r_{ij}\) is the sequence of the combinations of \(i\) zeroes in multiplication, \(a_n\) be a sequence of polynomial coefficients and \(k \in \mathbb{N} \cap [1,\deg(P(x))]\)

    \( \sum_{i=1}^{\binom{d}{k}} (\prod^{k}_{j=1} r_{ij}) = (-1)^{k} \frac{a_{d-k}}{a_{d}} \)

    Multiplicity of a zero theorem

    If a zero has a multiplicity higher than one, it is a stationary point

    A zero has a multiplicity of \(m\) if \(P(x) = (x-a)^{m} Q(x)\) with \(Q(a) \neq 0\). It can be shown with calculus that zeroes with multiplicity above 1 are stationary points: differentiating gives \( P'(x) = (x-a)^{m-1}R(x) \) where \(R(x) = mQ(x)+(x-a)Q'(x)\), and since \(Q(a) \neq 0\), substituting shows \(R(a) \neq 0\). Hence the factor \((x-a)^{m-1}\) is what zeroes out \(P'(a)\), so these roots are also stationary points. Intuitively, if the multiplicity is even the stationary point is also a turning point; if it is odd (and above 1), it is a horizontal point of inflection

    Polynomials as linear and irreducible quadratic factors

    All polynomials with real coefficients can be written as a product of linear factors and irreducible quadratics (irreducible covering the case where a quadratic has no real zeroes). A basic form of this, as well as a proof, is below:

    When all zeroes are real, \( P_{n} (x) = a_{n} \prod_{k=1}^{n} (x-\alpha_{k}) \)

    When all zeroes are complex, \( P_{n} (x) = a_{n} \prod_{k=1}^{\frac{n}{2}} (x-\alpha_{k})(x-\overline{\alpha_{k}}) = a_{n} \prod_{k=1}^{\frac{n}{2}} (x^2-2 \Re (\alpha_{k})x + |\alpha_{k}|^2) \)

    With real and complex zeroes, \( P_{n} (x) = a_{n} (\prod_{k=1}^{j} (x-\alpha_{k})(x-\overline{\alpha_{k}})) (\prod_{l=2j+1}^{n} (x-\alpha_{l})) \) (the first product deals with all the complex roots, pairing each with its conjugate in a single iteration, which is why the real roots start at \(2j+1\); from the all-complex case we see this also reduces to linear factors and irreducible quadratics)

    Chebyshev polynomials

    First kind

    Sequence of polynomials \(T_n\) relating to cosine relations of an angle with factor \(n\)

    \(T_n : T_n ( \cos(\theta) ) = \cos (n \theta) \)

    Second kind

    Sequence of polynomials \(U_n\) relating to sine relations of an angle with factor \(n\)

    \(U_n : U_{n-1} ( \cos(\theta) ) \sin (\theta) = \sin (n \theta) \)

    Legendre polynomials

    Sequence of polynomials \(P_n\) forming an orthogonal basis for \(L^2 [-1,1]\) with weight \(1\)