\(\forall a,b \in \mathbb{R}, |a+b| \leq |a|+|b| \)
\( |a+b|^2 = (a+b)^2 = a^2 +2ab +b^2\)
\( (|a|+|b|)^2 = a^2 +2|a||b| + b^2\)
\( ab \leq |a||b| \implies |a+b|^2 \leq (|a|+|b|)^2 \implies |a+b| \leq |a|+|b| \)
\( ( \sum^{k}_{n=1} a_n b_n )^{2} \leq ( \sum^{k}_{n=1} a_n^2 )( \sum^{k}_{n=1} b_n^2 ) \)
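A minimal numerical sketch (Python; the sample vectors are arbitrary choices of mine) checking the Cauchy–Schwarz inequality for sums:

```python
# Numerically check (sum a_n b_n)^2 <= (sum a_n^2)(sum b_n^2) on sample data
a = [1.0, -2.5, 3.0, 0.5]
b = [2.0, 1.0, -1.5, 4.0]

lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)

print(lhs, rhs, lhs <= rhs)  # the inequality should always hold
```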
The Cartesian product creates a new set from two sets by pairing each element of \(A\) with each element of \(B\) as an ordered pair.
\( A \times B = \{ (a,b) : a \in A , b \in B \} \)
\( |A \times B| = |A||B| \)
\( S \text{ is bounded } \iff \exists M \gt 0 (\forall s \in S ( |s| \leq M))\)
\( M \text{ is an upper bound of } S \iff \forall s \in S( M \geq s) \)
\( M \text{ is a lower bound of } S \iff \forall s \in S( M \leq s) \)
\( \max (U) \) represents the largest element in \(U\)
\( \min (U) \) represents the smallest element in \(U\)
The smallest upper bound of a set in a partially ordered space
\( \sup (U) = \min \{ M : \forall u \in U, M \geq u \} \)
The largest lower bound of a set in a partially ordered space
\( \inf (U) = \max \{ m : \forall u \in U, m \leq u \} \)
Property of a space such that there does not exist a value that is infinitely large or small (there is always a number less than and greater than each number). This is an axiom for the real numbers.
\( \forall \varepsilon \gt 0, \forall M \in \mathbb{R} ,\exists N \in \mathbb{N} : M \lt N\varepsilon \)
Property of a space such that all sets formed from that space bounded above have a supremum. This is an axiom for the real numbers.
\(\forall S \subseteq \mathbb{R} : S \neq \emptyset \land S \text{ is bounded above}, \exists r \in \mathbb{R} : r = \sup (S)\)
Ordered, enumerated, countable collection of objects that allows for repetition
Formally it is defined as a function \( (x_n) : I \to X\)
Brackets enclose either the terms of the sequence or a symbol with a subscript for indexing.
\( (1,1,2,3,5,8,13) \)
\( (F_n)^{6}_{n=0} \)
\( (F_n)^{\infty}_{n=0} \)
\( (F_n)_{n \in \mathbb{N}} \)
\( (1,1,2,3,5,8,13,\dots) \)
Or for short hand, \( (F_n) \)
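A small sketch (Python; the helper name fib is my own) treating the Fibonacci sequence \((F_n)\) as a function from indices to values, matching the notation above:

```python
# A sequence is a function from indices to values: here F : {0, 1, 2, ...} -> N
def fib(n: int) -> int:
    """Return the n-th Fibonacci term F_n with F_0 = F_1 = 1 (as in the notes)."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The finite sequence (F_n)_{n=0}^{6} from the notes: (1, 1, 2, 3, 5, 8, 13)
print(tuple(fib(n) for n in range(7)))
```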
\( (x_n) \text{ is monotone increasing } \iff \forall n( x_{n+1} \geq x_{n}) \)
\( (x_n) \text{ is monotone decreasing } \iff \forall n( x_{n+1} \leq x_{n}) \)
Sequence formed from a countably infinite subset of terms from another sequence. Notationally, this is represented as the original sequence with the subscript being some 'monotone increasing indexing sequence' that jumps only to the terms that this subsequence includes.
Sequence that has some of its indexes skipped.
\( (x_{n_{k}}) \)
\(\varepsilon\) is a conventional symbol used to represent a variable with the following conditions:
Essentially, this is a variable that can be made arbitrarily close to 0, emulating infinitesimals. This will be necessary for defining limits
\( x_n \text{ is convergent } \iff \exists L : \lim_{n \to \infty} x_{n} = L \)
Sequence property such that a sequence term can be chosen such that all subsequent terms are arbitrarily close to \(L\) (the sequence gets closer and closer to \(L\))
\(\lim_{n \to \infty} x_n = L \iff (\forall \varepsilon \gt 0 ( \exists N : n\geq N \implies |x_n - L| \lt \varepsilon))\)
\(\lim_{n \to \infty} x_n = L \iff (\forall U \text{ open} : L \in U \subset \mathbb{R}, \exists N : n\geq N \implies x_{n} \in U)\)
Converges iff for any neighborhood of \(L\) there is a sequence term such that all subsequent terms are in that neighborhood
\(\lim_{n \to \infty} x_n = x \land \lim_{n \to \infty} y_n = y \implies\)
\( x_n \text{ is convergent } \iff \exists! L : \lim_{n \to \infty} x_{n} = L \)
\(a_n \text{ is convergent } \implies a_{n} \text{ is bounded}\)
By convergence, there is some \(N\) such that \( n \geq N \implies | x_n - L| \lt 1\)
One can construct \( M = \max \{|x_1| , |x_2|, ..., |x_{N}|, |L|+1\} \implies |x_n| \leq M \implies x_n \text{ is bounded }\)
Theorem asserting that boundedness of a sequence ensures the existence of a convergent subsequence
\( x_{n} \text{ is bounded } \implies \exists x_{n_{k}} : x_{n_{k}} \text{ is convergent }\)
\(a_{n} \text{ is bounded } \implies \exists a_{n_{k}} : a_{n_{k}} \text{ is monotone } \)
\( a_{n_{k}} \text{ is monotone and bounded } \implies a_{n_{k}} \text{ is convergent}\)
Property of a space such that all sequences have convergent subsequences in the space. In the case of subsets of Euclidean space \(\mathbb{R}^n\), this is characterized by BWT, however in the set of continuous functions \(C\), this is characterized by AAT (see Lebesgue Integration and Fourier Analysis)
\(S \text{ is compact } \iff \forall (x_n) \subseteq S [ \exists (x_{n_{k}}) ( \lim_{k \to \infty} x_{n_{k}} \in S )]\)
\( (x_{n}) \text{ is Cauchy} \iff (\forall \varepsilon \gt 0( \exists N : n,m\geq N \implies |x_n - x_m| \lt \varepsilon))\)
Cauchy sequences are an alternative definition for convergent sequences that allow proof of convergence without knowing the actual limit
\(x_{n} \text{ is Cauchy} \iff \exists L : \lim_{n \to \infty} x_n = L\) (in \(\mathbb{R}\), by completeness)
Property of a space such that all Cauchy sequences have a limit within the space, that is, the space includes all limit points
\(S \text{ is complete} \iff \forall (x_n) \subseteq S : (x_n) \text{ is Cauchy }, \lim_{n \to \infty} x_{n} \in S\)
\( \lim_{n \to \infty} a_{n} = L \land \lim_{n \to \infty} c_{n} = L \land b_{n} \in [a_{n},c_{n}] \implies \lim_{n \to \infty} b_{n} = L\)
\( \lfloor x \rfloor = \sup \{ m \in \mathbb{Z} : m \leq x \} \)
\( \lceil x \rceil = \inf \{ m \in \mathbb{Z} : m \geq x \} \)
Way of defining any real number by partitioning \(\mathbb{Q}\) by some inequality; for instance \(\sqrt{2}\) can be represented by the Dedekind cut \((A,B) : A=\{a \in \mathbb{Q} : a^2 \lt 2 \lor a \lt 0\},B=\{ b \in \mathbb{Q} : b^2 \geq 2 \land b \geq 0 \} \)
Sequences that build by summation on the previous terms.
\( S_n \text{ is a series } \iff S_n = \sum^{n}_{k=1} a_k\)
A series \(\sum_{n=1}^{\infty} a_{n}\) is absolutely convergent if the series of absolute values converges, i.e. \(\sum_{n=1}^{\infty} |a_{n}| = S \lt \infty\)
\(\sum_{k=1}^{n} a_{k} \text{ is absolutely convergent } \iff \sum_{k=1}^{\infty} |a_{k}| \lt \infty \)
\(S_n \text{ is conditionally convergent } \iff S_n \text{ is convergent} \land \neg( S_n \text{ is absolutely convergent})\)
\( H_{n} = \sum_{k=1}^{n} \frac{1}{k}\)
\( \sum^{n}_{k=1} H_{k} = (n+1)H_n -n\)
\( S_{n} = \sum_{k=1}^{n} ar^{k-1} \)
\( S_{n} = \frac{a(r^{n}-1)}{r-1} \)
\( |r| \lt 1 \implies \lim_{n \to \infty }S_{n} = \frac{a}{1-r}\)
\(\forall r \in \mathbb{C} [ |r| \lt 1 \implies \lim_{n \to \infty }S_{n} = \frac{a}{1-r} ] \)
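A quick numerical sketch (Python; \(a\) and \(r\) are arbitrary choices) comparing geometric partial sums against the closed form \(\frac{a}{1-r}\) for \(|r| \lt 1\):

```python
# Partial sums of a geometric series approach a / (1 - r) when |r| < 1
a, r = 3.0, 0.5          # arbitrary example values with |r| < 1
limit = a / (1 - r)

s = 0.0
for k in range(1, 51):   # S_50 = sum_{k=1}^{50} a r^(k-1)
    s += a * r ** (k - 1)

print(s, limit, abs(s - limit) < 1e-9)
```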
\( \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} \)
Constant representing the converging difference between the harmonic series and natural logarithm
\( \gamma = \lim_{n \to \infty} H_{n} -\ln (n) \)
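A sketch (Python; the cutoff \(n=10^6\) is my own choice) estimating \(\gamma\) by the finite difference \(H_n - \ln(n)\); the convergence is slow:

```python
import math

# gamma = lim_{n -> inf} (H_n - ln n); approximate with a large finite n
n = 10**6
H_n = sum(1.0 / k for k in range(1, n + 1))
approx = H_n - math.log(n)

print(approx)                            # ~0.57721..., the Euler-Mascheroni constant
print(abs(approx - 0.5772156649015329))  # small residual error remains for finite n
```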
Given a series \(S = \sum_{k=1}^{\infty} a_{k}\), if the sequence of partial sums \(x_{n} = \sum_{k=1}^{n} a_{k} \) converges, then the series converges (to the same limit)
If \(\sum_{n=1}^{\infty} a_{n}\) is convergent and there exists some \(N\) such that \(n \geq N\) implies \(0 \leq b_n \leq a_n\), then \( \sum_{n=1}^{\infty} b_{n}\) is convergent too
\( \sum^{\infty}_{k=1} a_{k} \lt \infty \land \exists N : \forall n \geq N, 0 \leq b_n \leq a_n \implies \sum^{\infty}_{k=1} b_{k} \lt \infty\)
If the ratio of the terms of two series with strictly positive terms converges to some non-zero constant, then either both series converge or both diverge
\( \lim_{n \to \infty} \frac{a_{n}}{b_{n}} = L \land L \neq 0 \implies \text{both convergent} \lor \text{both divergent}\)
Test for convergence of a series by showing that the absolute ratio of each term to the previous term tends to a limit \(L \lt 1\), as follows:
\(S = \sum_{n=0}^{\infty} a_n \text{ is convergent if } L \lt 1 \text{ (and divergent if } L \gt 1 \text{), where}\)
\(\lim_{n\to \infty} |\frac{a_{n+1}}{a_{n}}| = L\)
To prove for \(L \lt 1\), comparison to the geometric series will be made. \(L \lt 1 \implies \exists r: r \in (L,1) \). Then \(r - L\) can be used as an infinitesimal such that \(\exists N : \forall n \geq N, |\frac{a_{n+1}}{a_{n}} - L| \lt r-L\). Now by making double bounds to remove the absolute value, \( L-r \lt \frac{a_{n+1}}{a_{n}} - L \lt r-L \implies a_{n+1} \lt ra_{n}\). One can easily show inductively that \(a_{n} \lt r^{k}a_{n-k} \). Therefore, keeping this \(N\) fixed, \(\forall n \geq N, a_{n} \leq \frac{a_{N}}{r^N} r^n \). This implies that each \(a_{n}\) is bounded by a term of a geometric series with \(r \lt 1\), so by the comparison test the series converges.
For the case \(L \gt 1\), consider instead \(L \gt 1 \implies \exists r: r \in (1,L) \); then similarly \( r-L \lt \frac{a_{n+1}}{a_{n}} - L \lt L-r \implies ra_{n} \lt a_{n+1} \) and generally \(a_{n} \gt r^{k}a_{n-k} \). The proof proceeds similarly, however note that now \(r \gt 1\), so the comparison is with a divergent geometric series: the terms \(a_n\) do not tend to \(0\), and therefore the series is divergent.
In essence, introduce a geometric term \(r \lt 1\), use this to form an infinitesimal and then make comparison to the geometric series
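A minimal sketch (Python; the series \(\sum \frac{1}{n!}\) and the cutoffs are my own choices) estimating the ratio-test limit \(L\) numerically:

```python
import math

# Estimate L = lim |a_{n+1} / a_n| for a_n = 1/n!; L < 1 predicts convergence
def a(n: int) -> float:
    return 1.0 / math.factorial(n)

N = 20
ratios = [abs(a(n + 1) / a(n)) for n in range(1, N)]
print(ratios[-1])   # the ratio is 1/(n+1) -> 0, so L = 0 < 1

# Partial sums approach e, consistent with convergence
print(sum(a(n) for n in range(0, 25)), math.e)
```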
\(S = \sum_{n=0}^{\infty} (-1)^{n}a_{n}\) converges when \(a_n\) is monotone decreasing and \(\lim_{n \to \infty} a_n = 0\)
The intuition behind this is that the partial sums oscillate (due to the alternating signs), and since the terms are monotone decreasing towards zero, the oscillation shrinks, squeezing the partial sums towards a single point.
Where the supremum denotes the smallest upper bound of a set, the limit supremum denotes the largest subsequential limit of a sequence (the largest value a subsequence of the given sequence can converge to), denoted \(\limsup(x_n)\) or \(\lim_{n \to \infty}\sup \{x_k : k \geq n\}\)
\( \limsup \{x_n\} = \liminf\{x_n\} \implies \text{convergence}\)
\(S = \sum_{n=0}^{\infty} a_n\) converges when:
\( \limsup ( |a_{n}|^{\frac{1}{n}})=L \lt 1\)
The proof for this relies on the comparison test with a convergent geometric series for \(L \lt 1\): a convergent geometric series may have terms of the form \(r^{n}\), and we can declare some \(r : L \lt r \lt 1\), which means that eventually \(|a_n|^{\frac{1}{n}} \lt r \implies |a_n| \lt r^n\)
\(S = \sum_{n=1}^{\infty} \frac{1}{n^{p}}\) converges for \(p \gt 1\)
If a sequence is positive and non-increasing, then:
\( \sum_{n=1}^{\infty}a_n \text{ converges} \iff \sum_{n=0}^{\infty}2^{n}a_{2^{n}} \text{ converges} \)
The idea is that, because the terms are non-increasing, they can be grouped into blocks bounded by powers of 2: \(a_2 + a_3 \leq 2a_2\), \(a_4 + \dots + a_7 \leq 4a_4\), and so on, giving \(\sum_{n=1}^{\infty} a_n \leq a_1 + 2a_2 + 4a_4 + \dots = \sum_{n=0}^{\infty}2^{n}a_{2^{n}}\). If the series with the powers of 2 is bounded, then our series of interest, being no larger, converges too; grouping the blocks from below gives the other half of the inequality
\( \sum_{n=1}^{\infty}x_n \leq \sum_{n=0}^{\infty}2^{n}x_{2^{n}} \lt 2\sum_{n=1}^{\infty}x_n \)
When \(f\) is positive, decreasing and continuous...
\(\sum_{n=1}^{\infty} f(n) \lt \infty \iff \int_{1}^{\infty} f(x)dx \lt \infty\)
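A sketch (Python; \(f(x)=\frac{1}{x^2}\) is my chosen example) comparing partial sums of \(\sum \frac{1}{n^2}\) with the corresponding improper integral, in the spirit of the integral test:

```python
# f(x) = 1/x^2 is positive, decreasing, continuous; both sum and integral are finite
N = 100000
partial_sum = sum(1.0 / n**2 for n in range(1, N + 1))

# integral from 1 to N of 1/x^2 dx = 1 - 1/N  (antiderivative is -1/x)
integral = 1.0 - 1.0 / N

print(partial_sum)   # approaches pi^2/6 ~ 1.6449
print(integral)      # approaches 1; finiteness of one implies finiteness of the other
```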
Concept that if \(x\) tends to \(x_0\), \(f\) converges to some value \(L\)
\( \lim_{x \to x_0}f(x) = L \iff \forall \varepsilon \gt 0, \exists \delta \gt 0 : 0 \lt |x - x_{0}| \lt \delta \implies |f(x)-L| \lt \varepsilon \)
\( \lim_{x \to a^{+}} f(x) =L \iff a \lt x \lt a+\delta \implies |f(x) - L| \lt \varepsilon \)
\( \lim_{x \to a^{-}} f(x) =L \iff a-\delta \lt x \lt a \implies |f(x) - L| \lt \varepsilon \)
\( \lim_{x \to a} f(x) =L \iff \lim_{x \to a^{+}} f(x) = \lim_{x \to a^{-}} f(x) = L\)
\( \lim_{(x,y) \to (x_0,y_0)}f(x,y) = L \iff \sqrt{(x - x_{0})^2 + (y-y_0)^2} \lt \delta \implies |f(x,y)-L| \lt \varepsilon \)
\( \lim_{(x,y) \to (0,0)}f(x,y) = \lim_{r \to 0^{+}} f(r\cos(\theta), r \sin (\theta)) \), provided the limit is independent of \(\theta\)
Passing the output of one function to the input of another; nesting one function inside another
\( (f \circ g)(x) = f(g(x)) \)
Function with the property that domain elements that are arbitrarily close map to image elements that are arbitrarily close
\( f \text{ is continuous on }U \iff \forall u \in U (f \text{ is continuous at }u) \)
\(U\) is a set
\( f \text{ is continuous at }x \iff [ \lim_{t \to x} f(t) = f(x)] \)
\( f \text{ is continuous at }x \iff [ \lim_{n \to \infty} x_{n} = x \implies \lim_{n \to \infty} f(x_n) = f(x)] \)
\( f \text{ is continuous at }x_0 \iff \forall \varepsilon \exists \delta_{x} \gt 0 ( |x_0 - x| \lt \delta_{x} \implies |f(x_0)-f(x)| \lt \varepsilon ) \)
Note that \(\delta_{x}\) is dependent on each domain value \(x\)
\( f \text{ is continuous at }x_0 \iff \forall N_1 (f(x_0)) [ \exists N_2(x_0) [ x \in N_2 (x_0) \implies f(x) \in N_1 (f(x_0)) ] ]\)
\( f \text{ is continuous on }\text{dom}(f) \iff \forall U \subset \text{codom}(f) ( U \text{ is open } \implies f^{-1}(U) \text{ is open }) \)
\( f \text{ is continuous on } U \iff \forall V \subset f(U) ( V \text{ is open } \implies f^{-1}(V) \text{ is open }) \)
Assume the epsilon-delta definition holds at \(x\) and let \(y_n\) be a sequence approaching \(x\). Then for any \(\varepsilon\) a \(\delta_{x}\) can be chosen, and \(\exists N : n \geq N \implies |x-y_{n}| \lt \delta_{x} \implies |f(x) -f(y_n)| \lt \varepsilon\), which is exactly the statement \(\lim_{n \to \infty} f(y_n) = f(x)\). Hence \(\text{Epsilon-delta definition} \implies \text{Limit definition}\)
Assume the epsilon-delta definition fails at \(x\). Then there is an \(\varepsilon\) such that for every \(\delta = \frac{1}{n}\) some \(y_n\) satisfies \(|x - y_n| \lt \frac{1}{n}\) yet \(|f(x) -f(y_n)| \geq \varepsilon\); this produces a sequence \(y_n \to x\) with \(f(y_n)\) not approaching \(f(x)\), hence \(\neg \text{Epsilon-delta definition} \implies \neg\text{Limit definition}\) and therefore \(\text{Epsilon-delta definition} \iff \text{Limit definition}\)
\(f,g \text{ are continuous } \implies (f \circ g) \text{ is continuous}\)
\(f\text{ is monotone on } (a,b) \implies f \text{ is continuous on } (a,b) \text{ except at countably many points}\)
Notion of continuity on an open set \(U\) such that for any epsilon there is a single delta that proves continuity for all elements of \(U\) simultaneously
\( f \text{ is uniformly continuous on }U \iff \forall \varepsilon \exists \delta [ \forall u,v \in U ( |u - v| \lt \delta \implies |f(u)-f(v)| \lt \varepsilon )] \)
Note that \(\delta\) is independent of domain values
\(f \text{ is uniformly continuous } \implies f \text{ is continuous}\)
\(f \text{ is continuous on } [a,b] \iff f \text{ is uniformly continuous on } [a,b]\)
\(f\) is Lipschitz continuous if there exists some \(M\) satisfying the inequality below. \(M\) can be intuitively thought of as a 'maximum gradient'
\(f \text{ is Lipschitz continuous } \iff \exists M \gt 0 : |f(x) - f(y)| \leq M|x-y|\)
All continuous functions on a closed interval attain a maximum and a minimum
\( f \text{ is continuous on } [a,b] \implies \exists c,d : f(c) \leq f(x) \leq f(d), \forall x \in [a,b]\)
Theorem asserting that any function continuous on an interval, taking both a positive value and a negative value, has some point \(c\) mapped to \(0\) (note that \(f(a)f(b)\) is negative iff \(f(a)\) and \(f(b)\) have different signs)
\(f \text{ is continuous on } [a,b] \land f(a)f(b) \lt 0 \implies \exists c \in [a,b] : f(c)=0\)
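A minimal sketch (Python; the name bisect, the example function, and the tolerance are my own choices) of the bisection method, which is justified by the IVT: a sign change is repeatedly halved until a root is bracketed:

```python
# Bisection: halve an interval [a, b] with f(a) f(b) < 0 until a root is bracketed tightly
def bisect(f, a: float, b: float, tol: float = 1e-12) -> float:
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "IVT hypothesis: f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:      # the sign change lies in [a, m]
            b, fb = m, fm
        else:                 # the sign change lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: root of x^2 - 2 on [1, 2] is sqrt(2)
print(bisect(lambda x: x * x - 2, 1.0, 2.0))
```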
\( f \text{ is differentiable on } U \iff \forall x \in U [ f \text{ is differentiable at }x ] \)
\( f \text{ is differentiable at }x_0 \iff \exists f'(x_0) [ f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x-x_0} ]\)
Scalar quantity evaluating the gradient of a differentiable function at a point.
represented by the Newtonian quotient; the ratio of the increase in range for an increase in domain for infinitesimally small values
\( f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{t \to x} \frac{f(t) - f(x)}{t-x} \)
\( f \text{ is differentiable on } X \implies f \text{ is continuous on } X\)
\( (fg)'(x) = f(x)g'(x)+ f'(x)g(x) \)
\( (fg)'(x_0) = \lim_{x \to x_0} \frac{f(x)g(x) - f(x_0)g(x_0)}{x - x_0} \)
\( (fg)'(x_0) = \lim_{x \to x_0} \frac{f(x)g(x) - f(x)g(x_0) + f(x)g(x_0)- f(x_0)g(x_0)}{x - x_0} \) (add and subtract)
\( (fg)'(x_0) = \lim_{x \to x_0} f(x)\frac{g(x) - g(x_0)}{x-x_0} + g(x_0)\frac{f(x)- f(x_0)}{x - x_0} \) (factorize)
\( (fg)'(x_0) = f(x_0)g'(x_0)+ f'(x_0)g(x_0) \) (solve limit)
\( (f \circ g)'(x) = f'(g(x))g'(x)\)
\( (f \circ g)'(x) = \lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h}\)
\( (f \circ g)'(x) = \lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} \frac{g(x+h) - g(x)}{g(x+h) - g(x)}\) (multiply and divide by \(g(x+h) - g(x)\))
\( (f \circ g)'(x) = \lim_{h \to 0} \frac{f(g(x) + k) - f(g(x))}{k} \frac{g(x+h) - g(x)}{h}\) (regroup, with \(k = g(x+h) - g(x) \to 0\))
\( (f \circ g)'(x) = f'(g(x))g'(x)\)
\( (\frac{f}{g})'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)} \)
\( (\frac{f}{g})'(x) = f(x)(\frac{1}{g(x)})' + \frac{f'(x)}{g(x)} \) (product rule)
\( (\frac{f}{g})'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)} \) (chain rule on \((\frac{1}{g(x)})' = -\frac{g'(x)}{g^{2}(x)}\))
In multivariable calculus, with a function \(f : \mathbb{R}^2 \to \mathbb{R}\)
\( \frac{\partial f}{\partial x}(x,y) = \lim_{h \to 0} \frac{f(x+h,y) - f(x,y)}{h}\)
\( \frac{\partial f}{\partial y}(x,y) = \lim_{k \to 0} \frac{f(x,y+k) - f(x,y)}{k}\)
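A sketch (Python; the step sizes and the example \(f(x,y)=x^2y+\sin(x)\) are my own) approximating the partial derivatives by their defining difference quotients with a small \(h\):

```python
import math

# Approximate partial derivatives of f(x, y) = x^2 y + sin(x) by difference quotients
def f(x: float, y: float) -> float:
    return x * x * y + math.sin(x)

def df_dx(x, y, h=1e-6):
    return (f(x + h, y) - f(x, y)) / h     # the limit h -> 0 gives 2xy + cos(x)

def df_dy(x, y, k=1e-6):
    return (f(x, y + k) - f(x, y)) / k     # the limit k -> 0 gives x^2

x, y = 1.5, -2.0
print(df_dx(x, y), 2 * x * y + math.cos(x))
print(df_dy(x, y), x * x)
```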
A local maximum or minimum of a differentiable function can only occur at a point where \(f'(c) = 0\)
On a differentiable interval \( I = (a,b) \) the following holds:
\( f(a)=f(b) \implies \exists c \in (a,b) : f'(c)=0 \)
Between two equal points of a function, there is at least one point where the derivative is zero
For any function \(f\) differentiable on an interval \(I=(a,b)\), there exists some element \(c\) such that \(f'(c)\) equals the slope of the secant line from \((a,f(a))\) to \((b,f(b))\), i.e. the 'mean derivative'
\( I = (a,b), f \in C^{1}(I) \implies \exists c \in I : f'(c)=\frac{f(b)-f(a)}{b-a} \)
\( f,g \in C^{1}([a,b]) \land g'(x) \neq 0 \text{ on } (a,b) \implies \exists c \in (a,b) : \frac{f'(c)}{g'(c)}=\frac{f(b)-f(a)}{g(b)-g(a)} \)
L'Hôpital's rule asserts that when \( \lim_{x \to a^{+}} f(x) = \lim_{x \to a^{+}} g(x) = 0, \pm \infty\), the ratio of these two functions tends to the same value as the ratio of their derivative functions
\(\lim_{x \to a}f(x) = \lim_{x \to a} g(x) = 0, \pm \infty \land f,g \in C^{1} \implies \lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)} \)
\(f \in C^{1}(I) \text{ is injective } \land f'(a) \neq 0, a \in I \implies (f^{-1})'(f(a)) = \frac{1}{f'(a)}\)
Injectivity and continuity of \(f\) on the interval imply existence of a continuous \(f^{-1}\), hence
\( (f^{-1})'(f(a)) = \lim_{x \to a} \frac{f^{-1}(f(x)) - f^{-1}(f(a))}{f(x)-f(a)} \)
\( (f^{-1})'(f(a)) = \lim_{x \to a} \frac{x - a}{f(x)-f(a)} \)
\( (f^{-1})'(f(a)) = \frac{1}{f'(a)}\)
Class of functions such that, between any two points of the graph, the graph lies on or below the chord (line segment) connecting them
\(f \text{ is convex } \iff ( a,b \geq 0 \land a+b=1 \implies f(ax+by) \leq af(x)+bf(y) )\)
\(f \text{ is convex on open interval } \implies \exists f'(x^{+}), f'(x^{-}) : f'(x^{+}) = \lim_{h \to 0^{+}} \frac{f(x+h)-f(x)}{h}, f'(x^{-}) = \lim_{h \to 0^{-}} \frac{f(x+h)-f(x)}{h}\)
\(f \text{ is convex on open interval } I \implies f \text{ is Lipschitz continuous on } I\)
\(f \text{ is convex on an open interval } I \implies f \text{ is differentiable on } I \text{ except at countably many points}\)
\(f \in C^{1}(I) \implies (f \text{ is convex on } I \iff f' \text{ is monotone increasing on } I) \)
\(f \text{ is concave } \iff ( a,b \geq 0 \land a+b=1 \implies f(ax+by) \geq af(x)+bf(y) )\)
Infinite series of the following form
\( \sum_{n=0}^{\infty} a_n (x-c)^{n} \)
When a power series converges iff \(|x - c| \lt R\), \(R\) is said to be the radius of convergence. Series of this form can always have the ratio test applied, hence (when the limit below exists):
\(R = ( \lim_{n \to \infty } |\frac{a_{n+1}}{a_{n}}| )^{-1}\)
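A sketch (Python; the coefficients \(a_n = 2^n\) are my chosen example) estimating \(R\) from the ratio of consecutive coefficients and sanity-checking convergence on either side of it:

```python
# For the power series sum a_n (x - c)^n with a_n = 2^n,
# |a_{n+1} / a_n| = 2, so R = 1/2: the series converges iff |x - c| < 1/2
def a(n: int) -> float:
    return 2.0 ** n

n = 50
ratio = abs(a(n + 1) / a(n))
R = 1.0 / ratio
print(R)   # 0.5

# Sanity check: partial sums at |x - c| = 0.4 stay bounded, at 0.6 they blow up
for x in (0.4, 0.6):
    print(x, sum(a(k) * x**k for k in range(200)))
```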
\( ( f(x) = \sum_{n=0}^{\infty} a_n (x-c)^{n} \lt \infty \iff |x-c| \lt R) \implies (f'(x) = \sum_{n=1}^{\infty} n a_n (x-c)^{n-1} \lt \infty \iff |x-c| \lt R) \)
Power series relating to a function
analytic functions equal their own Taylor series
\( f \sim \sum_{n=0}^{\infty} \frac{f^{(n)}(c)(x-c)^{n}}{n!}\)
Its construction can be thought of as a polynomial such that, at the expansion point, all orders of derivatives equal those of the function it models
Theorem stating a property that the Taylor series error (denoted \(F(x_0)\)) has. When this property is established, it can be noted that \(\lim_{n \to \infty} F(x_0) = 0\) (when approaching infinite terms, the Taylor series equals the function)
\( P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)(x-x_0)^{k}}{k!} \)
\( f \in C^{n+1}(I) \implies \exists \xi \in [x,x_0] : f(x) = P_n(x)+ \frac{f^{(n+1)}(\xi)(x-x_0)^{n+1}}{(n+1)!}\)
Consider the error of the degree-\(n\) Taylor approximation of an \(f \in C^{n+1}(I)\) when using \(t\) as the point of expansion, \( F(t) = f(x) - P_{n}(t) \). Note the following:
One desires some constructed function \(G\) in terms of the error function such that \(G(x)=G(x_0)=0\) (in order to apply Rolle's theorem to bound the error)
One such construction is \(G(t) = F(t) - F(x_0)(\frac{x-t}{x-x_0})^{n+1}\); then one collates some properties to verify that Rolle's theorem applies to \(G\), and finds \(G'\) to employ in Rolle's theorem
Rolle's theorem says \(\xi \in [x,x_0] : -\frac{f^{(n+1)}(\xi)(x-\xi)^{n}}{n!} + F(x_0)\frac{(n+1)(x-\xi)^{n}}{(x-x_0)^{n+1}} = 0\)
Rearranging proves that \(F(x_0) = \frac{f^{(n+1)}(\xi)(x-x_0)^{n+1}}{(n+1)!}\)
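A sketch (Python; expanding \(e^x\) about \(x_0=0\) is my own example) comparing the actual Taylor error with the Lagrange remainder bound \(\frac{f^{(n+1)}(\xi)(x-x_0)^{n+1}}{(n+1)!}\):

```python
import math

# Taylor polynomial of e^x about 0: P_n(x) = sum_{k=0}^{n} x^k / k!
def P(n: int, x: float) -> float:
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.5, 6
actual_error = abs(math.exp(x) - P(n, x))

# Lagrange form: error = e^xi * x^(n+1) / (n+1)! for some xi in [0, x],
# so it is bounded by e^x * x^(n+1) / (n+1)!
bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)

print(actual_error, bound, actual_error <= bound)
```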
\( \sin (z) = z \prod_{n=1}^{\infty} (1-(\frac{z}{n\pi})^2)\)
Function that equals its Taylor series on \(U\)
\(f \text{ is analytic on }U \iff \forall u \in U [f(u) = \sum_{n=0}^{\infty} \frac{f^{(n)}(c)(u-c)^{n}}{n!} ]\)
Property of functions measured by differentiability classes; sets of functions that share the same type of differentiability
\(f \text{ is smooth on }U \iff f \in C^{\infty}(U)\)
\(f \text{ is continuously differentiable on }U \iff f \in C^{1}(U)\)
Strictly increasing finite sequence starting at the initial point of the interval and arriving at the final point of the interval
\( (x_i)^{n}_{i=0} \text{ is a partition of }[a,b] \iff x_0 = a \land x_n = b \land \forall i, x_{i-1} \lt x_{i} \)
A partition may conveniently also refer to the set of the terms formed by this sequence
Partition where each term is equally spaced from one another
\(x_{i} = a + \frac{(b-a)i}{n} \text{ is a uniform partition of }[a,b]\)
Largest distance between terms in a partition
\( \| \mathcal{P} \| = \max_{i \in \mathbb{N} \cap [1,|\mathcal{P}|]} ( x_i -x_{i-1} ) \)
Function that on some interval of its image can have its integral evaluated by partitioning the interval to arbitrary precision.
\( f \text{ is Riemann integrable on } [a,b] \iff \overline{\int_{a}^{b}} f = \underline{\int_{a}^{b}} f\)
Alternative definition of a Riemann integrable function
\( f \text{ is Riemann integrable on } [a,b] \iff \forall \varepsilon \exists \mathcal{P} \subset [a,b] ( |U(f,\mathcal{P}) - L(f, \mathcal{P})| \lt \varepsilon )\)
Scalar quantity relating to area bound by a Riemann integrable function on an interval.
\( \int^{b}_{a} f(x)dx = \overline{\int_{a}^{b}} f = \underline{\int_{a}^{b}} f \)
\( \int^{b}_{a} f(x)dx = \lim_{\| \mathcal{P} \| \to 0} \sum_{i=1}^{n} f(x_i^{*}) \Delta x_i , \; x_i^{*} \in [x_{i-1},x_i]\)
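A minimal sketch (Python; midpoint sampling and \(f(x)=x^2\) on \([0,1]\) are my own choices) approximating the definite integral by a Riemann sum over a uniform partition:

```python
# Riemann sum over a uniform partition of [a, b] with midpoint sample points
def riemann_sum(f, a: float, b: float, n: int) -> float:
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

# Example: the integral of x^2 on [0, 1] is 1/3
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 10000)
print(approx, abs(approx - 1.0 / 3.0))
```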
\( \int_{a}^{b} cdx = c(b-a)\)
\( |\int_{a}^{b} f(x)dx| \leq \int_{a}^{b} |f(x)| dx\)
\(f \text{ is continuous on } [a,b] \implies f \text{ is Riemann integrable on } [a,b]\)
\( \int_{a}^{b} (f(x)+g(x))dx = \int_{a}^{b} f(x)dx + \int_{a}^{b} g(x)dx\)
\(f \text{ is continuous on } [a,b] \implies \frac{d}{dx} \int_{a}^{x} f(t)dt = f(x)\)
\(f \text{ is Riemann integrable on } [a,b] \implies \int_{a}^{b} f(x)dx = F(b)-F(a) : F'=f\)
Consider \(F(x) = \int_{a}^{x} f(t)dt\) and note the following:
\(|F(x) -F(y)| \leq \int_{y}^{x} |f(t)|dt \leq \int_{y}^{x} Mdt = M|x-y| \implies F \text{ is Lipschitz continuous on } [a,b] \implies F \text{ is continuous on } [a,b]\)
Due to continuity of \(f\) and \(F\), one can choose \((\varepsilon , \delta_{x})\) to show that the \(F'=f\) as such, letting \(|x-y| \lt \delta_{x}\)
\( |\frac{F(x)-F(y)}{x-y} - f(y)| \leq \frac{1}{|x-y|} \int^{x}_{y} |f(t) - f(y)|dt \leq \frac{1}{|x-y|} \int^{x}_{y} \varepsilon dt = \varepsilon \implies F'=f\)
Consider \(G(x) = \int^{x}_{a} f(t)dt\) and note the following:
\(G'=F'=f \implies (G-F)'=0 \implies G-F \text{ is constant} \implies G(b) -F(b) = G(a) - F(a) \implies G(b) -G(a) = F(b) -F(a) \implies \int_{a}^{b} f(x)dx = F(b)-F(a)\)
\( (fg)'(x) = f(x)g'(x)+ f'(x)g(x) \implies \)
\( \int f(x)g'(x) dx = (fg)(x) - \int f'(x)g(x) dx \)
\( \int (f \circ u)(x) u'(x) dx = \int f(u) du \)
For a continuous function \(f\) and a non-negative Riemann integrable \(g\) on an interval \(I\), there exists some element \(c\) such that the integral of \(f(x)g(x)\) equals the integral of \(f(c)g(x)\), i.e. the 'mean value' of \(f\) weighted by \(g\)
\( I = (a,b), f \text{ is continuous on } [a,b] \land g \text{ is Riemann integrable on } [a,b] \land \forall x, g(x) \geq 0 \implies \exists c \in I : \int^{b}_{a} f(x)g(x)dx=f(c)\int^{b}_{a} g(x)dx \)
If a function is discontinuous at \(a\), then it is an improper Riemann integral and providing that there is an existing limit, it can be found as
\( \int_{a}^{b} f(x)dx = \lim_{X \to a^{+}} \int_{X}^{b} f(x)dx \)
When an improper Riemann integral fails on the interval \((-\infty,\infty)\), a Cauchy principal value may return a valid answer
\( \text{pv}\int_{-\infty}^{\infty} f(x)dx = \lim_{X \to \infty} \int_{-X}^{X} f(x)dx \)
Again, providing this limit exists
\(\int_{a}^{b} (\int_{c}^{d} f(x,y)dy) dx\)
\(\int_{a}^{b} (\int_{c}^{d} f(x,y)dy) dx =\int_{c}^{d} (\int_{a}^{b} f(x,y)dx) dy\)
\( ( \int_{a}^{b} f(x)g(x)dx )^{2} \leq ( \int_{a}^{b} f^2(x)dx )( \int_{a}^{b} g^2(x)dx ) \)
To find the length of an arc of a function, we can derive it from the principle of the distance between two infinitesimally separated points.
\( L = \int_{a}^{b} \sqrt{1 + f'(x)^2} dx\)
Partition the interval \([a,b]\) as \(\mathcal{P} = \{ x_0,x_1,...,x_n \}\)
Denote \( \Delta x_i = x_{i} - x_{i-1} , \Delta y_i = f(x_{i}) - f(x_{i-1})\)
\( L = \lim_{n \to \infty} \sum^{n}_{i=1} \sqrt{\Delta x_i^2 + \Delta y_i^2}\)
By the mean value theorem, \( \exists \xi_i \in [ x_{i-1},x_i ] : f'(\xi_i)=\frac{\Delta y_i}{\Delta x_i} \)
\( L = \lim_{n \to \infty} \sum^{n}_{i=1} \sqrt{\Delta x_i^2 + (f'(\xi_i)\Delta x_i)^2}\)
\( L = \lim_{n \to \infty} \sum^{n}_{i=1} \sqrt{1 + f'(\xi_i)^2} \Delta x_i\)
\( L = \int_{a}^{b} \sqrt{1 + f'(x)^2} dx\)
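A sketch (Python; \(f(x)=x^2\) on \([0,1]\) is my own example) comparing the polygonal sum \(\sum\sqrt{\Delta x_i^2+\Delta y_i^2}\) with a Riemann sum for \(\int_a^b \sqrt{1+f'(x)^2}dx\):

```python
import math

f = lambda x: x * x            # example curve y = x^2 on [0, 1]
df = lambda x: 2 * x           # its derivative
a, b, n = 0.0, 1.0, 100000
dx = (b - a) / n
xs = [a + i * dx for i in range(n + 1)]

# Polygonal approximation: sum of segment lengths sqrt(dx^2 + dy^2)
polygonal = sum(math.sqrt(dx**2 + (f(xs[i]) - f(xs[i - 1]))**2) for i in range(1, n + 1))

# Integral form: midpoint Riemann sum of sqrt(1 + f'(x)^2)
integral = sum(math.sqrt(1 + df(a + (i + 0.5) * dx) ** 2) * dx for i in range(n))

print(polygonal, integral)     # both approach the same arc length (~1.4789)
```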
A sequence \((f_n)\) whose terms are real functions \(f_n : X \to Y\); equivalently a map \(\mathbb{N} \times X \to Y\)
Convergence such that, for each fixed value of the variable \(x\), a term in the sequence of functions can be chosen whose value is arbitrarily close to \(f(x)\).
\( \lim_{n \to \infty} f_n = f \text{ pointwise } \iff \forall x (\forall \varepsilon (\exists N \in \mathbb{N} : n\geq N \implies |f_n(x) - f(x)| \lt \varepsilon ))\)
Convergence such that a term in the sequence of functions can be chosen to be arbitrarily close to \(f\) simultaneously for all values of the variable (the index \(N\) does not depend on \(x\))
\( \lim_{n \to \infty} f_n = f \text{ uniformly } \iff \forall \varepsilon (\exists N \in \mathbb{N} : n\geq N \implies \forall x, |f_n(x) - f(x)| \lt \varepsilon) \)
\( \lim_{n \to \infty} f_n = f \text{ uniformly } \iff \forall \varepsilon (\exists N \in \mathbb{N} : n\geq N \implies \sup_{x \in I} |f_n(x) - f(x)| \lt \varepsilon )\)
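A sketch (Python; \(f_n(x)=x^n\) on \([0,1)\) is a standard example added here) contrasting pointwise and uniform convergence: the values at any fixed \(x\) shrink to \(0\), but the supremum over \([0,1)\) is \(1\) for every \(n\):

```python
# f_n(x) = x^n on [0, 1): the pointwise limit is 0 at every fixed x,
# but sup_{x in [0,1)} |f_n(x) - 0| = 1 for every n, so convergence is not uniform
grid = [i / 1000 for i in range(1000)]        # sample points in [0, 1)

for n in (5, 50, 500):
    value_at_fixed_x = 0.9 ** n               # shrinks to 0 as n grows
    sup_on_grid = max(x ** n for x in grid)   # underestimates the true sup, which is 1
    print(n, value_at_fixed_x, sup_on_grid)
```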
Due to the assumed conditions, sufficient \(k,\delta\) can be chosen such that the following hold:
\( |x-y| \lt \delta \implies |f(x) - f(y)|\)
\(=|f(x) -f_{k}(x) -f_{k}(y) + f_{k}(x) + f_{k}(y) -f(y)| \) (term injection)
\( \lt |f(x) - f_{k}(x)| + |f_{k}(x) - f_{k}(y)| + |f_{k}(y) - f(y)|\) (triangle inequality)
\( \lt 3\frac{\varepsilon}{3} = \varepsilon \implies f \text{ is continuous} \)
Uniform convergence test that permits comparison with a convergent series of constants \(M_n\) (which do not depend on \(x\)) to prove uniform convergence of a series of functions
\( \forall x \in X, |f_n(x)| \leq M_n \land \sum^{\infty}_{n=1} M_n \lt \infty \implies \sum_{n=1}^{\infty} f_n(x) \text{ is uniformly convergent on } X\)
Consider \(S_n(x) = \sum_{k=1}^{n} f_k(x) \) and note the following:
\(| S_n(x) - S_m(x) | \leq \sum_{k=m+1}^{n} M_k \lt \varepsilon \implies S_n(x) \text{ is uniformly Cauchy} \implies S_n(x) \text{ is uniformly convergent}\)
Note that the property of uniformly Cauchy was implied due to the absence of reliance on the parameter \(x\)
\( f_n \text{ is uniformly Cauchy } \iff \forall \varepsilon ( \exists N \in \mathbb{N} : n,m\geq N \implies \forall x, |f_n(x) - f_m(x)| \lt \varepsilon )\)
\( f_n \text{ is uniformly Cauchy } \iff \lim_{n \to \infty} f_n = f \text{ uniformly }\)
If a sequence of functions is pointwise convergent, uniformly bounded, and Riemann integrable, then limits and integrals can be swapped
\( \lim_{n \to \infty} f_n = f \text{ pointwise on } [a,b] \land (f_n)^{\infty}_{n=1} \text{ is uniformly bounded } \land f_n \text{ is Riemann integrable on }[a,b] \land f \text{ is integrable on }[a,b] \implies\)
\( \lim_{n \to \infty} \int^{b}_{a} f_n (x)dx = \int^{b}_{a} \lim_{n \to \infty} f_n (x)dx = \int^{b}_{a} f(x)dx \)
Theorem asserting that any continuous function on a closed and bounded interval can be uniformly approximated by some polynomial with arbitrarily small error
\( f \text{ is continuous on } [a,b] \implies \forall \varepsilon \gt 0, \exists P : \sup_{x \in [a,b] }|f(x) - P(x)| \lt \varepsilon \)
\( X \text{ is a compact Hausdorff space} \land A \text{ is a subalgebra of } C(X,\mathbb{R}) \text{ that separates points and contains the constants} \implies A \text{ is dense in } C(X,\mathbb{R}) \)
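A sketch (Python; Bernstein polynomials, a standard constructive device not mentioned above, and the example \(f(x)=|x-\frac{1}{2}|\) are my own choices) illustrating polynomial approximation on \([0,1]\) with shrinking sup error:

```python
import math

# Bernstein polynomial B_n f(x) = sum_{k=0}^{n} f(k/n) C(n,k) x^k (1-x)^(n-k)
def bernstein(f, n: int, x: float) -> float:
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)      # continuous on [0, 1] but not smooth at 0.5
grid = [i / 200 for i in range(201)]

for n in (5, 50, 500):
    sup_err = max(abs(f(x) - bernstein(f, n, x)) for x in grid)
    print(n, sup_err)           # the sup error shrinks as n grows
```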
A class of functions with range \( \{ 0,1\} \) that return \(1\) iff the input belongs to some set \(S\)
\( \chi_{S}(x)= \begin{cases} 1 & x \in S \\ 0 & x \notin S \end{cases} \)
A characteristic function that returns 1 on a rational input and 0 on an irrational input. It is an example of a nowhere-continuous function
\( \chi_{\mathbb{Q}}(x) = \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases} = \lim_{n \to \infty} \lim_{k \to \infty} \cos(n!\pi x)^{2k} \)
\( \forall n, f_n \text{ is continuous on } [a,b] \land \forall x, (f_n(x))_{n} \text{ is monotone in } n \land \lim_{n \to \infty} f_n = f \text{ pointwise} \land f\text{ is continuous } \implies \lim_{n \to \infty} f_n = f \text{ uniformly}\)
WLOG assume \((f_n)\) is monotone decreasing in \(n\) and \(\lim_{n \to \infty} f_n = 0\) pointwise (replace \(f_n\) by \(f_n - f\))
Define \(M_n = \sup \{f_n (x) :x \in [a,b] \}\) and note the proposition \(\lim_{n \to \infty} M_n = 0\) implies uniform convergence since if the supremum of some \(f_n\) can be chosen to be arbitrarily close to 0, the whole domain of \(f_n\) is even closer to 0, hence uniformly convergent.
By way of contradiction assume \(\forall n, M_n \gt \varepsilon\), implying that \( \exists x_n : f_n (x_n) \gt \varepsilon\). By Bolzano–Weierstrass, \((x_n)\) has a subsequence \(x_{n_k} \to L \in [a,b]\). Note the following properties:
For a fixed \(m\), monotonicity gives \(f_m (x_{n_k}) \geq f_{n_k} (x_{n_k}) \gt \varepsilon\) for \(n_k \geq m\); letting \(k \to \infty\) and using continuity of \(f_m\) gives \(f_m (L) \geq \varepsilon\) for every \(m\), contradicting \( \lim_{m \to \infty} f_m (L) = 0 \). Therefore \(\lim_{n \to \infty} M_n = 0\) and the convergence is uniform
\( \displaystyle \Gamma (z) = \int_{0}^{\infty} t^{z-1}e^{-t} dt \)
The fact that \(\Gamma (z+1) = z \Gamma (z) \) can be proved using integration by parts
\( \Gamma(z) = \int_{0}^{\infty} t^{z-1}e^{-t} dt = [-t^{z-1}e^{-t}]^{\infty}_{0} - \int_{0}^{\infty} -(z-1)t^{(z-1)-1}e^{-t} dt\)
\( = \int_{0}^{\infty} (z-1)t^{(z-1)-1}e^{-t} dt = (z-1) \Gamma (z-1)\)
\(\Gamma (n) = (n-1)! \) is a simple corollary of this fundamental property: \( \Gamma(1) = 1 = 0!\), and assuming \(\Gamma(k)=(k-1)!\), \(\Gamma(k+1)=k \Gamma (k)= k(k-1)!=k!\), so the claim follows by induction
Euler's product form of the Gamma function can be derived by abusing the fact that \(\forall z, \lim_{n \to \infty} \frac{n!(n+1)^z}{(n+z)!} = 1\). We can then assume that \(\lim_{n \to \infty} \frac{n!(n+1)^z \Gamma(z)}{(n+z)!} = \Gamma (z)\)
\( \implies \lim_{n \to \infty} \frac{n!(n+1)^z (z-1)!}{(n+z)!} = \Gamma (z)\)
\( \implies \frac{1}{z}\lim_{n \to \infty} \frac{n!(n+1)^z z!}{(n+z)!} = \Gamma (z)\)
\( \implies \frac{1}{z}\lim_{n \to \infty} \frac{ (n (n-1) ... 1)( \frac{2}{1} \frac{3}{2} \frac{4}{3}\frac{5}{4} ... \frac{n+1}{n} )^z }{ ((z+1)(z+2)...(n+z))} = \Gamma (z)\)
\( \implies \Gamma (z) = \frac{1}{z} \prod^{\infty}_{n=1} [ \frac{1}{1+ \frac{z}{n}} (1+ \frac{1}{n})^z ] \)
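A sketch (Python; the truncation length is my own choice) checking \(\Gamma(n)=(n-1)!\) with the standard library and approximating \(\Gamma(z)\) by truncating the Euler product above:

```python
import math

# Gamma at positive integers: Gamma(n) = (n-1)!
for n in range(1, 6):
    print(n, math.gamma(n), math.factorial(n - 1))

# Truncated Euler product: Gamma(z) ~ (1/z) * prod_{n=1}^{N} (1 + 1/n)^z / (1 + z/n)
def gamma_euler(z: float, N: int = 100000) -> float:
    prod = 1.0 / z
    for n in range(1, N + 1):
        prod *= (1.0 + 1.0 / n) ** z / (1.0 + z / n)
    return prod

print(gamma_euler(0.5), math.gamma(0.5), math.sqrt(math.pi))   # all ~1.7724
```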
\( \displaystyle B (a,b) = \int_{0}^{1} t^{a-1} (1-t)^{b-1} dt : a,b \gt 0\)
Integration technique by passing a differential operator under the integral for a non-integrated variable
\( \displaystyle \frac{d}{dt} (\int_{a}^{b} f(x,t) dx ) = \int_{a}^{b} \frac{\partial}{\partial t} f(x,t) dx \)
\( \displaystyle \frac{d}{dt} (\int_{u_1(t)}^{u_2(t)} f(x,t) dx) = f(u_2(t),t)u'_2(t) -f(u_1(t),t)u'_1(t) + \int_{u_1(t)}^{u_2(t)} \frac{\partial}{\partial t} f(x,t) dx\)
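A sketch (Python; \(f(x,t)=e^{-tx^2}\) on \([0,1]\) and the step sizes are my own choices) numerically comparing \(\frac{d}{dt}\int_0^1 f(x,t)dx\) with \(\int_0^1 \frac{\partial}{\partial t}f(x,t)dx\):

```python
import math

# f(x, t) = exp(-t x^2); compare d/dt of the integral with the integral of df/dt
def integral(g, a: float, b: float, n: int = 20000) -> float:
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) * dx for i in range(n))

t, h = 1.0, 1e-5
F = lambda t: integral(lambda x: math.exp(-t * x * x), 0.0, 1.0)

lhs = (F(t + h) - F(t - h)) / (2 * h)                               # d/dt of the integral
rhs = integral(lambda x: -x * x * math.exp(-t * x * x), 0.0, 1.0)   # integral of df/dt

print(lhs, rhs, abs(lhs - rhs))
```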
Theorem that permits swapping order of multivariable integrals, useful for integrating under integral sign
\( \displaystyle \int_{a}^{b} \int_{c}^{d} f(x,y) dydx =\int_{c}^{d} \int_{a}^{b}f(x,y)dxdy \)
Integration technique that replaces factors within an integral with a series
\( \displaystyle \int_{-\infty}^{\infty} e^{-x^2}dx = \sqrt{\pi} \)
\( I^2 = (\int_{-\infty}^{\infty} e^{-x^2}dx)^2 = (\int_{-\infty}^{\infty} e^{-x^2}dx)(\int_{-\infty}^{\infty} e^{-y^2}dy)\)
\( I^2 = \int^{2\pi}_{0} \int_{0}^{\infty} e^{-r^2}r dr d\theta\) (convert to polar coordinates, \(dxdy = rdrd\theta\))
\( 2\pi \int_{0}^{\infty} e^{-r^2}r dr \)
\( 2\pi \int^{0}_{-\infty} \frac{1}{2} e^{u} du \) (substitute \(u=-r^2\), \(du = -2rdr\))
\( \pi \int^{0}_{-\infty} e^{u} du \)
\( I^2 = \pi \)
\( I = \sqrt{\pi} \)
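A sketch (Python; the truncation at \(\pm 8\) and the step count are my own choices) checking \(\int_{-\infty}^{\infty} e^{-x^2}dx = \sqrt{\pi}\) with a midpoint rule:

```python
import math

# Midpoint-rule approximation of the Gaussian integral, truncated to [-8, 8]
# (the tail beyond |x| = 8 is negligible since e^{-64} is astronomically small)
a, b, n = -8.0, 8.0, 200000
dx = (b - a) / n
approx = sum(math.exp(-(a + (i + 0.5) * dx) ** 2) * dx for i in range(n))

print(approx, math.sqrt(math.pi), abs(approx - math.sqrt(math.pi)))
```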
\( \int_{-\infty}^{\infty} \frac{\sin(x)}{x} dx = \pi \)