Largest element in a function's image
\( c = \max_{u \in U} [f(u)] \iff c \in f(U) \wedge \forall u \in U [ f(u) \leq c ]\)
Domain element mapped to the largest element in a function's image
\( c = \text{arg max}_{u \in U} [f(u)] \iff \forall u \in U [ f(u) \leq f(c) ]\)
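A tiny numpy sketch of the distinction, with an assumed example objective \(f(u) = -(u-1)^2\) over a finite domain: the max is the largest image value, the arg max is the domain element achieving it.

```python
import numpy as np

# Assumed example data: finite domain U and objective f.
U = np.array([-2.0, 0.0, 3.0, 1.0])
f = lambda u: -(u - 1.0) ** 2

values = f(U)
c = values.max()              # largest element of the image: f(1) = 0
u_star = U[values.argmax()]   # domain element achieving it: u = 1
assert np.all(values <= c)    # every image value is bounded by the max
```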
Mathematical function to be minimized or maximized, possibly subject to constraints
Set of arguments that fit all constraints
One can use arithmetic of inequalities for simple LP problems
Variable used to create an equality from an inequality (used to put an optimization problem into standard form)
\(ax+by \geq c \iff \exists s \in [0,\infty) ( ax+by-s = c ) \)
\(ax+by \leq c \iff \exists s\in [0,\infty) ( ax+by+s = c ) \)
To put optimization problems whose DVs are unrestricted in sign into standard form, use the fact that the difference of two nonnegative reals is unrestricted in sign
\(x \in \mathbb{R} \iff \exists p,q \in [0,\infty) (x=p-q) \)
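A short numpy sketch of both conversions, with assumed example numbers: a slack variable turns \(2x+3y \leq 12\) into an equality, and a sign-unrestricted value splits as \(p-q\) with \(p,q \geq 0\).

```python
import numpy as np

# Assumed example constraint: 2x + 3y <= 12, made an equality via slack s >= 0.
x, y = 1.0, 2.0                 # a feasible point: 2*1 + 3*2 = 8 <= 12
s = 12 - (2 * x + 3 * y)        # slack absorbs the remaining room
assert s >= 0 and np.isclose(2 * x + 3 * y + s, 12)

# A variable unrestricted in sign splits into a difference of nonnegatives.
v = -5.0
p, q = max(v, 0.0), max(-v, 0.0)
assert p >= 0 and q >= 0 and p - q == v
```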
Connected set that contains all line segments between its points
\(S \text{ is convex } \iff \forall \mathbf{x},\mathbf{y} \in S \; \forall t \in [0,1] \; [ (1-t)\mathbf{x}+t\mathbf{y} \in S ] \)
\( U \text{ is a closed half space } \implies U \text{ is convex} \)
\(E,F \text{ are convex } \implies E \cap F \text{ is convex } \)
Partition of a vector space by a plane
\(U \text{ is a closed half space } \iff \exists \mathbf{a} \neq \mathbf{0}, b \; [ \, U = \{ \mathbf{x} : \mathbf{a}^{\intercal} \mathbf{x} \leq b \} \, ]\)
Point in a convex set such that no segment contains it unless it is an endpoint of the segment
For an LP in standard form with constraints \(\mathbf{A}\mathbf{x}=\mathbf{b}\), let \(\mathbf{A}_B\) be a submatrix of \(\mathbf{A}\) whose columns form a basis of \(\mathrm{col}(\mathbf{A}) \) and \(\mathbf{x}_{B}\) the vector of variables whose coefficients in the constraints correspond to the basis columns. Setting \(\mathbf{x}_{B} = \mathbf{A}_B^{-1}\mathbf{b}\) and all other (nonbasic) variables to zero gives a basic solution; it is a basic feasible solution (BFS) if \(\mathbf{x}_{B} \geq \mathbf{0}\).
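A numpy sketch of extracting a BFS from a basis, with assumed example data: picking basic columns, solving \(\mathbf{A}_B \mathbf{x}_B = \mathbf{b}\), and zeroing the nonbasic variables.

```python
import numpy as np

# Assumed standard-form constraints A x = b.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 3.0])

basis = [0, 1]                   # columns chosen as a basis of col(A)
A_B = A[:, basis]
x_B = np.linalg.solve(A_B, b)    # basic variable values

x = np.zeros(A.shape[1])
x[basis] = x_B                   # nonbasic variables stay at zero
assert np.allclose(A @ x, b)     # x satisfies the constraints
assert np.all(x_B >= 0)          # and is nonnegative, hence a BFS
```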
Given \(m\) constraints, two extreme points are adjacent if their respective BFSs share \(m-1\) DVs
Test to determine the leaving variable in the simplex method, i.e., the variable whose constraint most tightly limits the increase of the entering variable is the one that leaves
\( i = \text{arg min}_i \left\{ \frac{\hat{\mathbf{b}}_i}{\hat{\mathbf{A}}_{ij}} : \hat{\mathbf{A}}_{ij} > 0 \right\} \)
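The ratio test can be sketched in numpy with assumed tableau data: only rows with a positive entry in the entering column compete, and the smallest ratio wins.

```python
import numpy as np

# Assumed example data: current RHS column and the entering variable's column.
b_hat = np.array([6.0, 4.0, 2.0])
col = np.array([2.0, -1.0, 1.0])

# Rows with col <= 0 never bind; give them an infinite ratio.
safe = np.where(col > 0, col, 1.0)
ratios = np.where(col > 0, b_hat / safe, np.inf)   # [3, inf, 2]
leaving_row = int(np.argmin(ratios))               # row 2 binds first
```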
Greedy algorithm that enumerates extreme points, moving from \(\mathbf{0}\) to an adjacent extreme point until the optimal extreme point is found
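The whole procedure can be sketched as a minimal dense-tableau simplex in numpy. This is an illustrative sketch for \(\min \mathbf{c}^{\intercal}\mathbf{x}\) subject to \(\mathbf{A}\mathbf{x} \leq \mathbf{b}\), \(\mathbf{x} \geq \mathbf{0}\) with \(\mathbf{b} \geq \mathbf{0}\) (so \(\mathbf{x}=\mathbf{0}\) is the starting BFS), not production code; it uses no anti-cycling rule, and the example LP at the bottom is assumed data.

```python
import numpy as np

def simplex_min(c, A, b):
    """Sketch: min c^T x  s.t.  A x <= b, x >= 0, assuming b >= 0."""
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])   # [A | I | b]
    z = np.concatenate([c, np.zeros(m + 1)])          # reduced-cost row
    basis = list(range(n, n + m))                     # slacks start basic
    while True:
        j = int(np.argmin(z[:-1]))                    # entering variable
        if z[j] >= -1e-12:
            break                                     # no negative reduced cost: optimal
        col = T[:, j]
        if np.all(col <= 1e-12):
            raise ValueError("unbounded")
        safe = np.where(col > 1e-12, col, 1.0)
        ratios = np.where(col > 1e-12, T[:, -1] / safe, np.inf)
        i = int(np.argmin(ratios))                    # leaving row via ratio test
        T[i] /= T[i, j]                               # pivot
        for r in range(m):
            if r != i:
                T[r] -= T[r, j] * T[i]
        z -= z[j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], -z[-1]

# Assumed example: min -x - y  s.t.  x + 2y <= 4, 3x + y <= 6, x, y >= 0.
x_opt, val = simplex_min(np.array([-1.0, -1.0]),
                         np.array([[1.0, 2.0], [3.0, 1.0]]),
                         np.array([4.0, 6.0]))
# x_opt = [1.6, 1.2], val = -2.8
```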
Variables appended to constraints in modified simplex algorithms (e.g., the two-phase or Big-M method) to provide an initial basis.
For any minimization LP in normal form \(P=(\mathbf{c},\mathbf{A},\mathbf{b},\min)\), the maximization LP in normal form \(D=(\mathbf{b},\mathbf{A}^{\intercal},\mathbf{c},\max)\) is its dual LP, where \(P\) is called the primal LP
For an LP in normal form, finding its dual is done by the above formulation
For an LP not in normal form, one translates the form of each primal constraint into the sign restriction of the corresponding dual DV, and the sign restriction of each primal DV into the form of the corresponding dual constraint, according to these rules:
Normal-form constraint \(\iff\) nonnegative DV
Reversed-form constraint \(\iff\) nonpositive DV
Equality constraint \(\iff\) DV unrestricted in sign
These rules can be deduced by putting the primal into normal form, finding its dual, and bringing the dual back out of normal form
Dual of a dual is the primal
Theorem asserting that the OF value at any feasible element of a minimization primal is bounded below by the OF value at any feasible element of its dual
\( \forall \mathbf{x}\in P,\mathbf{y} \in D [ \mathbf{c}^{\intercal}\mathbf{x} \geq \mathbf{b}^{\intercal}\mathbf{y} ]\)
Theorem asserting that the optimal value of a primal problem equals the optimal value of its dual problem.
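Both statements can be checked numerically on an assumed toy pair: primal \(\min \mathbf{c}^{\intercal}\mathbf{x}\) s.t. \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\), \(\mathbf{x} \geq \mathbf{0}\), and dual \(\max \mathbf{b}^{\intercal}\mathbf{y}\) s.t. \(\mathbf{A}^{\intercal}\mathbf{y} \leq \mathbf{c}\), \(\mathbf{y} \geq \mathbf{0}\).

```python
import numpy as np

# Assumed example: min 2x1 + 3x2  s.t.  x1 + x2 >= 4, x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])

x = np.array([4.0, 0.0])             # primal-feasible (and in fact optimal)
y = np.array([1.0])                  # dual-feasible: y <= 2 and y <= 3
assert np.all(A @ x >= b) and np.all(A.T @ y <= c)
assert c @ x >= b @ y                # weak duality: 8 >= 4

y_opt = np.array([2.0])              # dual optimum
assert np.isclose(c @ x, b @ y_opt)  # strong duality: both optima equal 8
```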
Corollary of the strong duality theorem (SDT) that derives a set of 2 equations that the optimal solutions of a primal and its dual satisfy
Theorem asserting that a positive variable in the optimal primal solution forces the corresponding dual constraint to hold with equality, and that a positive slack in a primal constraint forces the corresponding dual variable to be zero
Starts with an optimal but infeasible solution, updated until the optimal solution is feasible
Variant of the simplex method which allows infeasibility of the primal but requires feasibility of the dual. One chooses the leaving variable based on \(\hat{\mathbf{b}}\) and performs the ratio test on \(\hat{\mathbf{c}}^{\intercal}\) to determine the entering variable. Once the solution is feasible, continue with the regular simplex method.
Sensitivity analysis refers to techniques that provide bounds for how much an optimization problem's parameters may be changed while still retaining the same optimal solutions or feasible regions
When the reduced costs in the final tableau are all nonpositive (in the min case), the solution is optimal. The idea is to see how much can be added to a cost coefficient before a reduced-cost component becomes positive (again, for min)
\(\mathbf{c}_{\mathbf{N}}'^{\intercal}= \mathbf{c}_{\mathbf{N}}^{\intercal} + (\Delta\mathbf{c}_{\mathbf{N}})^{\intercal} \)
\(\mathbf{c}_{\mathbf{B}}'^{\intercal}= \mathbf{c}_{\mathbf{B}}^{\intercal} + (\Delta\mathbf{c}_{\mathbf{B}})^{\intercal} \)
Note: these are for min; flip the inequalities for max
\( (\Delta\mathbf{c}_{\mathbf{N}})^{\intercal} - \hat{\mathbf{c}}_{\mathbf{N}}^{\intercal} \geq \mathbf{0} \implies \text{ doesn't change optimal solution}\)
\( (\Delta\mathbf{c}_{\mathbf{B}})^{\intercal} \mathbf{B}^{-1}\mathbf{N} + \hat{\mathbf{c}}_{\mathbf{N}}^{\intercal} \leq \mathbf{0} \implies \text{ doesn't change optimal solution}\)
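A numpy sketch of the nonbasic-cost test with assumed data, using this document's convention \(\hat{c}_j = z_j - c_j \leq 0\) at a min optimum: a change \(\Delta\mathbf{c}_{\mathbf{N}}\) preserves optimality as long as \(\Delta\mathbf{c}_{\mathbf{N}} - \hat{\mathbf{c}}_{\mathbf{N}} \geq \mathbf{0}\).

```python
import numpy as np

# Assumed example data from a final tableau (min problem).
c_hat_N = np.array([-3.0, -1.0])   # nonbasic reduced costs, all <= 0
dc_N = np.array([-2.0, 0.5])       # proposed changes to nonbasic costs

# New reduced costs are c_hat_N - dc_N; optimality holds iff they stay <= 0,
# i.e. iff dc_N - c_hat_N >= 0.
still_optimal = bool(np.all(dc_N - c_hat_N >= 0))
# [-2 - (-3), 0.5 - (-1)] = [1, 1.5] >= 0, so the basis stays optimal
```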
When the reduced RHS in the final tableau is all nonnegative, the BFS is feasible. The idea is to see how much can be added before an RHS component becomes negative
\(\mathbf{b}' = \mathbf{b}+ \Delta\mathbf{b} \)
\( \mathbf{B}^{-1}(\mathbf{b}+\Delta\mathbf{b}) \geq \mathbf{0} \implies \text{ doesn't change feasible region}\)
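A numpy sketch of the RHS feasibility test, with an assumed basis matrix and RHS: the current basis stays feasible after \(\mathbf{b} \to \mathbf{b} + \Delta\mathbf{b}\) exactly when \(\mathbf{B}^{-1}(\mathbf{b}+\Delta\mathbf{b}) \geq \mathbf{0}\).

```python
import numpy as np

# Assumed example data: basis matrix B and original RHS b.
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([5.0, 2.0])

B_inv = np.linalg.inv(B)            # B^{-1} = [[1, -2], [0, 1]]
db = np.array([0.0, -1.0])          # proposed change to the RHS
x_B_new = B_inv @ (b + db)          # [5 - 2*1, 1] = [3, 1]
assert np.all(x_B_new >= 0)         # basis stays feasible
```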
For a change in a column of \(\mathbf{A}\) associated with a nonbasic variable in the final tableau, the optimal solution is unchanged when the updated reduced cost stays nonpositive (min case):
\( \mathbf{A}'_{j} = \mathbf{A}_{j} +\Delta \mathbf{A}_j \)
\( \hat{c}_{j} + \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \Delta\mathbf{A}_j \leq 0 \implies \text{ doesn't change optimal solution}\)
Check that the optimal solution satisfies the new constraint; if so, the optimal solution is unchanged. If not, use row reductions to put the constraint in canonical form and add it to the tableau with a new basic variable. Note that the result may be infeasible; in this case continue with the dual simplex method.
If \( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - c_{j} \leq 0\) (for min), the new variable can safely enter the nonbasis without changing the optimal solution; otherwise, enter the column \(\mathbf{B}^{-1}\mathbf{A}_{j}\) into the final tableau and \( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - c_{j}\) into the reduced-cost row and continue with the simplex method
\( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - c_{j} \leq 0 \implies \text{ nonbasis entry doesn't change optimal solution}\)