37363 - Introduction to Optimization


Maximum

Largest element in a function's image

\( c = \max_{u \in U} [f(u)] \iff c \in f(U) \wedge \forall u \in U [ f(u) \leq c ]\)

Argument maximum

Domain element mapped to the largest element in a function's image

\( c = \text{arg max}_{u \in U} [f(u)] \iff c \in U \wedge \forall u \in U [ f(u) \leq f(c) ]\)

Optimization problem

Problem of minimizing or maximizing an objective function (OF), possibly subject to constraints

Types of optimization problems

Feasible region

Set of points that satisfy all of the problem's constraints

Linear Programming

Graphical method

Method for LP problems in two variables: plot the feasible region and slide a level line of the OF across it; an optimum, if one exists, occurs at an extreme point of the region

Inequality method

For simple LP problems, one can bound the OF directly by scaling and adding the constraint inequalities, as in the example below
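For instance, with illustrative numbers: to maximize \(x + 2y\) subject to \(x + y \leq 4\) and \(y \leq 3\), add the two constraints to get

\( (x + y) + y \leq 4 + 3 \implies x + 2y \leq 7 \)

so the maximum is \(7\), attained where both constraints are tight, at \((x, y) = (1, 3)\)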

Standard form

Form in which all constraints are equalities \(\mathbf{A}\mathbf{x} = \mathbf{b}\) and all DVs are nonnegative

Slack and surplus variables

Variable used to create an equality from an inequality (used to put an optimization problem into standard form)

\(ax+by \geq c \iff \exists s \in [0,\infty) ( ax+by-s = c ) \)

\(ax+by \leq c \iff \exists s \in [0,\infty) ( ax+by+s = c ) \)

To put optimization problems whose decision variables (DVs) are unrestricted in sign into standard form, note that the difference of two nonnegative reals is unrestricted in sign

\(x \in \mathbb{R} \iff \exists p,q \in [0,\infty) (x=p-q) \)
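For instance, with illustrative numbers: the problem \(\min 3x_1 + x_2\) subject to \(x_1 + x_2 \geq 2\), \(x_1 \leq 5\), \(x_1 \geq 0\), \(x_2\) unrestricted in sign becomes, after substituting \(x_2 = p - q\) and introducing surplus \(s_1\) and slack \(s_2\),

\( \min \; 3x_1 + p - q \quad \text{s.t.} \quad x_1 + p - q - s_1 = 2, \quad x_1 + s_2 = 5, \quad x_1, p, q, s_1, s_2 \geq 0 \)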

Convex set

Set that contains every line segment between any two of its points

\(S \text{ is convex } \iff \forall \mathbf{x},\mathbf{y} \in S \; \forall t \in [0,1] \; ( (1-t)\mathbf{x}+t\mathbf{y} \in S ) \)

Properties

\( U \text{ is a closed half space } \implies U \text{ is convex} \)

\(E,F \text{ are convex } \implies E \cap F \text{ is convex } \)

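A brief proof sketch of the first property: for \(\mathbf{x}, \mathbf{y}\) in the half space \(\{\mathbf{z} : \mathbf{a}^{\intercal}\mathbf{z} \leq b\}\) and \(t \in [0,1]\),

\( \mathbf{a}^{\intercal}((1-t)\mathbf{x} + t\mathbf{y}) = (1-t)\,\mathbf{a}^{\intercal}\mathbf{x} + t\,\mathbf{a}^{\intercal}\mathbf{y} \leq (1-t)b + tb = b \)

so the segment stays in the half space. The second property follows since a segment between two points of \(E \cap F\) lies in both \(E\) and \(F\)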

Closed half space

Set of all points on one side of a hyperplane, including the hyperplane itself

\(U \text{ is a closed half space } \iff \exists \mathbf{a} \neq \mathbf{0},\, b \; ( U = \{ \mathbf{x} : \mathbf{a}^{\intercal} \mathbf{x} \leq b \} )\)

Extreme point

Point of a convex set that lies on a line segment contained in the set only if it is an endpoint of that segment

Basic feasible solution (BFS)

For an LP problem in standard form with constraints \(\mathbf{A}\mathbf{x}=\mathbf{b}\), let \(\mathbf{A}_B\) be a submatrix of \(\mathbf{A}\) whose columns form a basis of \(\mathrm{col}(\mathbf{A})\) and let \(\mathbf{x}_{B}\) be the vector of variables corresponding to those columns. The basic solution sets \(\mathbf{x}_{B} = \mathbf{A}_B^{-1}\mathbf{b}\) and all remaining (nonbasic) variables to zero; it is a BFS when \(\mathbf{x}_{B} \geq \mathbf{0}\)
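A minimal sketch of how BFSs can be enumerated numerically (assuming numpy; the matrix \(\mathbf{A}\) and vector \(\mathbf{b}\) are hypothetical illustrative data, not from the course):

```python
from itertools import combinations
import numpy as np

# Illustrative standard-form constraints A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
m, n = A.shape

# Try every choice of m columns as a candidate basis A_B.
for basis in combinations(range(n), m):
    A_B = A[:, basis]
    if abs(np.linalg.det(A_B)) < 1e-12:
        continue  # columns are dependent, not a basis
    x_B = np.linalg.solve(A_B, b)  # basic solution x_B = A_B^{-1} b
    if np.all(x_B >= -1e-12):      # feasibility check x_B >= 0
        x = np.zeros(n)
        x[list(basis)] = x_B
        print(f"basis {basis}: BFS x = {x}")
```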

Adjacent extreme point

Given \(m\) constraints, two extreme points are adjacent if their respective BFSs share \(m-1\) basic variables

Ratio test

Test to determine the leaving variable in the simplex method: the basic variable whose constraint most tightly limits the increase of the entering variable leaves the basis

\( i = \text{arg min}_{i} \left\{ \frac{\hat{\mathbf{b}}_i}{\hat{\mathbf{A}}_{ij}} : \hat{\mathbf{A}}_{ij} > 0 \right\} \)

Simplex method

Greedy algorithm that moves from one extreme point (typically starting at \(\mathbf{0}\)) to an adjacent extreme point with a better OF value until an optimal extreme point is found
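A hedged sketch of solving a small LP in code; scipy's linprog (HiGHS backend) is used here as a stand-in for a hand-rolled simplex, and the problem data is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP: min -x1 - 2*x2  s.t.  x1 + x2 <= 4,  x2 <= 3,  x >= 0
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
b_ub = np.array([4.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)  # optimum at the extreme point x = (1, 3), OF = -7
```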

Artificial variables

Variables appended to constraints that lack an obvious starting BFS, so that the simplex method can begin from an identity basis; they must be driven to zero, e.g. by the Big M or two phase method

Big M method

Simplex variant that adds the artificial variables to the OF with a large penalty coefficient \(M\), forcing them out of the basis during optimization

Two phase simplex method

Simplex variant that first minimizes the sum of the artificial variables to find a BFS (phase one), then optimizes the original OF starting from that BFS (phase two), as sketched below
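As a sketch, for a minimization problem in standard form with artificial variable vector \(\mathbf{a}\), the two modified OFs are

\( \text{Big M: } \min \; \mathbf{c}^{\intercal}\mathbf{x} + M \mathbf{1}^{\intercal}\mathbf{a}, \quad M \gg 0 \)

\( \text{Phase one: } \min \; \mathbf{1}^{\intercal}\mathbf{a}, \quad \text{then phase two minimizes } \mathbf{c}^{\intercal}\mathbf{x} \text{ from the BFS found} \)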

Forms

Canonical form

Form of the tableau in which every basic variable has coefficient \(1\) in exactly one row and \(0\) elsewhere, so a BFS can be read off directly

Normal form

Form in which all DVs are nonnegative and all constraints are \(\geq\) inequalities for a minimization LP (or \(\leq\) inequalities for a maximization LP)

Dual LP

For any minimization LP in normal form \(P=(\mathbf{c},\mathbf{A},\mathbf{b},\min)\), the maximization LP in normal form \(D=(\mathbf{b},\mathbf{A}^{\intercal},\mathbf{c},\max)\) is its dual LP, where \(P\) is called the primal LP

Symmetric Dual Problem

For an LP in normal form, the dual is obtained directly from the formulation above, as in the example below
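A small worked example with illustrative numbers: the primal

\( P: \; \min \; 2x_1 + 3x_2 \quad \text{s.t.} \quad x_1 + x_2 \geq 4, \; x_1 \geq 1, \; x_1, x_2 \geq 0 \)

is in normal form, so its symmetric dual is

\( D: \; \max \; 4y_1 + y_2 \quad \text{s.t.} \quad y_1 + y_2 \leq 2, \; y_1 \leq 3, \; y_1, y_2 \geq 0 \)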

Asymmetric Dual Problem

For an LP not in normal form, one translates the form of each primal constraint to the sign restriction of the corresponding dual DV, and the sign restriction of each primal DV to the form of the corresponding dual constraint, according to these rules:

Normal form constraint \(\iff\) nonnegative DV

Reversed form constraint \(\iff\) nonpositive DV

Equality constraint \(\iff\) unrestricted sign DV

These rules can be deduced by putting the primal into normal form, finding its dual, and bringing the dual back out of normal form

Duality lemma

Dual of a dual is the primal

Weak Duality theorem (WDT)

Theorem asserting that the OF value of any feasible point of a minimization primal is bounded below by the dual OF value of any feasible point of its dual

\( \forall \mathbf{x}\in P,\mathbf{y} \in D [ \mathbf{c}^{\intercal}\mathbf{x} \geq \mathbf{b}^{\intercal}\mathbf{y} ]\)

Strong duality theorem (SDT)

Theorem asserting that if a primal problem has an optimal solution, its dual does too, and the two optimal OF values are equal
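A hedged numeric check of the theorem on the illustrative primal-dual pair from the worked dual example above (assuming scipy):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative primal: min 2*x1 + 3*x2  s.t.  x1 + x2 >= 4,  x1 >= 1,  x >= 0
# linprog uses <= constraints, so the >= rows are multiplied by -1.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 1.0])

primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")
# Dual: max b^T y  s.t.  A^T y <= c,  y >= 0  (negate the OF to maximize).
dual = linprog(-b, A_ub=A.T, b_ub=c, method="highs")

print(primal.fun, -dual.fun)  # both print 8.0, as the SDT asserts
```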

Complementary slackness theorem (CST)

Corollary of the SDT giving two sets of conditions that the optimal solutions of a primal and its dual satisfy

Theorem asserting that a positive DV in the primal solution forces the corresponding dual constraint to hold with equality, and that a slack (non-tight) primal constraint forces the corresponding dual DV to be zero
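For the normal form pair (min primal with \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\), \(\mathbf{x} \geq \mathbf{0}\)), the conditions can be stated compactly as

\( \mathbf{x}^{\intercal}(\mathbf{c} - \mathbf{A}^{\intercal}\mathbf{y}) = 0 \quad \text{and} \quad \mathbf{y}^{\intercal}(\mathbf{A}\mathbf{x} - \mathbf{b}) = 0 \)

where each product vanishes term by term because all factors are nonnegative at feasible points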

Dual simplex method

Variant of the simplex method that starts from a solution that is optimal (dual feasible) but primal infeasible and updates it until it becomes feasible. One chooses the leaving variable from the negative components of \(\hat{\mathbf{b}}\), then performs a ratio test on \(\hat{\mathbf{c}}^{\intercal}\) to determine the entering variable. Once the solution is feasible, continue with the regular simplex method

Sensitivity analysis

Sensitivity analysis refers to techniques that provide bounds for how much an optimization problem's parameters may be changed while still retaining the same optimal solutions or feasible regions

Sensitivity analysis of OF

When every reduced cost in the final tableau is nonpositive (in the min case), the solution is optimal. The idea is to see how much each cost coefficient can change before a reduced cost component becomes positive (again, for min)

\(\mathbf{c}_{\mathbf{N}}'^{\intercal}= \mathbf{c}_{\mathbf{N}}^{\intercal} + (\Delta\mathbf{c}_{\mathbf{N}})^{\intercal} \)

\(\mathbf{c}_{\mathbf{B}}'^{\intercal}= \mathbf{c}_{\mathbf{B}}^{\intercal} + (\Delta\mathbf{c}_{\mathbf{B}})^{\intercal} \)

Nonbasic

Note: these conditions are for min; flip the inequalities for max

\( (\Delta\mathbf{c}_{\mathbf{N}})^{\intercal} - \hat{\mathbf{c}}_{\mathbf{N}}^{\intercal} \geq \mathbf{0} \implies \text{ doesn't change optimal solution}\)

Basic

\( (\Delta\mathbf{c}_{\mathbf{B}})^{\intercal} \mathbf{B}^{-1}\mathbf{N} + \hat{\mathbf{c}}_{\mathbf{N}}^{\intercal} \leq \mathbf{0} \implies \text{ doesn't change optimal solution}\)
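A minimal numeric sketch of the nonbasic condition, with hypothetical final-tableau data; \(\mathbf{B}\) and \(\mathbf{N}\) are the basic and nonbasic columns:

```python
import numpy as np

# Illustrative final-tableau data (hypothetical, min problem).
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])        # basic columns
N = np.array([[1.0, 2.0],
              [3.0, 1.0]])        # nonbasic columns
c_B = np.array([1.0, 2.0])
c_N = np.array([10.0, 9.0])

# Reduced costs under these notes' convention: c_hat_N = c_B^T B^{-1} N - c_N^T
c_hat_N = c_B @ np.linalg.solve(B, N) - c_N
print(c_hat_N)  # nonpositive => current basis optimal (min case)

# A nonbasic cost change dc_N keeps the solution optimal iff dc_N - c_hat_N >= 0,
# i.e. each nonbasic cost may decrease by at most -c_hat_N[j].
dc_N = np.array([-1.0, -2.0])
print(np.all(dc_N - c_hat_N >= 0))  # True for this illustrative change
```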

Sensitivity analysis of RHS

When the updated RHS \(\hat{\mathbf{b}} = \mathbf{B}^{-1}\mathbf{b}\) in the final tableau is all nonnegative, the BFS is feasible. The idea is to see how much can be added to \(\mathbf{b}\) before an updated RHS component becomes negative

\(\mathbf{b}' = \mathbf{b}+ \Delta\mathbf{b} \)

\( \mathbf{B}^{-1}(\mathbf{b}+\Delta\mathbf{b}) \geq \mathbf{0} \implies \text{ doesn't change feasible region}\)
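A minimal numeric sketch of the feasibility condition, with hypothetical basis and RHS data:

```python
import numpy as np

# Illustrative basis and RHS (hypothetical data).
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
b = np.array([4.0, 5.0])

# Change only the first RHS component by delta and watch where
# B^{-1}(b + delta*e1) first goes negative.
e1 = np.array([1.0, 0.0])
for delta in (-6.0, -4.0, 0.0, 6.0, 8.0):
    x_B = np.linalg.solve(B, b + delta * e1)
    print(delta, x_B, "feasible" if np.all(x_B >= 0) else "infeasible")
# Here the feasible range works out to delta in [-4, 6].
```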

Sensitivity analysis of A

Nonbasic

For a change in a column of \(\mathbf{A}\) associated with a nonbasic variable in the final tableau, the optimal solution is unchanged as long as the updated reduced cost stays nonpositive (for min)

\( \mathbf{A}'_{j} = \mathbf{A}_{j} +\Delta \mathbf{A}_j \)

\( \hat{\mathbf{c}}_{j} + \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} ( \Delta\mathbf{A}_j ) \leq 0 \implies \text{ doesn't change optimal solution}\)

Sensitivity analysis of constraint appending

Check that the optimal solution satisfies the new constraint; if so, the optimal solution is unchanged. If not, use row reductions to bring the tableau back to canonical form and append the constraint with its new variable as a basis element. Note that the resulting BFS may be infeasible; in this case continue using the dual simplex method

Sensitivity analysis of variable appending

If \( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - \mathbf{c}_{j} \leq 0 \) (for min), the new variable can safely be entered as a nonbasic variable without changing the optimal solution; otherwise, enter the column \(\mathbf{B}^{-1}\mathbf{A}_j\) into the final tableau, enter \( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - \mathbf{c}_{j} \) into the reduced cost row, and continue with the simplex method

\( \mathbf{c}_{\mathbf{B}}^{\intercal} \mathbf{B}^{-1} \mathbf{A}_{j} - \mathbf{c}_{j} \leq 0 \implies \text{ nonbasic entry doesn't change optimal solution}\)

Nonlinear Programming

Integer Programming