Let $M_1, M_2, M_3, N_1, N_2$ be finite sets of indices.
Let $a_i \in \mathbb{R}^n$ for $i \in M_1 \cup M_2 \cup M_3$,
and let $b_i \in \mathbb{R}$ for $i \in M_1 \cup M_2 \cup M_3$.
Finally let $c \in \mathbb{R}^n$ be a cost vector.
A linear program is an optimisation problem of the form:

Minimize $c^T x$ over $x \in \mathbb{R}^n$ subject to:
$a_i^T x \ge b_i$ for $i \in M_1$,
$a_i^T x \le b_i$ for $i \in M_2$,
$a_i^T x = b_i$ for $i \in M_3$,
$x_j \ge 0$ for $j \in N_1$,
$x_j \le 0$ for $j \in N_2$.

i.e. we are minimizing the linear function
$x \mapsto c^T x$ over $x \in P$
for some polyhedron $P \subseteq \mathbb{R}^n$ that is defined by the linear constraints.
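
As a concrete instance (an illustrative example of my own, not from the source), take $n = 2$, $M_1 = \{1\}$, $M_2 = M_3 = \emptyset$, $N_1 = \{1, 2\}$, $N_2 = \emptyset$, with $a_1 = (1, 2)$, $b_1 = 3$ and $c = (1, 1)$:

```latex
% Minimize c^T x with c = (1, 1), subject to a_1^T x >= b_1
% (a_1 = (1, 2), b_1 = 3) and the sign constraints from N_1:
\begin{aligned}
\text{minimize}\quad   & x_1 + x_2 \\
\text{subject to}\quad & x_1 + 2 x_2 \ge 3 \\
                       & x_1 \ge 0, \quad x_2 \ge 0
\end{aligned}
```

Here $P$ is the polyhedron in $\mathbb{R}^2$ cut out by the three inequalities.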

There are other equivalent forms a linear program can take:
Forms of linear programs

Lemma

Suppose an LP attains its optimal value and its feasible polyhedron has an Extreme Point.
Then some optimal solution $x^*$ is a Basic Feasible Solution.

Proof

The Maximum of a Convex Function is always attained at an Extreme Point (of the domain).
Equivalently, the minimum of a concave function is attained at an Extreme Point.
Note that the linear objective $x \mapsto c^T x$ is both convex and concave.
So it suffices to find all Extreme Points and evaluate the objective there.
We are done by the Theorem below.
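
The convexity facts used above can be written out explicitly (a standard statement; I am assuming $P$ is a polytope with extreme point set $\operatorname{ext}(P)$):

```latex
% For a convex f on a polytope P, the maximum is attained at an extreme point:
\max_{x \in P} f(x) = \max_{v \in \operatorname{ext}(P)} f(v),
% hence for a concave f the minimum is attained at an extreme point:
\min_{x \in P} f(x) = \min_{v \in \operatorname{ext}(P)} f(v).
% The linear objective f(x) = c^T x is both convex and concave,
% so its minimum over P is attained at an extreme point.
```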

Theorem

Suppose the set of feasible values is nonempty,
i.e. there is some $x$ satisfying the constraints,
and that the objective $c^T x$ is bounded below on the feasible set.
Then there exists an algorithm that finds the optimal solution(s) to the linear program,
and this algorithm always terminates.

Proof

Consider a Linear program in the standard form:
Minimize $c^T x$ over $x \in \mathbb{R}^n$ subject to $Ax = b$ and $x \ge 0$,
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$.

We may assume that the rows of $A$ are Linearly Independent
(given the existence of some $x$ such that $Ax = b$,
we can just remove redundant rows without changing the problem).

For a set $B \subseteq \{1, \dots, n\}$ we denote by $A_B$ the submatrix of $A$
obtained by taking the $i$-th columns of $A$ for $i \in B$.

Now onto the algorithm:

  1. Let $B \subseteq \{1, \dots, n\}$ with $|B| = m$ (we have $\binom{n}{m}$ choices)
  2. If $A_B$ is not invertible, choose the next $B$,
    otherwise set $x_B = A_B^{-1} b$ and $x_i = 0$ for $i \notin B$
  3. Then $x$ is a Basic Solution to $Ax = b$
  4. If $x \ge 0$, then $x$ is a Basic Feasible Solution
    and we note down $x$ with the associated cost $c^T x$
  5. Go back to step 1. until we have tried all possible $B$,
    then proceed to step 6.
  6. We take the minimum among the noted-down costs.

Note that we have necessarily gone through all Basic Solutions,
because each of them corresponds to some Basis $B$.