Recall from earlier that in a linear program we have a set of constraints, $a_i^T x \le b_i$, each of which defines a half-space. The intersection of all these half-spaces, along with our standing assumption that $x \ge 0$, defines our feasible region: the region in space whose points satisfy all of our constraints. In general, the intersection of a finite number of half-spaces is called a *polyhedral set*, and if the set is bounded we call it a *polytope*.

One thing to notice about polyhedral sets is that they’re convex. To see this, suppose that our polyhedral set is defined by $Ax \le b$. Now let $x$ and $y$ be two points satisfying these constraints (i.e., lying in each of the corresponding half-spaces), and for some $\lambda \in [0, 1]$ let $z = \lambda x + (1 - \lambda) y$. We have the following:

$$Az = A(\lambda x + (1 - \lambda) y) = \lambda A x + (1 - \lambda) A y \le \lambda b + (1 - \lambda) b = b.$$

So $Az \le b$, and the set is convex.
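As a quick sanity check of the argument above, here is a small sketch in Python. The system $Ax \le b$ and the two feasible points are made-up illustrative values (not from the post); the point is just to verify numerically that every convex combination of two feasible points is again feasible:

```python
# Numerically check convexity of a polyhedral set {x : Ax <= b}.
# A, b, and the two points below are made-up illustrative values.

A = [[1.0, 1.0],   # x1 + x2 <= 4
     [1.0, -1.0],  # x1 - x2 <= 2
     [-1.0, 0.0],  # -x1 <= 0  (i.e., x1 >= 0)
     [0.0, -1.0]]  # -x2 <= 0  (i.e., x2 >= 0)
b = [4.0, 2.0, 0.0, 0.0]

def feasible(p, tol=1e-9):
    """True if every constraint a_i . p <= b_i holds (up to tolerance)."""
    return all(sum(a_ij * p_j for a_ij, p_j in zip(row, p)) <= b_i + tol
               for row, b_i in zip(A, b))

x, y = [0.5, 1.0], [2.5, 1.0]
assert feasible(x) and feasible(y)

# Every convex combination z = lam*x + (1 - lam)*y should also be feasible.
for k in range(11):
    lam = k / 10
    z = [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]
    assert feasible(z), f"convexity failed at lambda = {lam}"

print("all convex combinations feasible")
```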

Now, if we had only a few (two or three) variables, then we could explicitly graph the polyhedral set, and we’d notice that it has some sharp “corners” where two of the lines (or planes) forming the boundaries of the half-spaces intersect. These corner points, called *extreme points*, are of interest to us because if an optimal solution exists, it must occur at one of them.

As an example, consider the following linear program:

Maximize

Subject to the following:

Graphing these we have

The extreme points for our feasible region are the five corners above, each of which is the intersection of two of the lines determined by our constraints. Now let’s consider all of the points where our objective function would be equal to 2. These are the points satisfying the equation — all such points lie on a line, and in particular are the points on the red line below.

Note that if we were to consider the points where our objective function is 3 (the points lying on the line ), we’d have a similar picture, but with our line simply shifted up one unit:

Remember that our goal is to maximize our objective function while satisfying all of the constraints, i.e., while staying within the feasible region. Geometrically, then, what we should do is move this line up as far as we can — as our objective increases when we shift the line up — while staying in the feasible region. Each of these lines is called an *objective contour*. Looking at the set of all possible values the objective function could take on, we have something similar to the following (I say “something similar” because the picture would just be solid red if we drew all the contours).

From this picture we see that the optimal solution occurs at the top-right extreme point. Since an extreme point is the intersection of two lines, we can simply calculate the intersection point of those two lines to find the precise point where optimality is obtained (these two constraints are called *binding constraints*, since they’re satisfied with equality at that point).

Those lines are and , so optimality occurs at and our objective function takes on the value 4.
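The data for the program above didn’t survive in this copy of the post, so as a stand-in, here is a sketch of the corner-point idea on a hypothetical LP of my own (maximize $x + y$ subject to $2x + y \le 6$, $x + 2y \le 6$, $y \le 2.5$, $x, y \ge 0$ — illustrative, not the example graphed above). It brute-forces every intersection of two constraint lines, keeps the feasible ones (the extreme points), and takes the best objective value among them:

```python
from itertools import combinations

# Hypothetical two-variable LP (not the post's original example):
#   maximize x + y
#   subject to 2x + y <= 6, x + 2y <= 6, y <= 2.5, x >= 0, y >= 0
# Each constraint is stored as (a1, a2, c), meaning a1*x + a2*y <= c.
constraints = [(2.0, 1.0, 6.0),
               (1.0, 2.0, 6.0),
               (0.0, 1.0, 2.5),
               (-1.0, 0.0, 0.0),
               (0.0, -1.0, 0.0)]

def objective(x, y):
    return x + y

def feasible(x, y, tol=1e-9):
    return all(a1 * x + a2 * y <= c + tol for a1, a2, c in constraints)

def intersection(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality
    (Cramer's rule); returns None for parallel lines."""
    (a1, a2, c), (d1, d2, e) = c1, c2
    det = a1 * d2 - a2 * d1
    if abs(det) < 1e-12:
        return None
    return ((c * d2 - a2 * e) / det, (a1 * e - c * d1) / det)

# If an optimum exists, it occurs at an extreme point: a feasible
# intersection of two constraint lines (the binding constraints there).
best_point, best_value = None, float("-inf")
for c1, c2 in combinations(constraints, 2):
    pt = intersection(c1, c2)
    if pt is not None and feasible(*pt):
        value = objective(*pt)
        if value > best_value:
            best_point, best_value = pt, value

print(best_point, best_value)  # optimum at (2.0, 2.0) with value 4.0
```

This exhaustive pairing is only practical for tiny two-variable examples; the point of the algebraic machinery in the coming posts is to walk between extreme points far more efficiently.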

This geometric argument is nice and intuitive, but falls apart if we have more than three variables (since we’re then in at least four-dimensional space, where such pictures can’t be drawn). In the next post we’ll see how to characterize the extreme points of the feasible region algebraically. Later we’ll see how to traverse the extreme points of the feasible region, using that algebraic description, in such a way that we always move to a “closer to optimal” extreme point, until we finally get there (or determine that no optimal point exists).
