Mathematics Prelims

June 5, 2009

LP Part 3: The Feasible Region

Filed under: Linear Programming — cjohnson @ 3:18 pm

Suppose we want to consider all of the points in the plane that satisfy the inequality y \leq x + 1.  What does such a set look like?  Well, one way to visualize it is to draw the line y = x + 1 and then take everything on or under the line.  This is an example of a half-space: a “line” cuts the space in two, and we take everything on one side of that “line.”  I write “line” because in higher dimensions this isn’t actually a line.  In three dimensions, for instance, it’s a plane.  In four or more dimensions it’s a hyperplane.  We’ll use the word hyperplane to mean the solution set of a linear equation, regardless of dimension.  Recall that a linear equation is simply one of the form c_1 x_1 + c_2 x_2 + ... + c_n x_n = b; no exponents, no logs or square roots, just constant multiples added up.  And so a half-space is everything on one side of a hyperplane.

In a linear program we tend to have a few of these linear inequalities in the form of our constraints.  When we actually write out Ax \leq b, we see that each row of A corresponds to a particular inequality, so each constraint defines a half-space.  We’re only concerned with points that satisfy all of the constraints, however, so we want our optimal solution to occur within the intersection of these half-spaces.  This intersection of all the half-spaces defined by our constraints is called the feasible region of the program.
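To make this concrete, here is a small sketch (my own illustration, not from the original post) of a membership test: a point is in the feasible region exactly when it satisfies every row of Ax \leq b, i.e., lies in every half-space.  The particular constraints used are made up for the example.

```python
# Illustrative helper (not from the post): check whether a point lies in
# the feasible region, i.e. in the intersection of the half-spaces Ax <= b.

def is_feasible(A, b, x):
    """Return True if every row of A x <= b holds."""
    for row, bound in zip(A, b):
        if sum(a * xi for a, xi in zip(row, x)) > bound:
            return False
    return True

# Example constraints: x1 + x2 <= 4 and -x1 <= 0 (i.e. x1 >= 0)
A = [[1, 1], [-1, 0]]
b = [4, 0]

print(is_feasible(A, b, [1, 2]))   # True: the point satisfies both inequalities
print(is_feasible(A, b, [3, 3]))   # False: it violates x1 + x2 <= 4
```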

As far as the feasible region is concerned, there are four cases we need to be aware of when it comes time to actually solve a linear program: the feasible region is empty; the feasible region is not empty and is bounded; the region is not bounded, but the objective function is; neither the feasible region nor the objective function is bounded.

The first case occurs when our constraints “cancel each other out.”  For instance, suppose we had the constraints 3x_1 + 2x_2 - 5x_3 \geq 3 and 3x_1 + 2x_2 - 5x_3 \leq 2.  The first constraint puts us on one side of a hyperplane, and then the second tries to put us back on the other side.  Since both of these inequalities can’t hold at the same time, there are no points inside our feasible region.  That’s a simple example of an infeasible LP.  Another, slightly more complicated example occurs if our constraints are

\left[ \begin{array}{cc} 1 & -1 \\ -1 & -1 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] \geq \left[ \begin{array}{c} -1 \\ -2 \\ 2 \end{array}\right]

Multiplying this out, we have the three inequalities x_1 - x_2 \geq -1, \; -x_1 - x_2 \geq -2, and x_2 \geq 2, which are graphed below.

An Infeasible Region

In the above picture, areas that satisfy an inequality are shaded in blue, and areas satisfying two inequalities are in a darker blue (the shading is additive).  Notice that no area is any darker blue than the areas satisfying two inequalities; that is, no area satisfies all three inequalities.  Since no point satisfies all three inequalities, our feasible region is empty.
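As a quick sanity check (my own illustration, not part of the original post), we can scan a grid of points against the three inequalities from the matrix example: x_1 - x_2 \geq -1, -x_1 - x_2 \geq -2, and x_2 \geq 2.  A grid scan can’t prove infeasibility by itself, but it agrees with the picture:

```python
# Scan a grid over [-10, 10] x [-10, 10] for points satisfying all three
# inequalities of the matrix example.  (Illustration only: a finite scan
# can suggest, not prove, that the feasible region is empty.)

def satisfies_all(x1, x2):
    return (x1 - x2 >= -1) and (-x1 - x2 >= -2) and (x2 >= 2)

hits = [(i / 10, j / 10)
        for i in range(-100, 101)
        for j in range(-100, 101)
        if satisfies_all(i / 10, j / 10)]

print(hits)  # [] -- no grid point lies in all three half-spaces
```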

We’ll say the feasible region is bounded if we can place a circle (or sphere, in higher dimensions) of finite radius around it, encompassing the entire region.  If the feasible region is bounded, then our objective must be bounded as well.  As we will see later, our objective is minimized or maximized only at “corners” of the feasible region.  Such a corner occurs only when some number of our constraints are satisfied with equality (the particular number depending on the dimension of the problem).  In two dimensions, for instance, corners occur where two lines cross, and we’re only on a line if we satisfy a constraint with equality.  In three dimensions we’d need three planes to intersect.  Since we only concern ourselves with problems that have a finite number of constraints, we only have a finite number of these corners (this number may be huge, though, so simply enumerating all possible corners isn’t a reasonable strategy in general).
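For a tiny problem, though, the brute-force idea can be sketched directly.  The following is my own toy example (the constraints and objective are made up, not from the post): in two dimensions a corner is the intersection of two constraint lines, so we intersect every pair of lines, keep the intersections that satisfy all of the constraints, and compare objective values.

```python
from itertools import combinations

# Toy vertex enumeration in 2-D (illustration only; this does not scale).
# Constraints Ax <= b: x1 <= 2, x2 <= 3, -x1 <= 0, -x2 <= 0
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [2, 3, 0, 0]

def intersect(i, j):
    """Solve the 2x2 system given by constraint rows i and j (Cramer's rule)."""
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:          # parallel lines: no corner here
        return None
    x1 = (b[i] * c2 - a2 * b[j]) / det
    x2 = (a1 * b[j] - b[i] * c1) / det
    return (x1, x2)

corners = []
for i, j in combinations(range(len(A)), 2):
    p = intersect(i, j)
    if p and all(r[0] * p[0] + r[1] * p[1] <= bi + 1e-9
                 for r, bi in zip(A, b)):
        corners.append(p)

best = max(corners, key=lambda p: p[0] + p[1])   # maximize x1 + x2
print(sorted(set(corners)))  # the four corners of the rectangle
print(best)                  # (2.0, 3.0), objective value 5
```

Note the feasibility check uses a small tolerance, since the intersections are computed in floating point.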

If our feasible region is unbounded, however, our objective function may or may not be bounded.  It could be that we can make the objective arbitrarily large by picking points that are farther and farther out, but it could also be that picking huge numbers isn’t particularly helpful.  For example, suppose our only restrictions were x_1, x_2 \geq 0, so we can pick any point in the first quadrant.  If our goal is to maximize x_1 + x_2, then we can make this value as big as we want by picking big enough numbers.  If our goal is to minimize x_1 + x_2, however, then the best we can do is to pick x_1 = x_2 = 0.
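A quick numerical illustration of the two situations (my own check, mirroring the example in the text):

```python
# Over the first quadrant x1, x2 >= 0, the objective x1 + x2 is
# unbounded above but bounded below.  (Own illustration, not from the post.)

def obj(x1, x2):
    return x1 + x2

# Walking out along the ray (t, t) makes the objective as large as we like...
print([obj(t, t) for t in (1, 10, 100, 1000)])  # [2, 20, 200, 2000]

# ...while the minimum over the quadrant is attained at the origin.
print(obj(0, 0))  # 0
```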

For a slightly less trivial example, suppose we want to maximize x_2 - x_1 subject to the constraints x_1 \geq 0, x_2 \geq 0, x_2 \leq 1.  Now our feasible region is an infinite strip, a portion of which is given below.

Unbounded Feasible Region, Bounded Objective

Despite being able to pick very large numbers, our maximum occurs at the point x_1 = 0, x_2 = 1.  If we were to increase x_1 at all, our value would drop since we’re subtracting x_1, and we’ve already made x_2 as large as the constraint x_2 \leq 1 allows.  So we have a bounded objective function, even though our feasible region is unbounded.  We will generally refer to a linear program as being bounded if its objective function is bounded, and unbounded if its objective function is unbounded.  Note that for the LP to be unbounded, the feasible region must be unbounded.  If the LP is bounded, however, the feasible region may be either bounded or unbounded.
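The same point can be checked numerically (my own illustration, using the strip example from the text):

```python
# On the strip x1 >= 0, 0 <= x2 <= 1, the objective x2 - x1 is bounded
# above even though the strip itself is unbounded.

def obj(x1, x2):
    return x2 - x1

# Pushing x1 farther out only lowers the value...
print([obj(x1, 1) for x1 in (0, 1, 10, 1000)])  # [1, 0, -9, -999]

# ...so the maximum over the whole strip is at x1 = 0, x2 = 1.
print(obj(0, 1))  # 1
```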

In the next post we’ll look at the extreme points of the feasible region, which are the “corners” alluded to earlier, and give an argument for why the maximum/minimum must occur at an extreme point.
