Mathematics Prelims

June 3, 2009

An Introduction to Linear Programming, Part I

Filed under: Linear Programming — cjohnson @ 8:42 pm

In a first semester calculus class you learn how to optimize a differentiable function by looking at what happens where the derivative is zero (and at the endpoints if you’re concerned with a closed interval).  The idea is that what goes up must come down: if the function goes up and then comes back down, a maximum must occur somewhere in between (and similarly for a minimization problem), so you look for the precise point where “up” turns to “down.”  What I want to talk about in the next few posts is how to maximize (or minimize) a linear function in the presence of linear constraints.  (Note that as it’s taught in calculus, such optimization problems have no additional constraints.)

That is, we’ll look at functions of the form c^T x where c and x are vectors in \mathbb{R}^n.  The expression c^T x is the dot product of these two vectors, and is just a convenient way of writing c_1 x_1 + c_2 x_2 + ... + c_n x_n.  So the x_i are our variables and the c_i are constants normally referred to as cost coefficients.

An example of such a function is the cost of, say, building a house.  Let’s suppose x_1 is the amount of lumber used, which costs $2 per pound (I have no idea how much lumber costs, so you’ll have to excuse me if that’s a completely ridiculous value); x_2 is the amount of brick, which we’ll assume costs $3 per pound; and x_3 is the amount of roofing material, at $5 per pound.  Here our cost for building the house would be 2x_1 + 3x_2 + 5x_3, and so we could just say c = [2,\, 3,\,5]^T and x = [x_1,\, x_2,\, x_3]^T.  This function, c^T x, we will refer to as our objective function, and our goal will be to make it as small (or large) as possible given certain restrictions (linear constraints).
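Just to make that concrete, here’s a quick sketch in Python (the amounts of material below are made up purely for illustration) showing that evaluating c^T x is nothing more than a dot product:

```python
import numpy as np

# Cost per pound of lumber, brick, and roofing from the example above.
c = np.array([2.0, 3.0, 5.0])

# A made-up choice of amounts, in pounds (not a solution to anything yet).
x = np.array([1000.0, 500.0, 100.0])

# c^T x = 2*1000 + 3*500 + 5*100 = 4000
cost = c @ x
print(cost)  # 4000.0
```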

Now, by a linear constraint we mean that there are limits on the resources we use, and we can express these limits in the same way we expressed the cost.  In our house building example, for instance, we may not be able to use any more than 2000lb of brick, so we’d have a constraint x_2 \leq 2000.  Similarly, let’s assume we can use no more than 1500lb of lumber and 300lb of roofing materials: x_1 \leq 1500, and x_3 \leq 300.  If we further assume that our house must weigh at least 2100lb (for whatever reason) then we’d have the constraint x_1 + x_2 + x_3 \geq 2100.  Just to make things more complicated, let’s assume we have to use at least 50lb of roofing materials: x_3 \geq 50.  (I know this is a goofy example with silly constraints, but it still serves the purpose of showing us what a linear programming problem is.  The last two constraints are there only so we don’t have a trivial solution.)

Now our linear program is simply the following:

Minimize 2x_1 + 3x_2 + 5x_3

Subject to the following:

x_1 \leq 1500

x_2 \leq 2000

x_3 \leq 300

x_1 + x_2 + x_3 \geq 2100

x_3 \geq 50

x_1, x_2, x_3 \geq 0

This last constraint simply says that we can’t use a negative amount of lumber, brick, or roofing material.

So, our problem is that we want to minimize the cost of building a house when we’re faced with certain restrictions on the amounts of material we can use.  (Now you should see why the last two constraints are needed to prevent a trivial solution: if you want to minimize the cost and there’s nothing telling you that you can’t, just use zero brick, zero lumber, and zero roofing!)
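To see that parenthetical in action, here’s a small sketch (the candidate points are my own, chosen just for illustration) that checks a couple of candidates against the constraints above and prints their costs:

```python
def is_feasible(x):
    """Check the house-building constraints exactly as written above."""
    x1, x2, x3 = x
    return (x1 <= 1500 and x2 <= 2000 and x3 <= 300
            and x1 + x2 + x3 >= 2100 and x3 >= 50
            and x1 >= 0 and x2 >= 0 and x3 >= 0)

def cost(x):
    return 2 * x[0] + 3 * x[1] + 5 * x[2]

# The "trivial" point: build nothing at all.  It costs $0, but it violates
# the requirement that the house weigh at least 2100lb.
print(is_feasible((0, 0, 0)), cost((0, 0, 0)))              # False 0

# A feasible point: max out the cheap lumber, use the minimum roofing,
# and make up the remaining weight with brick.
print(is_feasible((1500, 550, 50)), cost((1500, 550, 50)))  # True 4900
```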

It is generally convenient to rewrite this set of constraints in matrix form.  In our particular problem we could set up the matrix

A = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & -1 \\ 0 & 0 & -1 \end{array} \right]

Then if we let b = [1500,\, 2000,\, 300,\, -2100,\, -50]^T, our constraints can be expressed simply as

Ax \leq b

x\geq 0

where we adopt the convention that if \alpha and \beta are vectors of the same size, \alpha \leq \beta means that each component of \alpha is less than or equal to the corresponding component of \beta: \alpha_i \leq \beta_i for all i.  (Notice that in order to write our problem like this we had to multiply the last two inequalities by -1 so that we’d have \leq instead of the \geq we started with.)
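Here’s a short sketch of that convention in Python, reusing the feasible point from earlier just as a check: A @ x gives a vector, and the comparison A @ x <= b is made component by component, exactly as described.

```python
import numpy as np

# The constraint matrix and right-hand side from above.  The last two
# rows/entries are negated so that every constraint reads "<=".
A = np.array([[ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0,  1],
              [-1, -1, -1],
              [ 0,  0, -1]], dtype=float)
b = np.array([1500, 2000, 300, -2100, -50], dtype=float)

# The feasible point used earlier.
x = np.array([1500, 550, 50], dtype=float)

print(A @ x)         # [ 1500.   550.    50. -2100.   -50.]
print(A @ x <= b)    # componentwise: [ True  True  True  True  True]
print(np.all(A @ x <= b) and np.all(x >= 0))  # True: x satisfies Ax <= b and x >= 0
```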

This is all linear programming means: maximize (or minimize) the function c^T x under the condition that Ax \leq b and x \geq 0.  The term linear programming, by the way, is due to George Dantzig who studied such problems while trying to optimize schedules for the US Air Force in the 1940s.  The Air Force referred to their schedules as “programs,” and so such problems came to be known as linear programs.
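For completeness, here’s a sketch that hands the problem, in the standard Ax \leq b, x \geq 0 form, to SciPy’s linprog routine (assuming SciPy is available; this is just an off-the-shelf solver, not the algorithm we’ll develop in later posts):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0, 5.0])               # cost coefficients
A = np.array([[ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0,  1],
              [-1, -1, -1],
              [ 0,  0, -1]], dtype=float)   # constraints, all written as <=
b = np.array([1500, 2000, 300, -2100, -50], dtype=float)

# linprog minimizes c^T x subject to A_ub @ x <= b_ub; the default variable
# bounds already enforce x >= 0.
result = linprog(c, A_ub=A, b_ub=b)
print(result.x)    # [1500.  550.   50.] -- all the cheap lumber, minimum roofing
print(result.fun)  # 4900.0              -- the minimum building cost
```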

In the next post we’ll discuss slack and surplus variables which allow us to turn the inequalities above into equalities, which will be necessary for us to solve linear programs algorithmically.
