In a first semester calculus class you learn how to optimize a differentiable function by looking at what happens where the derivative is zero (and at the endpoints, if you’re working on a closed interval). The idea is that what comes up must come down: if the function goes up and then goes down, a maximum must occur somewhere in between (and similarly for a minimization problem), so you look for the precise point where “up” turns to “down.” What I want to talk about in the next few posts is how to maximize (or minimize) a linear function in the presence of linear constraints. (Note that as it’s taught in calculus, such optimization problems have no additional constraints.)

That is, we’ll look at functions of the form $f(x) = c \cdot x$, where $c$ and $x$ are vectors in $\mathbb{R}^n$. The $c \cdot x$ is the dot product of these two vectors, and is just a convenient way of writing $c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$. So the $x_i$ are our variables and the $c_i$ are constants normally referred to as *cost coefficients*.

An example of such a function is the cost of, say, building a house. Let’s suppose $x_1$ is the amount of lumber used, which costs \$2 per pound (I have no idea how much lumber costs, so you’ll have to excuse me if that’s a completely ridiculous value); $x_2$ is the amount of brick, which we’ll assume is \$3 per pound; and let’s say $x_3$ is the amount of roofing we use and it costs \$5 per pound. Here our cost for building the house would be $2x_1 + 3x_2 + 5x_3$, and so we could just say $c = (2, 3, 5)$ and $f(x) = c \cdot x = 2x_1 + 3x_2 + 5x_3$. This function, $f$, we will refer to as our *objective function*, and our goal will be to make this as small (or large) as possible given certain restrictions (linear constraints).
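As a quick illustration (not from the original post), evaluating the objective function really is just a dot product. Here’s a minimal sketch in Python, assuming NumPy is available and using made-up amounts of each material:

```python
import numpy as np

# Cost coefficients: $2/lb lumber, $3/lb brick, $5/lb roofing.
c = np.array([2.0, 3.0, 5.0])

# Hypothetical amounts (in pounds) of lumber, brick, and roofing.
x = np.array([1000.0, 800.0, 300.0])

# The objective function f(x) = c . x = 2*1000 + 3*800 + 5*300.
cost = c @ x
print(cost)  # 5900.0
```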

Now, by a *linear constraint* we mean a limit on the resources we use, expressed in the same way we expressed the cost. In our house building example, for instance, we may not be able to use any more than 2000lb of brick, so we’d have a constraint $x_2 \leq 2000$. Similarly let’s assume we can use at most 1500lb of lumber and 300lb of roofing materials: $x_1 \leq 1500$ and $x_3 \leq 300$. If we further assume that our house must weigh at least 2100lb (for whatever reason), then we’d have the constraint $x_1 + x_2 + x_3 \geq 2100$. Just to make things more complicated, let’s assume we have to use at least 50lb of roofing materials: $x_3 \geq 50$. (I know this is a goofy example with silly constraints, but it still serves the purpose of showing us what a linear programming problem is. The last two constraints are there only so we don’t have a trivial solution.)

Now our *linear program* is simply the following:

Minimize $2x_1 + 3x_2 + 5x_3$

Subject to the following:

$$\begin{aligned}
x_1 &\leq 1500 \\
x_2 &\leq 2000 \\
x_3 &\leq 300 \\
x_1 + x_2 + x_3 &\geq 2100 \\
x_3 &\geq 50 \\
x_1, x_2, x_3 &\geq 0
\end{aligned}$$

This last constraint simply says that we can’t have a negative amount of lumber, brick, or roofing.

So, our problem is that we want to minimize the cost of building a house when we’re faced with certain restrictions on the amounts of material we can use. (Now you should see why the last two constraints are needed to prevent a trivial solution: if you want to minimize the cost and nothing says you can’t, just use zero brick, zero lumber, and zero roofing!)
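We won’t solve anything by hand until a later post, but it may help to see this exact problem handed to a computer. The sketch below uses SciPy’s `linprog` (an assumption on my part; any LP solver would do), which expects every constraint in the form $Ax \leq b$, so the two $\geq$ constraints are multiplied by $-1$:

```python
from scipy.optimize import linprog

# Minimize 2*x1 + 3*x2 + 5*x3 (per-pound costs of lumber, brick, roofing).
c = [2, 3, 5]

# Upper-bound constraints in A_ub @ x <= b_ub form; the two >= constraints
# (total weight >= 2100, roofing >= 50) are multiplied by -1 to flip them.
A_ub = [[ 1,  0,  0],   # x1 <= 1500
        [ 0,  1,  0],   # x2 <= 2000
        [ 0,  0,  1],   # x3 <= 300
        [-1, -1, -1],   # x1 + x2 + x3 >= 2100
        [ 0,  0, -1]]   # x3 >= 50
b_ub = [1500, 2000, 300, -2100, -50]

# bounds gives x1, x2, x3 >= 0 with no upper limit beyond the rows above.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # optimal amounts and the minimum cost
```

The solver confirms the intuition: load up on the cheapest material, lumber, to its 1500lb cap, use only the required 50lb of expensive roofing, and cover the remaining 550lb of the weight requirement with brick, for a total cost of \$4900.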

It is generally convenient to rewrite this set of constraints as a matrix. In our particular problem we could set up the matrix

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & -1 \\ 0 & 0 & -1 \end{pmatrix}$$

Then if we let $x = (x_1, x_2, x_3)^T$ and $b = (1500, 2000, 300, -2100, -50)^T$, our constraints could be expressed simply as

$$Ax \leq b$$

Where we will adopt the convention that if $u$ and $v$ are vectors of the same size, $u \leq v$ means that each component of $u$ is less than or equal to the corresponding component of $v$; $u_i \leq v_i$ for all $i$. (Notice that in order to write our problem like this we had to multiply the last two inequalities by $-1$ so that we’d have $\leq$ instead of the $\geq$ we started with.)
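This componentwise convention is exactly what elementwise comparison gives you in NumPy. As a small sketch (the matrix and right-hand side here match the house example above, with the last two rows already negated, and the test points are made up):

```python
import numpy as np

# Constraint matrix and right-hand side, with the last two inequalities
# already multiplied by -1 so that everything reads A @ x <= b.
A = np.array([[ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0,  1],
              [-1, -1, -1],
              [ 0,  0, -1]])
b = np.array([1500, 2000, 300, -2100, -50])

def feasible(x):
    # u <= v for vectors means u_i <= v_i in every component, so we check
    # that *all* rows of A @ x are within b, and that x is nonnegative.
    x = np.asarray(x)
    return bool(np.all(A @ x <= b)) and bool(np.all(x >= 0))

print(feasible([1000, 900, 250]))   # True: satisfies every constraint
print(feasible([2000, 0, 100]))     # False: uses too much lumber
```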

This is all linear programming means: maximize (or minimize) the function $f(x) = c \cdot x$ under the conditions that $Ax \leq b$ and $x \geq 0$. The term *linear programming*, by the way, is due to George Dantzig, who studied such problems while trying to optimize schedules for the US Air Force in the 1940s. The Air Force referred to their schedules as “programs,” and so such problems came to be known as *linear programs*.

In the next post we’ll discuss *slack* and *surplus* variables which allow us to turn the inequalities above into equalities, which will be necessary for us to solve linear programs algorithmically.
