Recall from last time that a linear program is an optimization problem of the form

$$\max \langle c, x \rangle$$

subject to $Ax \le b$ and $x \ge 0$, where the system of constraints $Ax \le b$ is just a convenient way to write something of the form

$$\begin{aligned} a_{1,1}x_1 + a_{1,2}x_2 + \dots + a_{1,n}x_n &\le b_1 \\ &\ \ \vdots \\ a_{m,1}x_1 + a_{m,2}x_2 + \dots + a_{m,n}x_n &\le b_m \end{aligned}$$
Notice that our use of maximization, instead of minimization, is immaterial, as any maximization problem can be turned into an equivalent minimization problem and vice versa. The problem of maximizing $\langle c, x \rangle$ is equivalent to (has precisely the same solutions as) the problem of minimizing $\langle -c, x \rangle$.
Likewise, our use of $\le$ inequalities instead of $\ge$ is more a matter of taste and convenience than necessity: $\langle a, x \rangle \ge b$ is the same thing as $\langle -a, x \rangle \le -b$. What we would like to do, though, is get rid of those inequalities altogether. If we can turn these constraints into a system of equations of the form $Ax = b$, then we'll be able to apply techniques from linear algebra to help us manipulate the LP (linear program). Later we'll see how to cleverly use elementary row operations to traverse the set of potential optimal solutions. But first we have to lose those nasty little inequalities.
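Both tricks are easy to see in action. Here's a quick sketch using SciPy's `linprog` (not part of this series' machinery; the specific numbers are made up): `linprog` minimizes by default, so to maximize $\langle c, x \rangle$ we minimize $\langle -c, x \rangle$ and negate the result.

```python
# A sketch, assuming SciPy is installed. linprog minimizes, so we
# maximize <c, x> by minimizing <-c, x> and negating the optimal value.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])     # maximize x1 + x2
A = np.array([[1.0, 2.0],    # subject to  x1 + 2 x2 <= 4
              [3.0, 1.0]])   #           3 x1 +   x2 <= 6
b = np.array([4.0, 6.0])

# Variable bounds default to x >= 0, matching the LP's non-negativity.
res = linprog(-c, A_ub=A, b_ub=b)
print(res.x, -res.fun)       # the optimal point and the maximized value
```

Any $\ge$ constraint can be fed to the same solver by negating both sides first, exactly as described above.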
So here’s the idea: say we have a constraint $\langle a, x \rangle \le b$. We can turn this into an equation by inserting a non-negative variable, $s$, which equals $b - \langle a, x \rangle$. That is, $s$ takes up the slack left by $\langle a, x \rangle$, and $\langle a, x \rangle \le b$ becomes $\langle a, x \rangle + s = b$. Algebraically, if we set $s = b - \langle a, x \rangle$, we have $s \ge 0$ exactly when the original inequality holds, and that’s why we have equality.
If our inequality goes the other way, say $\langle a, x \rangle \ge b$, we’ll just subtract off the surplus: set $s = \langle a, x \rangle - b$ (note that this means $s \ge 0$) and so $\langle a, x \rangle \ge b$ becomes $\langle a, x \rangle - s = b$. So in the system of inequalities above we can get a system of equations by tacking on a slack, or taking out the surplus, for each inequality. The earlier system then becomes

$$\begin{aligned} a_{1,1}x_1 + \dots + a_{1,n}x_n + s_1 &= b_1 \\ &\ \ \vdots \\ a_{m,1}x_1 + \dots + a_{m,n}x_n + s_m &= b_m \end{aligned}$$
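The bookkeeping is mechanical enough to sketch in a few lines of code (the function name `to_equality_form` and the example numbers are my own, for illustration): each inequality contributes one new column, with a $+1$ for a slack variable and a $-1$ for a surplus variable.

```python
import numpy as np

def to_equality_form(A, b, senses):
    """Convert A x (<= or >=) b, row by row, into A' x' = b.

    senses[i] is '<=' (append a slack, coefficient +1) or
    '>=' (subtract a surplus, coefficient -1) for row i.
    """
    m = A.shape[0]
    extra = np.zeros((m, m))
    for i, sense in enumerate(senses):
        extra[i, i] = 1.0 if sense == '<=' else -1.0
    return np.hstack([A, extra]), b

# Example: x1 + 2 x2 <= 4  and  x1 - x2 >= 1
A = np.array([[1.0, 2.0], [1.0, -1.0]])
b = np.array([4.0, 1.0])
A_eq, b_eq = to_equality_form(A, b, ['<=', '>='])
print(A_eq)   # each row has gained its own slack/surplus column
```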
where $s_i$ is the slack (surplus) we add (subtract) for the $i$-th constraint. Now our matrices will of course have to change a little bit to take these new variables into account. Originally, $A$ was an $m \times n$ matrix, $b$ was an $m$-vector, and $c$ was an $n$-vector. For our system of equations we’ll let $A' = (A \mid I_m)$ and $x' = (x_1, \dots, x_n, s_1, \dots, s_m)$, where $I_m$ is the $m \times m$ identity matrix; note that $b$ doesn’t change. Note too that we append the identity because we have only slack variables; if some of those had been surplus variables, the corresponding entries we append would be negative.
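As a quick numerical check of the augmented system (a sketch with made-up numbers, assuming all constraints are $\le$): take any feasible $x$, compute its slacks, and the stacked vector $x' = (x, s)$ solves $A'x' = b$ exactly.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

A_prime = np.hstack([A, np.eye(2)])   # A' = (A | I_m)

x = np.array([1.0, 1.0])              # a feasible point: A x <= b
s = b - A @ x                         # the slacks take up what's left
x_prime = np.concatenate([x, s])      # x' = (x, s)

print(A_prime @ x_prime - b)          # the zero vector: A' x' = b exactly
```

Note that the slacks came out non-negative precisely because $x$ was feasible for the original inequalities.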
So by throwing in an extra variable for each inequality constraint, we have a system of equations which is amenable to the techniques of linear algebra. If it were to happen that the resulting matrix $A'$ was invertible (which requires it to be square, with an equal number of equations and unknowns), then we could just multiply by the inverse and find the solution. This doesn’t work for our general problem, though, since $A'$ has $n + m$ columns but only $m$ rows, so there will normally be infinitely many possible solutions. Our goal will be to weed out the best solution from those possibilities. In the next post we’ll look at this space of possible solutions.
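The "infinitely many solutions" claim is easy to witness numerically (a sketch using SciPy's `null_space`; the matrix is the made-up example from before, not anything canonical): shifting one solution by any null-space vector of $A'$ yields another solution of $A'x' = b$, though such a shift need not preserve non-negativity.

```python
import numpy as np
from scipy.linalg import null_space

# A' = (A | I_m): 2 equations, 4 unknowns, so a 2-dimensional null space.
A_prime = np.hstack([np.array([[1.0, 2.0],
                               [3.0, 1.0]]), np.eye(2)])
b = np.array([4.0, 6.0])

x0, *_ = np.linalg.lstsq(A_prime, b, rcond=None)  # one particular solution
N = null_space(A_prime)                           # basis for the null space

# x0 plus any combination of null-space vectors still solves A' x' = b.
x1 = x0 + 0.1 * N[:, 0]
print(np.allclose(A_prime @ x1, b))               # True
```

The simplex method, previewed above via elementary row operations, is precisely a disciplined way of moving through this solution set without ever violating non-negativity.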