I remember that when I took linear algebra, I learned determinants in a very “algorithmic” sort of way; a determinant to me was a function defined on square matrices by a particular recursive procedure. In Charles Cullen’s *Matrices and Linear Transformations*, however, he defines a determinant not by a rule, but by two properties which completely characterize it. A determinant, according to Cullen, is a function which satisfies the following.

- For any two $n \times n$ matrices $A$ and $B$, $\det(AB) = \det(A) \det(B)$.
- The determinant of $M_1(c)$, the identity matrix with its first row multiplied by the scalar $c$, is $c$.
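
To see the two properties in action, here is a small sanity check (my own sketch, not from Cullen): the familiar cofactor-expansion determinant, computed in exact rational arithmetic, satisfies both of them on a pair of test matrices. The helper names `det`, `matmul`, and `M1` are mine.

```python
from fractions import Fraction

def det(A):
    # Naive cofactor (Laplace) expansion along the first row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def M1(n, c):
    # Identity matrix with its first row multiplied by c.
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    I[0][0] = Fraction(c)
    return I

A = [[Fraction(x) for x in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 4]]]
B = [[Fraction(x) for x in row] for row in [[1, 0, 2], [0, 1, 1], [3, 0, 1]]]

# Property 1: det(AB) = det(A) det(B)
assert det(matmul(A, B)) == det(A) * det(B)
# Property 2: det(M_1(c)) = c
assert det(M1(3, 5)) == 5
```

Of course this only checks that the usual determinant *is* such a function; the post’s point is that these two properties alone pin it down.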

That’s it. That’s all you need to define the determinant. Of course, now we have to worry about whether a function with these properties even exists, and if so, whether it is unique. Before answering either of those questions, though, we need to establish that every square matrix can be written as a product of elementary matrices (where by an *elementary matrix* we mean one which performs an elementary row (or column) operation when multiplied on the left (or right), *including* zeroing out a row or column). To see this, recall that for every matrix $A$ there exist non-singular matrices $P$ and $Q$ such that

$\displaystyle PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$

where $r = \operatorname{rank}(A)$. In the above, $P$ represents a sequence of elementary row operations and $Q$ a sequence of elementary column operations; that is, $P$ and $Q$ are products of elementary matrices. Since these are non-singular, we have

$\displaystyle A = P^{-1} \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} Q^{-1}.$

The middle matrix is clearly a product of matrices of the form $M_i(0)$, that is, copies of the identity matrix with a single row multiplied by zero. If we agree to also call such matrices elementary, then every square matrix is a product of elementary matrices.
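
As a concrete illustration (a sketch of mine, not from the post), for an *invertible* matrix we can record the elementary row operations that reduce it to the identity; inverting them exhibits the matrix as an explicit product of elementary matrices. Singular matrices would additionally need the $M_i(0)$ factors from the rank normal form above.

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def elementary_factorization(A):
    """Return elementary matrices F_1, ..., F_m with A = F_1 * ... * F_m.

    Gauss-Jordan reduction finds elementary E_1, ..., E_m with
    E_m ... E_1 A = I, so A = E_1^{-1} ... E_m^{-1}, and each inverse
    is itself elementary.  Assumes A is invertible.
    """
    n = len(A)
    A = [row[:] for row in A]
    inverses = []  # E_1^{-1}, E_2^{-1}, ... in order of application

    def apply(E, E_inv):
        nonlocal A
        A = matmul(E, A)
        inverses.append(E_inv)

    for col in range(n):
        # Swap a nonzero pivot into place; a swap is its own inverse.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        if pivot != col:
            E = identity(n)
            E[col], E[pivot] = E[pivot], E[col]
            apply(E, E)
        # Scale the pivot row to 1; M_i(c) has inverse M_i(1/c).
        c = A[col][col]
        E, E_inv = identity(n), identity(n)
        E[col][col], E_inv[col][col] = 1 / c, c
        apply(E, E_inv)
        # Clear the rest of the column; a row addition inverts by negating c.
        for r in range(n):
            if r != col and A[r][col] != 0:
                c = A[r][col]
                E, E_inv = identity(n), identity(n)
                E[r][col], E_inv[r][col] = -c, c
                apply(E, E_inv)

    return inverses

A = [[Fraction(x) for x in row] for row in [[0, 2, 1], [1, 1, 0], [2, 0, 3]]]
factors = elementary_factorization(A)
product = identity(3)
for F in factors:
    product = matmul(product, F)
assert product == A  # A really is this product of elementary matrices
```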

Supposing we have a function $\det$ satisfying the properties given above, we can calculate the determinant of an elementary matrix pretty easily. In the following, $M_i(c)$ means we multiply the $i$-th row of the identity matrix by $c$; $E_{ij}$ means we swap rows $i$ and $j$; $A_{ij}(c)$ means we add $c$ times row $j$ to row $i$.

First we’ll look at scalar multiples of a particular row. Swapping rows $1$ and $i$, scaling the new first row by $c$, and swapping back gives $E_{1i} M_1(c) E_{1i} = M_i(c)$, so

$\displaystyle \det(M_i(c)) = \det(E_{1i}) \det(M_1(c)) \det(E_{1i}) = \det(E_{1i}) \det(E_{1i}) \det(M_1(c))$

(these are scalars in a field, so they commute), and therefore

$\displaystyle \det(M_i(c)) = \det(E_{1i} E_{1i}) \det(M_1(c)) = \det(I) \det(M_1(c)) = 1 \cdot c = c,$

since $E_{1i} E_{1i} = I$ and $\det(I) = \det(M_1(1)) = 1$ by the second property.

Using similar identities we can show $\det(E_{ij}) = -1$ and $\det(A_{ij}(c)) = 1$. Now we know the determinants of all elementary matrices. Since every square matrix is a product of elementary matrices, and we can split the determinant up across a product of matrices, we can calculate the determinant of *any* square matrix based solely on the two properties given earlier. (Note this isn’t necessarily an efficient way to calculate the determinant, just a possible way.)
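
The computation the paragraph describes can be sketched as follows (my code, not the post’s): row-reduce to upper triangular form while keeping track of the determinants of the elementary matrices used, then divide out. The function names are my own, and the fact that a triangular matrix’s determinant is the product of its diagonal entries is itself a consequence of the factorization into $M_i$’s and $A_{ij}$’s.

```python
from fractions import Fraction

def det_by_elimination(A):
    # Reduce A to upper triangular U with elementary row operations,
    # tracking the determinant of each elementary matrix used:
    # det(E_{ij}) = -1 for a swap, det(A_{ij}(c)) = 1 for a row addition.
    # From E_m ... E_1 A = U we get det(A) = det(U) / (det(E_m) ... det(E_1)).
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    scale = Fraction(1)  # product of the det(E_k)'s applied so far
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # no pivot available: A is singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            scale *= -1         # det(E_{ij}) = -1
        for r in range(col + 1, n):
            c = A[r][col] / A[col][col]
            A[r] = [a - c * b for a, b in zip(A[r], A[col])]
            # row additions have determinant 1, so scale is unchanged
    detU = Fraction(1)          # det(U): product of the diagonal entries
    for i in range(n):
        detU *= A[i][i]
    return detU / scale

# Naive cofactor expansion, for comparison.
def det_cofactor(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det_cofactor([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(n))

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
assert det_by_elimination(A) == det_cofactor(A)
assert det_by_elimination([[1, 2], [2, 4]]) == 0  # singular matrix
```

The elimination route runs in $O(n^3)$ field operations rather than the $O(n!)$ of cofactor expansion, which is the sense in which the two-property route is “a possible way” rather than the efficient one.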

At this point it shouldn’t seem too surprising that the two properties above are all we need to define a determinant: every square matrix is a product of elementary matrices, and we can “massage” elementary matrices into a form whose determinant we can calculate. In coming posts we’ll see how to derive from this the Laplace (cofactor) expansion for the determinant; the relationship between determinants and inverses; and Cramer’s rule, which tells us how to compute a single coordinate of the solution vector to a system of equations.

You could also define the determinant of a map $V \to V$ as the induced (scalar) map on the highest exterior power. Or, as an alternating $n$-tensor whose value on the identity is $1$ ($n$ being the dimension of $V$). I like the former way.

Comment by Zygmund — June 16, 2009 @ 7:40 pm

[…] One easy consequence of our definition of determinant from last time is that any singular matrix must have determinant zero. Suppose $A$ is a singular […]

Pingback by Determinants Are Linear in Rows and Columns « Mathematics Prelims — June 16, 2009 @ 8:55 pm