Mathematics Prelims

June 13, 2009

Determinants

Filed under: Algebra, Linear Algebra — cjohnson @ 9:10 pm

I remember that when I took linear algebra, I learned determinants in a very “algorithmic” sort of way; a determinant to me was a function defined on square matrices by a particular recursive procedure.  In Matrices and Linear Transformations, however, Charles Cullen defines the determinant not by a rule, but by two properties which completely characterize it.  A determinant, according to Cullen, is a function \mathcal{F}_{n \times n} \to \mathcal{F} which satisfies the following.

  1. For any two n \times n matrices A and B, \det(AB) = \det(A) \det(B).
  2. The determinant of \text{diag}(k, 1, 1, ..., 1) is k.
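
Just as a quick sanity check (this isn’t part of Cullen’s development), here’s a small numpy sketch confirming that the familiar determinant satisfies both properties; the matrix size, the value of k, and the random seed are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 7.0

# Property 1: det(AB) = det(A) det(B) for a pair of random square matrices.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
print(np.allclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

# Property 2: det(diag(k, 1, ..., 1)) = k.
D = np.diag([k] + [1.0] * (n - 1))
print(np.isclose(np.linalg.det(D), k))  # True
```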

That’s it.  That’s all you need to define the determinant.  Of course, now we have to worry about whether a function with these properties even exists, and if so, whether it is unique.  Before answering either of those questions, though, we need to establish that every n \times n matrix can be written as a product of elementary matrices (where by an elementary matrix we mean one which performs an elementary row (or column) operation when multiplied on the left (or right), including matrices that zero out a row or column).  To see this, recall that for every matrix A there exist non-singular matrices P and Q such that

\displaystyle PAQ = \left[ \begin{array}{cc} I_r & 0 \\ 0 & 0 \end{array} \right]

where r = \text{rank}(A).  In the above, P represents a sequence of elementary row operations and Q a sequence of elementary column operations; that is, P and Q are products of elementary matrices.  Since these are non-singular, we have

\displaystyle A = P^{-1} \left[ \begin{array}{cc} I_r & 0 \\ 0 & 0 \end{array} \right] Q^{-1}

The middle matrix is clearly a product of matrices of the form \text{diag}(1, ..., 1, 0, 1, ..., 1).  If we agree to also call such matrices elementary, then every square matrix is a product of elementary matrices.
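
To make the factorization concrete, here’s a tiny numpy example of my own (not from Cullen) using the rank-one matrix A = [[1, 2], [2, 4]]: subtracting twice row one from row two, and twice column one from column two, reduces A to \text{diag}(1, 0), so A = P^{-1} \, \text{diag}(1,0) \, Q^{-1} is a product of elementary matrices.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1

P = np.array([[ 1.0, 0.0],          # row operation: R2 <- R2 - 2*R1
              [-2.0, 1.0]])
Q = np.array([[1.0, -2.0],          # column operation: C2 <- C2 - 2*C1
              [0.0,  1.0]])

target = np.array([[1.0, 0.0],      # [[I_r, 0], [0, 0]] with r = 1
                   [0.0, 0.0]])

print(np.allclose(P @ A @ Q, target))                                # True
print(np.allclose(A, np.linalg.inv(P) @ target @ np.linalg.inv(Q)))  # True
```

Here P and Q each perform a single elementary operation, so their inverses are elementary as well, and the middle factor is one of the singular “elementary” matrices we just agreed to allow.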

Supposing we have a \det function satisfying the properties given above, we can calculate the determinant of an elementary matrix pretty easily.  In the following, I_{(kR_i)} denotes the identity matrix with its i-th row multiplied by k; I_{(R_i \leftrightarrow R_j)} the identity with rows i and j swapped; and I_{(kR_i + R_j)} the identity with k times row i added to row j.

First we’ll look at scalar multiples of a particular row:

\displaystyle \det\left( I_{(kR_i)} \right)

\displaystyle \, = \det\left( I_{(R_i \leftrightarrow R_1)} \, I_{(kR_1)} \, I_{(R_i \leftrightarrow R_1)} \right)

\displaystyle \, = \det \left( I_{(R_i \leftrightarrow R_1)} \right) \, \det \left( I_{(kR_1)} \right) \, \det \left( I_{(R_i \leftrightarrow R_1)} \right)

\displaystyle \, = \det \left( I_{(kR_1)} \right) \, \det \left( I_{(R_i \leftrightarrow R_1)} \right) \, \det \left( I_{(R_i \leftrightarrow R_1)} \right) (these are scalars in a field, so they commute)

\displaystyle \, = \det \left( I_{(kR_1)} \right) \, \det \left( I_{(R_i \leftrightarrow R_1)} \, I_{(R_i \leftrightarrow R_1)} \right)

\displaystyle \, = \det \left( I_{(kR_1)} \right) \, \det \left( I \right)

\displaystyle \, = \det \left( I_{(kR_1)} \right) (since I = \text{diag}(1, 1, ..., 1), the second property with k = 1 gives \det(I) = 1)

\displaystyle \, = k
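
Here’s a short numpy check of the conjugation identity used above; the size n, the row index, and the value of k are arbitrary, and the helper functions scale and swap are just my own shorthand (numpy indexes rows from 0, so “row 1” in the derivation is index 0 below).

```python
import numpy as np

n, i, k = 4, 2, 5.0        # "row i" below is index 2, i.e. the third row

def scale(n, i, k):
    """Identity matrix with row i multiplied by k."""
    E = np.eye(n)
    E[i, i] = k
    return E

def swap(n, i, j):
    """Identity matrix with rows i and j swapped."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

S = swap(n, i, 0)
# The identity used in the derivation: I_(kRi) = I_(Ri<->R1) I_(kR1) I_(Ri<->R1)
print(np.allclose(scale(n, i, k), S @ scale(n, 0, k) @ S))   # True
# And its determinant is indeed k.
print(np.isclose(np.linalg.det(scale(n, i, k)), k))          # True
```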

Using similar identities we can show \det \left( I_{(R_i \leftrightarrow R_j)} \right) = -1 and \det \left( I_{(kR_i + R_j)} \right) = 1.  (Note also that the calculation above works for k = 0, so the singular elementary matrices \text{diag}(1, ..., 1, 0, 1, ..., 1) have determinant zero.)  Now we know the determinants of all elementary matrices.  Since every square matrix is a product of elementary matrices, and we can split the determinant up across a product of matrices, we can calculate the determinant of any square matrix based solely on the two properties given earlier.  (Note this isn’t necessarily an efficient way to calculate the determinant, just a possible way.)
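
To illustrate that last remark, here’s a sketch of my own (not Cullen’s, and numerically naive) that computes a determinant purely by tracking elementary row operations during Gaussian elimination: writing the matrix as a product of elementary matrices, each swap contributes -1, each scaling by k contributes k, and each row addition contributes 1.  The function name det_via_elementary_ops is just mine.

```python
import numpy as np

def det_via_elementary_ops(A):
    """Reduce A to the identity by elementary row operations.  Writing A as
    the product of the inverses of those operations, det(A) is the product
    of their determinants: k for a row scaling, -1 for a swap, and 1 for
    adding a multiple of one row to another."""
    M = np.array(A, dtype=float)
    n = M.shape[0]
    det = 1.0
    for col in range(n):
        # No usable pivot in this column means M is singular, so det(A) = 0.
        pivot = next((r for r in range(col, n) if not np.isclose(M[r, col], 0.0)), None)
        if pivot is None:
            return 0.0
        if pivot != col:
            M[[col, pivot]] = M[[pivot, col]]   # row swap: factor of -1
            det *= -1.0
        det *= M[col, col]                      # scaling the row by 1/pivot: factor of pivot
        M[col] /= M[col, col]
        for r in range(n):                      # row additions: factor of 1
            if r != col:
                M[r] -= M[r, col] * M[col]
    return det

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
print(np.isclose(det_via_elementary_ops(A), np.linalg.det(A)))   # True
```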

At this point it shouldn’t seem too surprising that the two properties above are all we need to define a determinant: every square matrix is a product of elementary matrices, and we can “massage” elementary matrices into a form whose determinant we can calculate.  In coming posts we’ll see how to use this to derive the Laplace (cofactor) expansion of the determinant; the relationship between determinants and inverses; and Cramer’s rule, which tells us how to compute a single coordinate of the solution vector to a system of equations.


Comments

  1. You could also define the determinant of a map V->V as the induced (scalar) map on the highest exterior powers. Or, an alternating n-tensor whose value on the identity is 1 (n being the dimension of V). I like the former way.

Comment by Zygmund — June 16, 2009 @ 7:40 pm


