Mathematics Prelims

June 11, 2009

Column Space and Row Space

Filed under: Algebra,Linear Algebra — cjohnson @ 8:06 pm

Suppose we have a system of m equations in n unknowns.  How can we tell if this system has a solution?  Writing the system as Ax = b, we see that the left-hand side gives us a particular linear combination of the columns of A.  That is, the system Ax = b can be thought of as

\displaystyle x_1 \left[ \begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right] + x_2 \left[ \begin{array}{c} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{array} \right] + ... + x_n \left[\begin{array}{c} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{array} \right] = \left[ \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_m \end{array} \right]

And so we see that the system has a solution if and only if b can be expressed as a linear combination of the columns of A.  In fact, taking the set of all possible b vectors for which Ax = b has a solution, we have a vector space called the column space of A, which we'll normally denote \text{CS}(A).  There are a few ways we can think of the column space:

  1. The set of all linear combinations of the columns of A
  2. The set of all vectors b such that Ax = b has a solution
  3. The set of all Ax for any n \times 1 vector x

These are all equivalent, but sometimes it may be more helpful to think of the column space in one way rather than another.
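To make the second characterization concrete, here is a small numerical check (my own illustration, not part of the original argument) using NumPy: Ax = b is solvable exactly when appending b to A as an extra column does not increase the rank.

import numpy as np

# A made-up 3x2 example: the columns of A span a plane in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

b_in = np.array([2.0, 3.0, 5.0])    # equals 2*(first column) + 3*(second column)
b_out = np.array([2.0, 3.0, 6.0])   # not a combination of the columns

def solvable(A, b):
    # Ax = b has a solution iff rank([A | b]) == rank(A)
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_in))    # True:  b_in is in CS(A)
print(solvable(A, b_out))   # False: b_out is not in CS(A)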

A similar concept is that of the row space, which is the set of all linear combinations of the rows of A and denoted \text{RS}(A).  Notice that \text{RS}(A) = \text{CS}(A^T).

Now suppose that we multiply A by a matrix Q on the right.  How might the column space of AQ differ from the column space of A?  Using the third characterization of the column space above, and assuming Q is n \times p, we have that

\displaystyle \text{CS}(AQ) = \{ (AQ)X : X \in \mathcal{F}_{p \times 1} \}

\displaystyle = \{ A(QX) : X \in \mathcal{F}_{p \times 1} \}

\displaystyle \subseteq \{ AY : Y \in \mathcal{F}_{n \times 1} \} = \text{CS}(A)

So the column space of AQ is contained in the column space of A, and in particular \dim(\text{CS}(AQ)) \leq \dim(\text{CS}(A)).  The above is telling us that linear combinations of the columns of AQ are actually linear combinations of the columns of A.  One way to see this is to realize that each column of AQ is itself a linear combination of the columns of A.  Letting A_{*i} denote the i-th column of A, we can write AQ as follows.

\displaystyle AQ = \left[ \left. \sum_{i=1}^n q_{i,1} A_{*i} \, \right| \, \left. \sum_{i=1}^n q_{i,2} A_{*i} \, \right| \, ... \, \left| \, \sum_{i=1}^n q_{i,p} A_{*i} \right. \right]
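As a quick sanity check of this formula (again my own illustration, with arbitrarily chosen matrices), we can verify numerically that the j-th column of AQ is the combination \sum_{i} q_{i,j} A_{*i}:

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)  # an arbitrary 3x4 matrix
Q = rng.integers(-3, 4, size=(4, 2)).astype(float)  # an arbitrary 4x2 matrix

AQ = A @ Q
for j in range(Q.shape[1]):
    # Combine the columns of A using the entries of the j-th column of Q.
    combo = sum(Q[i, j] * A[:, i] for i in range(A.shape[1]))
    assert np.allclose(AQ[:, j], combo)
print("each column of AQ is a linear combination of the columns of A")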

Now if we take a linear combination of the columns of AQ, we have

\displaystyle c_1 \left( \sum_{i=1}^n q_{i,1} A_{*i} \right) + ... + c_p \left( \sum_{i=1}^n q_{i,p} A_{*i} \right)

\displaystyle = \sum_{j=1}^p c_j \left( \sum_{i=1}^n q_{i,j} A_{*i} \right)

\displaystyle = \sum_{j=1}^p \sum_{i=1}^n c_j q_{i,j} A_{*i}

\displaystyle = \sum_{i=1}^n \left( \sum_{j=1}^p c_j q_{i,j} \right) A_{*i}

Our choice of the c_j values is arbitrary (they can be whatever we'd like them to be), but the q_{i,j} are fixed since they're entries in Q.  This restricts the possible linear combinations of the columns, so \text{CS}(AQ) \subseteq \text{CS}(A).  Repeating the same argument using rows instead of columns, or just applying the above argument with \text{RS}(A) = \text{CS}(A^T), we have that \text{RS}(PA) \subseteq \text{RS}(A).  Notice that if Q is non-singular, then A = (AQ)Q^{-1}, so applying the same containment to the product of AQ and Q^{-1} gives \text{CS}(A) = \text{CS}((AQ)Q^{-1}) \subseteq \text{CS}(AQ).  Combined with the above, we have equality whenever Q (or, for the row space, P) is non-singular.
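Numerically (with matrices I've picked purely for illustration), the containment shows up as a rank inequality, and non-singularity of Q gives equality:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])        # rank 2

Q_sing = np.array([[1.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])   # singular
Q_inv = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])    # non-singular (triangular, ones on the diagonal)

rank = np.linalg.matrix_rank
print(rank(A @ Q_sing) <= rank(A))     # True: dim(CS(AQ)) <= dim(CS(A))
print(rank(A @ Q_inv) == rank(A))      # True: equality for non-singular Q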

Now what we’d like to show is that the dimension of the row space and the dimension of the column space are actually equal for any matrix A.  Simply pick P to be the nonsingular matrix so that PA is A in row-reduced echelon form, and pick Q to be the nonsingular matrix so that AQ is A in column-reduced echelon form.  Notice then

\displaystyle PAQ = \left[ \begin{array}{cc} I_r & 0 \\ 0 & 0 \end{array} \right]

where I_r is the r \times r identity matrix for some r.  Since matrix multiplication is associative, PAQ = (PA)Q.  Multiplying by P puts the matrix into row-reduced echelon form, which moves all the non-zero rows to the top, with each such row having a leading one and the rest of that leading one's column zeroed out.  Multiplying this by Q then puts the matrix into column-reduced form, which moves all the non-zero columns to the left and zeros out the remaining entries in the rows with leading ones.  For this matrix we clearly have \dim(\text{CS}(PAQ)) = \dim(\text{RS}(PAQ)) = r.  We can show that when P and Q are both nonsingular, \dim(\text{CS}(A)) = \dim(\text{CS}(PAQ)), using the rank-nullity theorem, which we've yet to prove.
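We can watch this reduction happen with SymPy's rref (a sketch of my own; the matrix is chosen arbitrarily).  Row-reducing A gives PA for some non-singular P; row-reducing the transpose of the result and transposing back column-reduces it, leaving exactly the block form above.

import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [0, 1, 1]])   # rank 2

R, _ = A.rref()      # R = PA for some non-singular P (row-reduced echelon form)
C, _ = R.T.rref()    # row-reducing R^T column-reduces R
print(C.T)           # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]]): an I_2 block, zeros elsewhere
print(A.rank())      # 2, the size of the identity block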
