Let’s suppose that $A$ is an $n \times n$ matrix which is similar to a diagonal matrix, $D$. This means there is an invertible (change-of-basis) matrix $P$ such that

$$A = P D P^{-1}.$$
Now since $P$ is a change-of-basis matrix, each of its columns gives the coordinates of a basis vector of some basis. Let’s call that basis $B$ and let $\beta_1$ through $\beta_n$ be the elements of that basis. Now, if we take the above equation and multiply by $P$ on the right, notice that

$$A P = P D.$$
That is, the $j$-th column of $AP$ is equal to the $j$-th column of $PD$, which is just $\lambda_j$ times the $j$-th column of $P$, where $\lambda_j$ denotes the $j$-th diagonal entry of $D$. Since the $j$-th column of $AP$ is just $A$ applied to the $j$-th column of $P$, though, we have

$$A \beta_j = \lambda_j \beta_j.$$
This means that when we plug the $j$-th column of $P$ into the linear transformation represented by $A$, we get back a multiple of that column. Calling the linear transformation $T$, we have that

$$T(\beta_j) = \lambda_j \beta_j.$$
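To see this numerically, here is a small sketch (the particular $D$ and $P$ below are made-up examples): we build $A = PDP^{-1}$ from a chosen diagonal $D$ and invertible $P$, and check column by column that $A$ sends each column of $P$ to a multiple of itself.

```python
import numpy as np

D = np.diag([2.0, -1.0])            # chosen eigenvalues 2 and -1
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # an invertible change-of-basis matrix
A = P @ D @ np.linalg.inv(P)        # A is similar to D by construction

for j in range(2):
    beta_j = P[:, j]                # j-th column of P, i.e. beta_j
    # A beta_j should equal lambda_j beta_j, with lambda_j = D[j, j]
    assert np.allclose(A @ beta_j, D[j, j] * beta_j)
```

Since $AP = PD$ by construction, each check succeeds exactly as the column argument above predicts.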

Vectors such as $\beta_j$ whose image under $T$ is just a multiple of the vector are called *eigenvectors* of $T$. That multiple, the $\lambda_j$ above, is called an *eigenvalue* of $T$. These eigenvectors and eigenvalues are associated with a particular linear transformation, so when we talk about the eigenvectors and eigenvalues of a matrix, we really mean the eigenvectors and eigenvalues of the transformation represented by that matrix. Notice that this means that eigenvalues are independent of the chosen basis; since similar matrices represent the same transformation just with respect to different bases, similar matrices have the same eigenvalues.
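A quick numerical check of that last claim, using made-up matrices: conjugating $A$ by any invertible $Q$ produces a similar matrix $B = Q^{-1} A Q$, and the eigenvalues come out the same.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2
Q = np.array([[1.0, 2.0],
              [1.0, 3.0]])          # any invertible matrix
B = np.linalg.inv(Q) @ A @ Q        # B is similar to A

# Similar matrices have identical eigenvalues (sorted for comparison)
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```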

We assumed that $A$ was similar to a diagonal matrix above, but this isn’t always true. If $A$ is similar to a diagonal matrix, say $D = P^{-1} A P$, then as we’ve just shown, the columns of $P$ are eigenvectors of $A$. Since these form the columns of a non-singular matrix, the eigenvectors of $A$ form a basis for the vector space. Conversely, if the eigenvectors of $A$ form a basis, take those basis vectors as the columns of $P$; then $P$ is invertible, $AP = PD$ where $D$ is the diagonal matrix of the corresponding eigenvalues, and so $P^{-1} A P = D$ is diagonal.

So a matrix is *diagonalizable* (similar to a diagonal matrix) if and only if its eigenvectors form a basis for the vector space.
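The criterion can be sketched with NumPy (the matrices below are illustrative): when the matrix of eigenvectors is non-singular, conjugating by it diagonalizes $A$; a shear matrix gives a standard counterexample, since its eigenvectors only span a one-dimensional subspace.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, P = np.linalg.eig(A)          # columns of P are eigenvectors of A

# The eigenvectors form a basis exactly when P is non-singular,
# and then P^{-1} A P is the diagonal matrix of eigenvalues.
assert np.linalg.matrix_rank(P) == A.shape[0]
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(vals))

# Counterexample: a shear has a repeated eigenvalue 1 but only a
# one-dimensional eigenspace, so its eigenvectors do not form a basis.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
_, Ps = np.linalg.eig(S)
assert abs(np.linalg.det(Ps)) < 1e-8   # eigenvector matrix is singular
```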


So can I say from your post that an $n \times n$ matrix $A$ is diagonalizable if and only if there are $n$ linearly independent eigenvectors $\{\beta_1, \cdots, \beta_n\}$ of $A$?

Comment by watchmath — June 25, 2009 @ 9:38 pm

Right. If you have $n$ linearly independent eigenvectors, just take those to be the columns of your matrix $P$. When you multiply out $P^{-1} A P$, you’ll get a diagonal matrix with the eigenvalues of $A$ on the diagonal.

Comment by cjohnson — June 25, 2009 @ 10:16 pm


Now I disagree a little with this formulation:

“So a matrix is diagonalizable (similar to a diagonal matrix) if and only if its eigenvectors form a basis for the vector space”

Since there are infinitely many eigenvectors, you need to say which eigenvectors form a basis, or you can avoid that by saying that there exist eigenvectors that form a basis.

Thanks

Comment by watchmath — June 28, 2009 @ 7:02 am