Let’s suppose that $A$ is an $n \times n$ matrix which is similar to a diagonal matrix, $D$. This means there is an invertible (change-of-basis) matrix $P$ such that

$$A = P D P^{-1}.$$
Now since $P$ is a change-of-basis matrix, each of its columns gives the coordinates of a basis vector of some basis. Let’s call that basis $\mathcal{B}$ and let $v_1$ through $v_n$ be the elements of that basis. If we take the above equation and multiply by $P$ on the right, notice that

$$A P = P D P^{-1} P = P D.$$
That is, the $i$-th column of $AP$ is equal to the $i$-th column of $PD$, which is just $\lambda_i$ (the $i$-th diagonal entry of $D$) times the $i$-th column of $P$: each column of $PD$ is a linear combination of the columns of $P$, and since $D$ is diagonal, only one coefficient survives. The $i$-th column of $AP$, though, is just $A$ applied to the $i$-th column of $P$, so we have

$$A v_i = \lambda_i v_i.$$
This means that when we plug the $i$-th column of $P$ into the linear transformation represented by $A$, we get back a multiple of that column. Calling the linear transformation $T$, we have that

$$T(v_i) = \lambda_i v_i.$$
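For a concrete check (the specific numbers here are an illustrative choice, not anything fixed by the argument above), take

$$P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \qquad A = P D P^{-1} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}.$$

Then the columns $v_1 = (1, 0)^T$ and $v_2 = (1, 1)^T$ of $P$ satisfy

$$A v_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix} = 2 v_1, \qquad A v_2 = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3 v_2,$$

exactly as the column-by-column reading of $AP = PD$ predicts.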
Vectors such as $v_i$, whose image under $T$ is just a multiple of the vector, are called eigenvectors of $T$. That multiple, the $\lambda_i$ above, is called an eigenvalue of $T$. These eigenvectors and eigenvalues are associated with a particular linear transformation, so when we talk about the eigenvectors and eigenvalues of a matrix, we really mean the eigenvectors and eigenvalues of the transformation represented by that matrix. Notice that this means that eigenvalues are independent of the chosen basis; since similar matrices represent the same transformation, just with respect to different bases, similar matrices have the same eigenvalues.
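To spell that last claim out (a one-line check using only the definitions above): if $B = P^{-1} A P$ is similar to $A$, and $A v = \lambda v$ with $v \neq 0$, then

$$B (P^{-1} v) = P^{-1} A P P^{-1} v = P^{-1} A v = \lambda (P^{-1} v),$$

so $\lambda$ is also an eigenvalue of $B$, with eigenvector $P^{-1} v$ (the same vector, written in the other basis).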
We assumed that $A$ was similar to a diagonal matrix above, but this isn’t always true. If $A$ is similar to a diagonal matrix, say $A = P D P^{-1}$, then as we’ve just shown, the columns of $P$ are eigenvectors of $A$. Since these form the columns of a non-singular matrix, the eigenvectors of $A$ form a basis for the vector space. Conversely, if the eigenvectors of $A$ form a basis, we can take those basis vectors as the columns of $P$ and run the same argument in reverse, as sketched below.
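Explicitly, writing $P = (v_1 \mid \cdots \mid v_n)$ where $A v_i = \lambda_i v_i$, computing $AP$ column by column gives

$$A P = (\lambda_1 v_1 \mid \cdots \mid \lambda_n v_n) = P D, \qquad D = \operatorname{diag}(\lambda_1, \dots, \lambda_n),$$

and since the $v_i$ form a basis, $P$ is invertible, so $P^{-1} A P = D$.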
So a matrix is diagonalizable (similar to a diagonal matrix) if and only if its eigenvectors form a basis for the vector space.
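To see how this can fail, here is a standard example (not one from the discussion above): the shear matrix

$$N = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

sends $(x, y)^T$ to $(x + y, y)^T$, and $N v = \lambda v$ forces $\lambda = 1$ and $y = 0$. Its only eigenvectors are the nonzero multiples of $(1, 0)^T$, which do not span $\mathbb{R}^2$, so $N$ is not diagonalizable.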