Last time we defined the eigenvalues and eigenvectors of a matrix, but didn’t really discuss how to actually calculate them. We said that if it so happened that your matrix was similar to a diagonal matrix, the diagonal entries of that diagonal matrix were the eigenvalues, and the columns of the change-of-basis matrix were the eigenvectors. Now we’re going to discuss how to find the eigenvalues using the matrix’s *characteristic polynomial*.

Notice that if $\lambda$ is an eigenvalue of $A$ with associated eigenvector $v$, we have the following:

$$Av = \lambda v \quad\Longleftrightarrow\quad (A - \lambda I)v = 0.$$

Of course, $v = 0$ satisfies this equation, but that’s a trivial solution. For any other, non-trivial, solution we’d require that $A - \lambda I$ is singular, and so $\det(A - \lambda I) = 0$. Thus if $\lambda$ is an eigenvalue of $A$, we must have $\det(A - \lambda I) = 0$.

Now suppose that $\lambda$ is such that $\det(A - \lambda I) = 0$. Then there is a non-trivial solution to $(A - \lambda I)v = 0$, so $Av = \lambda v$, and $\lambda$ is an eigenvalue. We’ve shown that $\lambda$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I) = 0$. Furthermore, $\det(A - \lambda I)$ is a polynomial in $\lambda$ (this is obvious if $A$ is $1 \times 1$, and by cofactor expansion we can show inductively that this is true for $n \times n$ matrices). This means that with the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$, the problem of finding eigenvalues is reduced to finding the roots of a polynomial.
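This reduction is easy to play with numerically. As a sketch (the $3 \times 3$ matrix here is an illustrative choice of mine, not anything fixed by the discussion above), NumPy can produce the coefficients of $\det(\lambda I - A)$ and then find its roots:

```python
import numpy as np

# An example matrix, assumed purely for illustration.
A = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# highest-degree term first.
coeffs = np.poly(A)

# The eigenvalues of A are exactly the roots of this polynomial.
eigs = np.roots(coeffs)

print(coeffs)
print(eigs)
```

For this particular matrix the polynomial works out to $\lambda^3 - \lambda^2 + \lambda - 1$, whose roots are $1$, $i$, and $-i$; note that `np.roots` finds them numerically, so they come back only up to floating-point error.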

As an example, suppose

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}.$$

Then the characteristic polynomial is

$$\det(A - \lambda I) = \det\begin{pmatrix} 1 - \lambda & 0 & 0 \\ 0 & -\lambda & -1 \\ 0 & 1 & -\lambda \end{pmatrix} = (1 - \lambda)(\lambda^2 + 1).$$

So we see that the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = i$, and $\lambda_3 = -i$ (notice the last two are complex conjugates of one another).
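In practice one rarely forms the characteristic polynomial explicitly; library routines compute the eigenvalues directly. A minimal sketch with NumPy, again using a $3 \times 3$ matrix I’ve assumed for illustration (chosen so that it has one real eigenvalue and a complex-conjugate pair):

```python
import numpy as np

# Example matrix assumed for illustration; it has one real eigenvalue
# and a pair of complex-conjugate eigenvalues.
A = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

# Compute the eigenvalues directly, without building the polynomial.
eigs = np.linalg.eigvals(A)

# Sort by real part, then imaginary part, just to get a stable ordering.
print(np.sort_complex(eigs))
```

Because $A$ is real, its characteristic polynomial has real coefficients, which is exactly why any non-real eigenvalues must occur in conjugate pairs.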

Now, once we’ve found the eigenvalues, the next step is to find the eigenvectors. Since

$$Av = \lambda v \quad\Longleftrightarrow\quad (A - \lambda I)v = 0,$$

what we want is to find the null space of $A - \lambda I$, since this consists of all the vectors that $A - \lambda I$ sends to zero. In our particular example, for $\lambda_1 = 1$,

$$A - I = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & -1 \\ 0 & 1 & -1 \end{pmatrix}.$$

Now we take the reduced row echelon form of this matrix, since it shares the same null space:

$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

This tells us that

$$v_2 = 0, \qquad v_3 = 0,$$

while $v_1$ is free.

So the eigenvectors associated with the eigenvalue $\lambda_1 = 1$ are the non-zero multiples of $(1, 0, 0)^T$. We’d repeat the above process with $\lambda_2 = i$ and $\lambda_3 = -i$ to find the other eigenvectors.
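The null-space step can also be sketched numerically. Rather than row reduction, a common numerical trick is to take the singular value decomposition of $A - \lambda I$: the right singular vectors belonging to (near-)zero singular values span the null space. The matrix and eigenvalue below are assumptions of mine for illustration:

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    """Return a matrix whose columns span the null space of M.

    The right singular vectors of M whose singular values are
    (numerically) zero span the null space of M.
    """
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))        # number of non-zero singular values
    return vh[rank:].conj().T          # remaining rows of vh span the null space

# Example matrix assumed for illustration, with eigenvalue 1.
A = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
lam = 1.0

ns = null_space_basis(A - lam * np.eye(3))
print(ns)  # one column: every eigenvector for lam is a multiple of it
```

For this matrix the null space of $A - I$ is one-dimensional, so `ns` has a single column, a unit-length multiple of $(1, 0, 0)^T$; the SVD normalizes the basis vector, whereas row reduction would typically hand you the same line through a vector with entry $1$ in the free coordinate.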