The trace, determinant, and characteristic polynomial of a 2x2 matrix all relate to the computation of a matrix's eigenvalues and eigenvectors. These quantities can lead to a solution of a system of differential equations; in general, the computation of eigenvalues and eigenvectors serves many purposes, but in the context of differential equations, eigenvalues and eigenvectors are most often used to find straight-line solutions of linear systems.
We will first review how to find the trace, determinant, and characteristic polynomial of a 2x2 matrix. We will then use this review to relate the characteristic polynomial to the computation of eigenvalues and eigenvectors.
The trace of an nxn matrix A is the sum of the diagonal entries `A_11, A_22,...,A_(n n)`. So `"tr"(A)=sum A_(ii)`.
For the 2x2 matrix
`A = [[A_11 , A_12], [A_21 , A_22]]`,
the trace is given by `A_11 +A_22`.
As previously stated, the trace of a matrix is useful in determining the eigenvalues (`λ_i`) of the matrix. For any matrix, `sum λ_i= sum A_(ii) ="tr"(A)`. It is also a component used when determining the characteristic polynomial of a given matrix.
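As a quick numerical illustration of the fact that the trace equals the sum of the eigenvalues, here is a short sketch using NumPy (neither Python nor NumPy appears in the text; the matrix below is an arbitrary example of our own):

```python
# Illustrative check: tr(A) = sum of eigenvalues.
# The matrix here is an arbitrary example, not one from the text.
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])

trace = A[0, 0] + A[1, 1]           # sum of the diagonal entries
eigenvalues = np.linalg.eigvals(A)  # numerically computed eigenvalues

print(trace)                                      # 6.0
print(np.isclose(eigenvalues.sum().real, trace))  # True
```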
The determinant of a 2x2 matrix
`A = ((a, b), (c, d))`
is the number `ad - bc`. It is denoted det A.
It is important to note that matrices whose determinants are equal to zero are referred to as singular or degenerate matrices. Matrices whose determinants are not zero are referred to as non-singular. Given a matrix A where det A `!=` 0, the only equilibrium point for the linear system `frac(d bb "Y")(dt)` = `bb "AY"` is the origin.
Blanchard, Devaney, and Hall give the following example (p. 242):
`A = ((2, 1), (-4, 0.3))`
det A = (2)(0.3)-(1)(-4)= 4.6. Since det A `!=` 0, the only equilibrium point for this system is the origin (0, 0).
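The determinant computation above can be sketched in a few lines of code (Python is our own addition here, not part of the text):

```python
# Sketch of the determinant computation from the example above
# (Blanchard, Devaney, and Hall, p. 242).
A = [[2.0, 1.0],
     [-4.0, 0.3]]

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # ad - bc
print(det_A)       # ≈ 4.6
print(det_A != 0)  # True: A is nonsingular, so the origin is the only equilibrium
```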
The characteristic polynomial of an nxn matrix `A` is a polynomial whose roots are the eigenvalues of the matrix `A`. It is defined as `det(A-λI)`, where `I` is the identity matrix. The coefficients of the polynomial are determined by the trace and determinant of the matrix.
For a 2x2 matrix, the characteristic polynomial is `λ^2-("trace")λ+("determinant")`, and the eigenvalues `λ_(1,2)` are its roots, which we can find with the quadratic formula (reviewed below).
To find eigenvalues, we start from the defining equation:
`A vec(v) = lambda vec (v)`
where `A = ((a,b), (c,d))` and `vec(v)= ((x),(y))`
`((a,b), (c,d))((x),(y))= lambda ((x),(y))`, which can be written in components as
`ax + by = lambda x`
`cx + dy = lambda y`
We want a nonzero solution, so we move all terms to one side and rewrite the system as
`(a- lambda)x + by=0`
`cx + (d-lambda)y =0`
Recall that when a matrix has nonzero determinant, the only solution of the corresponding homogeneous system is (0, 0), just as the origin is the only equilibrium point of the linear system when det A `!=` 0. Since we are looking for a nonzero solution (x, y), the coefficient matrix of the system above must be singular, so we set its determinant equal to zero.
`det ((a-lambda,b), (c, d-lambda))= 0`
Every time we compute eigenvalues and eigenvectors we use this format, which can also be written as `det(A - lambda I) =0`, where I is the identity matrix `I=((1, 0), (0, 1))`. Computing `det(A - lambda I) =0` leads to the characteristic polynomial, whose roots are the eigenvalues of the matrix A.
`det(A - lambda I)=det ((a-lambda, b), (c, d-lambda)) = (a-lambda)(d-lambda)-bc=0`, which expands to the quadratic polynomial
`lambda^(2) - (a+d)lambda +(ad-bc)=0.`
It is also useful to look at how the characteristic polynomial relates to the trace and determinant of a matrix.
Given a matrix `A= ((a, b), (c, d))` with characteristic polynomial `lambda^2-(a+d)lambda +(ad-bc)`, we see that the characteristic polynomial is equivalent to `lambda^(2) -T lambda + D =0`, where T = trace = `a+d` and D = determinant = `a*d-b*c`.
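The coefficients T and D can be packaged into a tiny helper function; this is a sketch of our own (the function name is not from the text), checked against the matrix `((2, 2),(1,3))` that appears in a later example:

```python
# Minimal sketch: build the coefficients of the characteristic
# polynomial lambda^2 - T*lambda + D from a 2x2 matrix's entries.
def char_poly_coeffs(a, b, c, d):
    """Return (1, -T, D): coefficients of lambda^2 - T*lambda + D."""
    T = a + d          # trace
    D = a * d - b * c  # determinant
    return (1, -T, D)

# For the matrix ((2,2),(1,3)): T = 5, D = 4.
print(char_poly_coeffs(2, 2, 1, 3))  # (1, -5, 4)
```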
The characteristic polynomial always has two roots. These roots can be real or complex, and they do not have to be distinct. If the roots are complex we say that the matrix has complex eigenvalues. Otherwise, we say that the matrix has real eigenvalues.
You may come across a system where the two roots of the characteristic polynomial are the same real-valued number. Do not fret; we will review examples of cases like this, along with other examples, as we go along.
Another important thing to review is the quadratic formula, a formula that is so useful, but so often forgotten!
Quadratic formula = `frac(-b pm sqrt(b^2 -4ac))(2a)`
Written abstractly, a quadratic polynomial takes the form `ax^2 +bx+c`. For example, in the quadratic polynomial `x^2 +7x +3`, we have a = 1, b = 7, and c = 3.
This is important to review because it allows us to efficiently find the roots of the characteristic polynomial, which are the eigenvalues, lambda. Just as trace and determinant relate to the characteristic polynomial, they also relate to the quadratic formula, which can be written as:
`lambda = frac(T pm sqrt(T^2-4*D))(2)`
This makes sense because the trace and determinant appear in the coefficients of the characteristic polynomial: `-T` is the coefficient of `lambda` and `D` is the constant term, while the coefficient of `lambda^2` is 1, so the denominator `2a` becomes 2.
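Putting the pieces together, here is a hedged sketch of an eigenvalue routine based on this formula (the function name is our own; `cmath.sqrt` is used so that a negative discriminant automatically produces complex eigenvalues):

```python
# Sketch: eigenvalues of a 2x2 matrix from T and D via
# lambda = (T ± sqrt(T^2 - 4D)) / 2.
import cmath

def eigenvalues_2x2(a, b, c, d):
    T = a + d                       # trace
    D = a * d - b * c               # determinant
    root = cmath.sqrt(T * T - 4 * D)  # complex sqrt handles T^2 - 4D < 0
    return ((T + root) / 2, (T - root) / 2)

# For the matrix ((2,2),(1,3)): T = 5, D = 4, roots 4 and 1.
print(eigenvalues_2x2(2, 2, 1, 3))  # ((4+0j), (1+0j))
```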
Here are examples of how to solve for eigenvalues:
Let's begin with an example where we compute real eigenvalues:
Suppose we have the matrix:
`A = ((5,2),(-1,2))`
`det(A - lambda I)= det ((5-lambda, 2),(-1, 2-lambda))=(5-lambda)(2-lambda)-2*(-1)=lambda^2 -7 lambda +12=0`
The roots are:
`lambda = frac(7 pm sqrt(49-48))(2)`
`lambda = 4, 3`
Now let's take an example of where we compute repeated eigenvalues:
`A = ((7,1),(-4,3))`
`det(A - lambda I)= det ((7-lambda, 1),(-4, 3-lambda))=(7-lambda)(3-lambda)-(-4*1)=lambda^2 -10 lambda +25=0`
The roots are:
`lambda_(1,2) = 5`
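As a numerical sanity check on this repeated-eigenvalue example (NumPy is our own addition, not part of the text):

```python
# Numerical check of the repeated-eigenvalue example above.
import numpy as np

A = np.array([[7.0, 1.0],
              [-4.0, 3.0]])

eigs = np.linalg.eigvals(A)
# A loose tolerance is used: repeated eigenvalues of a defective
# matrix are numerically sensitive.
print(np.allclose(eigs, 5.0, atol=1e-6))  # True: lambda = 5 is a double root
```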
Now we will compute complex eigenvalues:
Before we start we should review what it means to have a complex number. "Complex numbers are numbers of the form x + iy, where x and y are real numbers and i is the 'imaginary number' `sqrt(-1)` " (Blanchard, Devaney, Hall, 291).
Consider the system where A = `((-2, -3), (3, -2))`
`det(A-lambda I) = det ((-2-lambda, -3), (3, -2-lambda)) = (-2-lambda)(-2-lambda)-(-3*3)=lambda^2+4 lambda +13 =0.`
The roots are:
`lambda = frac(-4 pm sqrt(-36))(2)`
We see that `sqrt(-36)` is equal to `6i`, so the eigenvalues become:
`lambda = frac(-4 pm 6i)(2) = -2 pm 3i`
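The complex eigenvalues `-2 pm 3i` can also be confirmed numerically; this NumPy check is our own addition:

```python
# Numerical check of the complex-eigenvalue example above.
import numpy as np

A = np.array([[-2.0, -3.0],
              [3.0, -2.0]])

# Sort by imaginary part so the comparison order is deterministic.
eigs = sorted(np.linalg.eigvals(A), key=lambda z: z.imag)
print(np.allclose(eigs, [-2 - 3j, -2 + 3j]))  # True
```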
Given a matrix `A = ((a,b), (c,d))` and we know that `lambda` is an eigenvalue, we use the same equation from above `A vec(v) = lambda vec (v)` to solve for `vec(v)` of the form `vec(v) = ((x), (y))`. We notice that `A vec(v) = lambda vec(v)` turns into a system of linear equations:
`ax + by = lambda x`
`cx + dy = lambda y`
Because we have already solved for lambda, "we know that there is at least an entire line of eigenvectors (x, y) that satisfy this system of equations. This infinite number of eigenvectors means that the equations are redundant. That is, either the two equations are equivalent, or one of the equations is always satisfied" (Blanchard, Devaney, Hall, 266).
We will give an example to demonstrate what is meant by the statement above:
Suppose the matrix A = `((2, 2),(1,3))`
`det(A-lambda I) = (2-lambda)(3-lambda)-(2*1)=0`
`lambda^2-5 lambda+4 =0 `
`lambda = 1, 4 ` or `lambda_(1) = 4 , lambda_(2) =1`
Let's use `lambda_(2) ` in the equation:
`A((x),(y))= ((2, 2),(1,3)) ((x),(y)) = 1((x),(y))`
Rewritten in terms of components, the equation becomes
`2x + 2y = x`
`1x + 3y = y`
Both equations reduce to `y = -frac(1)(2) x`, so an eigenvector for `lambda_2` is `vec(v) = ((1), (-frac(1)(2)))`.
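We can verify this eigenvector numerically by checking that `A vec(v) = lambda vec(v)` (NumPy is our own addition here):

```python
# Check that (1, -1/2) is an eigenvector of A for lambda_2 = 1,
# i.e. that A v equals 1 * v.
import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
v = np.array([1.0, -0.5])

print(np.allclose(A @ v, 1.0 * v))  # True
```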
Now let's view an example where there are complex eigenvalues and a complex eigenvector:
Let's begin where we left off in the example from before where A = `((-2, -3), (3, -2))`
We found that eigenvalues were `lambda_(1) = -2 + 3i, lambda_(2) = -2 - 3i`
Let's take `lambda_(1)` and plug it into the equation,
`A((x),(y))= ((-2, -3),(3,-2)) ((x),(y)) = (-2+3i)((x),(y))`
As a system of equations we have:
`-2x - 3y = (-2 + 3i)x`
`3x - 2y = (-2 + 3i)y `
Which can be rewritten as:
`(-3i)x - 3y = 0`
`3x + (-3i)y = 0 `
Just as in the example above, the equations are redundant. Both reduce to `y = -(i)x`, so `vec(v) = ((1), (-i))`
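Rather than asserting a particular eigenvector by hand, we can let NumPy pair each eigenvalue with an eigenvector and check `A vec(v) = lambda vec(v)` for both pairs (again, an illustrative addition of our own; NumPy normalizes eigenvectors, so its columns may be scalar multiples of the hand-computed ones):

```python
# Numerically pair eigenvalues with eigenvectors for the complex
# example and check A v = lambda v for each pair.
import numpy as np

A = np.array([[-2.0, -3.0],
              [3.0, -2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
for k in range(2):
    lam = eigenvalues[k]
    v = eigenvectors[:, k]              # k-th column is the k-th eigenvector
    print(np.allclose(A @ v, lam * v))  # True for both pairs
```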
There are two notably special cases for eigenvalues and eigenvectors: repeated eigenvalues and zero as an eigenvalue. Repeated eigenvalues are notable because there is often only one line of eigenvectors associated with a repeated eigenvalue, which impacts how you set up the general solution for such a system. As for a zero eigenvalue, Blanchard, Devaney, and Hall state, "This case is important because it divides the linear systems with strictly positive eigenvalues (sources) and strictly negative eigenvalues (sinks) from those with one positive and one negative eigenvalue (saddles)" (319). If you do not understand what is meant by this, you may wish to review the classification of equilibrium points for linear systems.
Blanchard, Paul, Robert L. Devaney, and Glen R. Hall. Differential Equations. 3rd ed. Belmont, CA: Thomson Brooks/Cole, 2006. Print.