(a) Problem 5 (a), (d), and (g)
The matrix
has rank two since it reduces to the identity matrix, so by reducing the following matrix
we can get the inverse on the right-hand side.
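As a generic illustration of the technique, with a small invertible matrix of my own rather than the one from the problem: reducing the augmented matrix $[A \mid I]$ until the left block becomes the identity,

$$\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & \tfrac{3}{2} & -\tfrac{1}{2} \end{array}\right],$$

leaves $A^{-1}$ in the right block.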
The matrix
has rank three since it reduces to the identity matrix, so by reducing the following matrix
we can get the inverse on the right-hand side.
The matrix
has rank four since it reduces to the identity matrix, so by reducing the following matrix
we can get the inverse on the right-hand side.
and so has rank 3. Therefore $T$ is invertible, and we can find the inverse by
so
From this we can get a formula for $T^{-1}$.
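As a sketch of how this last step works in general (notation mine): if $\beta = \{v_1, \dots, v_n\}$ is the basis used, then $[T^{-1}]_\beta = [T]_\beta^{-1}$, so

$$T^{-1}(v_j) = \sum_{i=1}^{n} \bigl([T]_\beta^{-1}\bigr)_{ij}\, v_i,$$

and reading the columns of the inverted matrix gives $T^{-1}$ on each basis vector; linearity then determines $T^{-1}$ everywhere.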
and so has rank 2, so $T$ is not invertible.
and so has rank 3. Therefore $T$ is invertible, and we can find the inverse by
so
From this we can get a formula for $T^{-1}$.
and so has rank 3. Therefore $T$ is invertible, and we can find the inverse by
so
From this we can get a formula for $T^{-1}$.
and so has rank 3. Therefore $T$ is invertible, and we can find the inverse by
so
From this we can get a formula for $T^{-1}$.
From the coefficient matrix
we can reduce the following matrix
to
getting the inverse. We can then use the inverse to get the unique solution of the equation
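As a generic illustration of this last step, again with made-up numbers rather than the problem's: for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $b = (5, 6)^t$,

$$x = A^{-1}b = \begin{pmatrix} -2 & 1 \\ \tfrac{3}{2} & -\tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} -4 \\ \tfrac{9}{2} \end{pmatrix}$$

is the unique solution of $Ax = b$.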
From the coefficient matrix
we can reduce the following matrix
to
getting the inverse. We can then use the inverse to get the unique solution of the equation
for some basis, which, for simplicity of computation, we choose to be the standard basis. The matrix representation of $T$ in the standard basis is
which we will augment with $v$ and row reduce to determine whether $v$ is in the image of $T$.
Augmenting $[T]$ with $[v]$, we have
which row reduces to
and since the system is consistent, $v$ is in the image of $T$.
Augmenting $[T]$ with $[v]$, we have
which row reduces to
and since the system is consistent, $v$ is in the image of $T$.
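Stated generally, the criterion used in both parts is that $v$ lies in the image of $T$ exactly when

$$\operatorname{rank}\,[T] = \operatorname{rank}\,\bigl[\,[T] \mid [v]\,\bigr],$$

i.e. when row reduction produces no row of the form $(0 \;\cdots\; 0 \mid c)$ with $c \neq 0$.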
False. The determinant is not additive:
but
True. This is Theorem 4.1 [1, p. 200].
False. The determinant of a two-by-two matrix is nonzero if and only if the matrix is invertible (Theorem 4.2 [1, p. 201]).
False. It is the absolute value of the specified determinant.
True.
For u = and v = , the area of the parallelogram determined by u and v is
For u = and v = , the area of the parallelogram determined by u and v is
For u = and v = , the area of the parallelogram determined by u and v is
For u = and v = , the area of the parallelogram determined by u and v is
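For reference, the formula applied in each of these parts, with a small example of my own: the area of the parallelogram determined by $u$ and $v$ is the absolute value of the determinant of the matrix with rows $u$ and $v$, e.g.

$$u = (1, 0),\quad v = (1, 2) \implies \operatorname{Area} = \left|\det\begin{pmatrix} 1 & 0 \\ 1 & 2 \end{pmatrix}\right| = 2.$$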
False. The determinant is not additive; since the $n \times n$ definition is equivalent to the two-by-two one in the two-dimensional case, the same counterexample applies:
but
True. Theorem 4.4 [1, p. 215].
True. Corollary to Theorem 4.4 [1, p. 215].
True. Theorem 4.5 [1, p. 216].
False. A is invertible and therefore has nonzero determinant.
True.
from which we can remove the common multiple of each row one at a time; $k$ will be their product, $2(3)(7) = 42$.
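This relies on the determinant being linear in each individual row, so a common factor may be pulled out of one row at a time. A small illustration with a matrix of my own:

$$\det\begin{pmatrix} 2 & 4 \\ 3 & 9 \end{pmatrix} = 2\det\begin{pmatrix} 1 & 2 \\ 3 & 9 \end{pmatrix} = 2 \cdot 3\, \det\begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix} = 6(3 - 2) = 6.$$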
which does not change the determinant. We can then add one-half of row one to each of rows two and three, again without changing the determinant. This results in the following matrix.
Now we take the $-2$ out of the first row; combined with swapping the second and third rows, this gives $k = (-2)(-1) = 2$.
With this, the determinant is simply $14(11) = 154$.
and now we can expand along the second row, since the determinant is unchanged.
which means that
which means that
if $n$ is odd, and
if $n$ is even.
False. Multiplying a row of the identity matrix by a scalar multiplies the determinant of the resulting matrix by that same scalar, since the determinant is linear in each individual row.
True. This is Theorem 4.7 [1, p. 223].
False. A matrix is invertible if and only if its determinant is nonzero.
True. If an $n \times n$ matrix has full rank, then it is invertible and therefore has nonzero determinant.
False. The $n \times n$ identity matrix is equal to its own transpose, so the two have equal determinants; they are not additive inverses.
True. The determinant of a matrix can be evaluated using cofactor expansion along any row and the determinant of a matrix is equal to the determinant of its transpose.
False. The coefficient matrix must have nonzero determinant, since Cramer's rule divides by it.
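For reference, Cramer's rule states that if $Ax = b$ with $\det(A) \neq 0$, then the unique solution has coordinates

$$x_k = \frac{\det(M_k)}{\det(A)},$$

where $M_k$ is the matrix $A$ with its $k$th column replaced by $b$.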
False. Let
and $b$ as displayed. This system then has the unique solution $x_1 = 0$ and $x_2 = 1$, but with this alternate definition of $M_k$,
which would imply that $x_2 = 0$, which is false.
so
then, since $\det(A) = -25$, we have
Then the determinant is
where $B_{rc1}$ is $B$ without its rightmost column. Then we get
where $B_{rc2}$ is $B$ without its two rightmost columns. This can be done a total of $n - k$ times until
is reached; but the matrix on the right-hand side above is just $A$, so $\det(M) = \det(A)$.
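Since the displayed matrices are omitted above, here is a reconstruction of the block form the argument suggests:

$$\det(M) = \det\begin{pmatrix} A & B \\ O & I \end{pmatrix} = \det(A),$$

obtained by repeatedly expanding along the bottom row, whose only nonzero entry is a $1$ in the last column; each expansion strips one bordering row and column until only $A$ remains.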
In the previous homework, we proved that this $T$ is an isomorphism (exercise 22, section 2.4). Because of this, $M$ is an invertible matrix, and thus $\det(M) \neq 0$.
In the case of $n = 2$, $\prod_{0 \le i < j \le 1}(c_j - c_i) = c_1 - c_0$ is indeed the determinant of the two-by-two Vandermonde matrix
Now assume that the Vandermonde determinant formula holds for all $k$ less than $n$, and begin with the $n \times n$ Vandermonde matrix.
Subtracting the first row from the others
for each column except the leftmost, subtracting $c_0$ times the column to its immediate left
and then factoring the $c_j - c_0$ term out of each row $j$, we are left with
The matrix resulting from removing the first row and column is an $(n-1) \times (n-1)$ Vandermonde matrix, so we get
by our induction hypothesis.
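As a concrete check of the formula just proved, in the $n = 3$ case:

$$\det\begin{pmatrix} 1 & c_0 & c_0^2 \\ 1 & c_1 & c_1^2 \\ 1 & c_2 & c_2^2 \end{pmatrix} = (c_1 - c_0)(c_2 - c_0)(c_2 - c_1) = \prod_{0 \le i < j \le 2}(c_j - c_i).$$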
False. The identity matrix has only one eigenvalue, 1.
True. Since the eigenspace of an eigenvalue is a vector space, it must be closed under scalar multiplication, which means the eigenspace contains infinitely many vectors when the field in question is $\mathbb{R}$.
True. For example, the matrix of a rotation of $\mathbb{R}^2$ by $90°$ has no (real) eigenvectors.
True. Eigenvectors are nonzero by definition.
False. A scalar multiple of an eigenvector is another eigenvector, so for a counterexample one need only choose two eigenvectors that are multiples of each other.
False. In terms of the characteristic polynomial, this would imply that the sum of two roots of a polynomial is also a root, which is not generally the case.
False. The identity operator has an eigenvalue of one on any vector space, including infinite-dimensional ones.
True. Theorem 5.1 [1, p. 246].
True. Let $A$ and $B$ be similar matrices. Then there is an invertible $Q$ such that $A = Q^{-1}BQ$. From this we get
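$$tI - A = tI - Q^{-1}BQ = Q^{-1}(tI)Q - Q^{-1}BQ = Q^{-1}(tI - B)Q,$$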
which shows that the matrices $tI - A$ and $tI - B$ are similar. Since similar matrices have the same determinant, and the determinants of these two matrices are the characteristic polynomials of $A$ and $B$, it follows that $A$ and $B$ have the same eigenvalues.
False. The matrices
are similar to each other by
and also have the same single eigenvalue of $1$, but the first has eigenvector $(0,1)^t$ and the second $(-1,2)^t$.
False. If $v$ is an eigenvector of some matrix, then so is $-v$, since the eigenspace of an eigenvalue is a vector space; but the sum of these two vectors is zero, which by definition is not an eigenvector.
(b) Problem 2 (b), (e), and (f)
Because $[T]_\beta$ is a diagonal matrix, $\beta$ is a basis consisting of eigenvectors of $T$.
Because $[T]_\beta$ is not diagonal, $\beta$ is not a basis consisting of eigenvectors of $T$.
Because $[T]_\beta$ is a diagonal matrix, $\beta$ is a basis consisting of eigenvectors of $T$.
This has roots $4$ and $-1$, which are the eigenvalues. For these eigenvalues we have
and therefore $E_4 = \operatorname{Span}\{(2,3)^t\}$ and $E_{-1} = \operatorname{Span}\{(-1,1)^t\}$. Since there are two distinct eigenvalues, $(2,3)^t$ and $(-1,1)^t$ are linearly independent and form a basis. Thus
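Concretely, with the eigenvectors above as columns (and $A$ denoting the matrix of the problem, omitted here), the change of basis is

$$Q = \begin{pmatrix} 2 & -1 \\ 3 & 1 \end{pmatrix}, \qquad Q^{-1}AQ = \begin{pmatrix} 4 & 0 \\ 0 & -1 \end{pmatrix}.$$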
This has roots $1$, $2$, and $3$, which are the eigenvalues. For these eigenvalues we have
and therefore $E_1$, $E_2$, and $E_3$ are each spanned by the single eigenvector shown. Since there are three distinct eigenvalues, these eigenvectors are linearly independent and form a basis. Thus
This has roots $-1$ and $1$, which are the eigenvalues. For these eigenvalues we have
and therefore $E_{-1}$ and $E_1$ are each spanned by the single eigenvector shown. Since there are two distinct eigenvalues, these eigenvectors are linearly independent and form a basis. Thus
This has roots $1$ and $0$, the first with multiplicity $2$. For these eigenvalues we have
and therefore $E_1 = \operatorname{Span}\{(0,1,0)^t, (1,0,1)^t\}$ and $E_0 = \operatorname{Span}\{(1,4,2)^t\}$. These three eigenvectors are linearly independent and form a basis, resulting in the following.
(d) Problem 4 (a), (f), (i), and (j)
Taking to be
the representation of T in the standard basis:
then the characteristic polynomial is
which has roots 3 and 4, and thus these are the eigenvalues. Since
and
then $E_3$ and $E_4$ are the spans of the sets shown. So
Taking to be
the representation of T in the standard basis is
which has roots 3 and 1, and thus these are the eigenvalues. Since
and
then $E_3 = \operatorname{Span}\{x\}$ and $E_1 = \operatorname{Span}\{x - 2,\, x^2 - 4,\, x^3 - 8\}$. So
Taking to be
the representation of T in the standard basis is
which has roots $-1$ and $1$, and thus these are the eigenvalues. Since
and
then $E_{-1}$ and $E_1$ are the spans of the sets shown. So
Taking to be
the representation of T in the standard basis is
which has roots $5$, $-1$, and $1$, and thus these are the eigenvalues. Since
and
then $E_5$, $E_{-1}$, and $E_1$ are the spans of the sets shown. So
Let $A$ be similar to a scalar matrix $\lambda I$. Then there exists an invertible $Q$ such that $A = Q^{-1}(\lambda I)Q$, which implies $A = Q^{-1}(\lambda I)Q = \lambda Q^{-1}IQ = \lambda Q^{-1}Q = \lambda I$.
Let $A$ be a diagonalizable matrix with only one eigenvalue. Then $A$ can be diagonalized in a basis of its eigenvectors. The diagonal matrix $D$ to which it is similar has the eigenvalues of $A$ on its diagonal; but there is only one eigenvalue, so $D$ is a scalar matrix. Since $A$ is similar to $D$, by the previous proof $A$ is a scalar matrix as well.
The matrix
is not diagonalizable by the previous subproblem: it has only one eigenvalue (its diagonal entries are equal) but is not a scalar matrix.
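A standard instance (an assumption on my part, since the matrix itself is omitted above) is

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$

whose only eigenvalue is $1$ but whose eigenspace $E_1 = \operatorname{Span}\{(1,0)^t\}$ is one-dimensional, so there is no basis of eigenvectors.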
It turns out that I proved this in problem 1(i) of this section... nice. Anyway, here it is again. Let $A$ and $B$ be similar matrices. Then there is an invertible $Q$ such that $A = Q^{-1}BQ$. From this we get
which shows that the matrices $tI - A$ and $tI - B$ are similar, so their determinants are the same. Since these determinants are the characteristic polynomials of $A$ and $B$ respectively, the characteristic polynomials are also the same.
For any two bases $\gamma$ and $\beta$ of a vector space $V$, a linear operator $T$ has $[T]_\beta$ similar to $[T]_\gamma$ by a change of coordinates. Given this and the previous proof, the characteristic polynomial of $T$ is independent of the choice of basis.
Because $T$ is the transformation on $M_{n \times n}$ that transposes matrices, $T^2$ is the identity, so the only viable candidates for an eigenvalue $\lambda$ in $A^t = \lambda A$ are $1$ and $-1$. Since nonzero symmetric and anti-symmetric matrices exist in $M_{n \times n}$, we can find $A \neq 0$ with $A^t = A$ or $A^t = -A$. Thus $\pm 1$ are precisely the eigenvalues of $T$.
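Spelled out, for an eigenvector $A \neq 0$ of $T$:

$$A^t = \lambda A \implies A = (A^t)^t = (\lambda A)^t = \lambda A^t = \lambda^2 A \implies \lambda^2 = 1.$$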
The eigenvectors corresponding to $1$ are the nonzero symmetric matrices, and the eigenvectors corresponding to $-1$ are the nonzero anti-symmetric matrices.
Since all symmetric and anti-symmetric matrices in $M_{2 \times 2}$ have the form
respectively, a basis for $E_1$ will be
and a basis for $E_{-1}$ will be
Thus we can use these four matrices as a basis to diagonalize $T$. We'll call this basis $\beta$, ordered as listed here. Confirming that this basis diagonalizes $T$, we have the following.
Generalizing what we did in the previous problem, a basis for E1 would be
where $A_{ji}$ is the matrix of all zeros except for ones in positions $(i,j)$ and $(j,i)$. And a basis for $E_{-1}$ would be
where $B_{ji}$ is the matrix with a $1$ in position $(i,j)$ and $-1$ in position $(j,i)$. Then $\beta$ is simply the union of these two sets.
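As a sanity check on the size of $\beta$ (a count added here): there are $\tfrac{n(n+1)}{2}$ matrices $A_{ji}$ (those with $i \le j$) and $\tfrac{n(n-1)}{2}$ matrices $B_{ji}$ (those with $i < j$), and

$$\frac{n(n+1)}{2} + \frac{n(n-1)}{2} = n^2 = \dim M_{n \times n},$$

so $\beta$ has the correct number of elements to be a basis.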