(a) Problem 5 (a), (d), and (g)
The matrix

has rank two since it reduces to the identity matrix, so by reducing the following matrix

we can get the inverse on the right hand side.

The matrix

has rank three since it reduces to the identity matrix, so by reducing the following matrix

we can get the inverse on the right hand side.

The matrix

has rank four since it reduces to the identity matrix, so by reducing the following matrix

we can get the inverse on the right hand side.
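Since the row-reduction displays did not survive in this copy, here is a minimal sketch of the $[A \mid I]$ reduction described above, done numerically with NumPy. The test matrix is the 3 × 3 matrix that appears in one of the computations below, and the function name is my own:

```python
import numpy as np

def invert_by_row_reduction(A):
    """Reduce the augmented matrix [A | I] to [I | A^-1] by Gauss-Jordan elimination."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                      # build [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular (rank < n); no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]              # move the pivot row into place
        M[col] /= M[col, col]                          # scale the pivot to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # clear the rest of the column
    return M[:, n:]                                    # the right-hand block is A^-1

# Test case: a matrix from a later part; compare with (1/6)[[1,-2,3],[3,0,-3],[-1,2,3]].
print(invert_by_row_reduction([[1, 2, 1], [-1, 1, 2], [1, 0, 1]]))
```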

$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [-1]_E & [-x+2]_E & [-x^2+4x+2]_E \end{pmatrix} \\
&= \begin{pmatrix} -1 & 2 & 2 \\ 0 & -1 & 4 \\ 0 & 0 & -1 \end{pmatrix}
\end{aligned}
$$

and so has rank 3. Therefore T is invertible. So we can find the inverse by

so
$$
([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} -1 & -2 & -10 \\ 0 & -1 & -4 \\ 0 & 0 & -1 \end{pmatrix}
$$
From this we can read off a formula for $T^{-1}$: $T^{-1}(a + bx + cx^2) = (-a - 2b - 10c) + (-b - 4c)x - cx^2$.

$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [0]_E & [x+1]_E & [2x(x+1)]_E \end{pmatrix} \\
&= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix}
\end{aligned}
$$

and so has rank 2. So T is not invertible.
$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [(1,-1,1)]_E & [(2,1,0)]_E & [(1,2,1)]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 2 & 1 \\ -1 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix}
\end{aligned}
$$

and so has rank 3. Therefore T is invertible. So we can find the inverse by

so
$$
([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} \tfrac{1}{6} & -\tfrac{1}{3} & \tfrac{1}{2} \\ \tfrac{1}{2} & 0 & -\tfrac{1}{2} \\ -\tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{2} \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 1 & -2 & 3 \\ 3 & 0 & -3 \\ -1 & 2 & 3 \end{pmatrix}
$$
From this we can read off a formula for $T^{-1}$: $T^{-1}(a, b, c) = \tfrac{1}{6}\,(a - 2b + 3c,\ 3a - 3c,\ -a + 2b + 3c)$.

$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [x^2+x+1]_E & [-x+1]_E & [x+1]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 0 & 0 \end{pmatrix}
\end{aligned}
$$

and so has rank 3. Therefore T is invertible. So we can find the inverse by

so
$$
([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} 0 & 0 & 1 \\ \tfrac{1}{2} & -\tfrac{1}{2} & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 0 & 0 & 2 \\ 1 & -1 & 0 \\ 1 & 1 & -2 \end{pmatrix}
$$
From this we can read off a formula for $T^{-1}$: $T^{-1}(a + bx + cx^2) = c + \tfrac{a-b}{2}\,x + \left(\tfrac{a+b}{2} - c\right)x^2$.

$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [(1,1,1)]_E & [(-1,0,1)]_E & [(1,0,1)]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}
\end{aligned}
$$

and so has rank 3. Therefore T is invertible. So we can find the inverse by

so
$$
([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} 0 & 1 & 0 \\ -\tfrac{1}{2} & 0 & \tfrac{1}{2} \\ \tfrac{1}{2} & -1 & \tfrac{1}{2} \end{pmatrix}
$$
From this we can read off a formula for $T^{-1}$: $T^{-1}(a, b, c) = \left(b,\ \tfrac{c-a}{2},\ \tfrac{a - 2b + c}{2}\right)$.
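As a sanity check, each claimed inverse above can be verified numerically; a minimal sketch for the last pair (the others check the same way):

```python
import numpy as np

# [T]_E and the claimed [T^-1]_E from the last computation above
A = np.array([[1.0, -1.0, 1.0],
              [1.0,  0.0, 0.0],
              [1.0,  1.0, 1.0]])
A_inv = np.array([[ 0.0,  1.0, 0.0],
                  [-0.5,  0.0, 0.5],
                  [ 0.5, -1.0, 0.5]])

# A matrix times its inverse must be the identity, and NumPy's inverse must agree.
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv, np.linalg.inv(A))
print("inverse verified")
```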

From the coefficient matrix

we can reduce the following matrix

to

getting the inverse. We can then use the inverse to get the unique solution of the equation

From the coefficient matrix

we can reduce the following matrix

to

getting the inverse. We can then use the inverse to get the unique solution of the equation
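Restating the method in symbols, since the displays are missing here: once $[A \mid I]$ has been reduced to $[I \mid A^{-1}]$, the unique solution of the system is a single matrix-vector product,
$$Ax = b \iff x = A^{-1}b.$$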

For a linear transformation T ∈ L(ℝ³, ℝ³), determining if a specified v ∈ ℝ³ is in R(T) is to determine if there exists an x ∈ ℝ³ such that T(x) = v. In other words, it is to determine if there is an x such that
$$[T(x)]_B = [T]_B[x]_B = [v]_B$$
for some basis B, which for simplicity of computation we choose to be the standard basis E. The matrix representation of T in the standard basis is
$$
[T]_E = \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & -2 \\ 1 & 0 & 2 \end{pmatrix}
$$
which we will augment with v and row reduce to determine if v is in the image of T.
Augmenting $[T]_E$ with $[v]_E$ we have

which row reduces to

and so, since the system is consistent, v is in the image of T.
Augmenting $[T]_E$ with $[v]_E$ we have

which row reduces to

and so, since the system is consistent, v is in the image of T.
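A minimal sketch of this consistency check, using the $[T]_E$ computed above; the vector `v` is a stand-in, since the original values did not survive in this copy:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, -2],
              [1, 0, 2]])    # [T]_E from above
v = np.array([2, 1, 3])      # stand-in vector; this particular choice lies in R(T)

# v is in R(T) exactly when the system [A | v] is consistent, i.e. when
# augmenting with v does not increase the rank.
consistent = np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)
print("v in R(T):", consistent)   # True for this v
```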
False. Not closed under addition:

but

True. This is Theorem 4.1 [1, p. 200].
False. The determinant of a two-by-two matrix is nonzero if and only if the matrix is invertible, Theorem 4.2 [1, p. 201].
False. It is the absolute value of the specified determinant.
True.






For u = and v = , the area of the parallelogram determined by u and v is

For u = and v = , the area of the parallelogram determined by u and v is

For u = and v = , the area of the parallelogram determined by u and v is

For u = and v = , the area of the parallelogram determined by u and v is
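Since the specific vectors did not survive in this copy, here is the general formula that each of these computations instantiates, for $u = (u_1, u_2)^t$ and $v = (v_1, v_2)^t$:
$$\text{Area} = \left|\det\begin{pmatrix} u_1 & u_2 \\ v_1 & v_2 \end{pmatrix}\right| = |u_1 v_2 - u_2 v_1|$$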

False. Not closed under addition, since the n × n definition agrees with the two-by-two definition in the two-dimensional case:

but

True. Theorem 4.4 [1, p. 215]
True. Corollary to Theorem 4.4 [1, p. 215]
True. Theorem 4.5 [1, p. 216]
False. A is invertible and therefore has nonzero determinant.
True.
times the third row to the second and the determinant will remain unchanged. This results
in

from which we can remove the multiple from each row one at a time; k will be their product, 2(3)(7) = 42

which does not change the determinant. We can then add one-half of row one to rows two and three, which also leaves the determinant unchanged. This results in the following matrix.

Now we simply take the −2 out of the first row; swapping the second and third rows then makes for a k of 2.



With this, the determinant is simply 14(11) = 154.


and now we can easily expand along the second row, since the determinant is unchanged.


which means that


which means that

det(−A) = (−1)^n det(A), since we would extract a negative one from each of the n rows, one at a time, resulting in a product of n factors of −1. Given this, det(−A) = det(A) when n is even, since then (−1)^n = 1.
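As a quick instance of the rule, take n = 2 and A = I₂:
$$\det(-I_2) = \det\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = 1 = (-1)^2\det(I_2)$$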
swaps for odd n and
for even, then

if n is odd, and

if n is even.
False. Multiplying a row of the identity matrix by a scalar multiplies the determinant by that scalar, since the determinant is linear in each row.
True. This is Theorem 4.7 [1, p. 223]
False. A matrix is invertible if and only if its determinant is nonzero.
True. If an n × n matrix has full rank, then it is invertible and therefore has nonzero determinant.
False. The n × n identity matrix is equal to its transpose, and therefore the two have equal determinants; the determinants are not additive inverses of each other.
True. The determinant of a matrix can be evaluated using cofactor expansion along any row, and the determinant of a matrix is equal to the determinant of its transpose.
False. The coefficient matrix must have nonzero determinant since we will be dividing by the determinant.
False. Let

and b = ( )^t. This system then has a unique solution of x1 = 0 and x2 = 1, but with this alternate definition of Mk,

which would imply that x2 = 0, which is false.

so


then since det(A) = −25 we have




Let M ∈ M_{n×n} be such that, for some A ∈ M_{k×k} and B ∈ M_{(n−k)×(n−k)},

Then the determinant is

where Brc1 is B without the rightmost column. Then we get

where Brc2 is B without the two rightmost columns. This can be done a total of n − k times until

is reached, but the matrix on the right hand side above is just A, so det(M) = det(A).
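The intermediate displays are missing here, but the claim is easy to spot-check numerically. This sketch assumes the block form M = (A B; O I), with the identity in the lower-right block, which is what makes det(M) = det(A) rather than det(A)det(B):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5

# Assumed block layout: A top-left, B top-right, zeros bottom-left,
# identity bottom-right. (The original block display did not survive.)
A = rng.integers(-5, 5, (k, k)).astype(float)
B = rng.integers(-5, 5, (k, n - k)).astype(float)
M = np.block([[A, B],
              [np.zeros((n - k, k)), np.eye(n - k)]])

assert np.isclose(np.linalg.det(M), np.linalg.det(A))
print(np.linalg.det(M), "==", np.linalg.det(A))
```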
$$
\begin{aligned}
[T]_\beta^\gamma &= \begin{pmatrix} [T(1)]_\gamma & [T(x)]_\gamma & [T(x^2)]_\gamma & \cdots & [T(x^n)]_\gamma \end{pmatrix} \\
&= \begin{pmatrix} [(1,\dots,1)]_\gamma & [(c_0,\dots,c_n)]_\gamma & [(c_0^2,\dots,c_n^2)]_\gamma & \cdots & [(c_0^n,\dots,c_n^n)]_\gamma \end{pmatrix} \\
&= \begin{pmatrix} 1 & c_0 & c_0^2 & \cdots & c_0^n \\ 1 & c_1 & c_1^2 & \cdots & c_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & c_n & c_n^2 & \cdots & c_n^n \end{pmatrix}
\end{aligned}
$$
In the previous homework, we proved that this T is an isomorphism (exercise 22, section 2.4). Because of this, M is an invertible matrix, and thus det(M) ≠ 0.
In the case of n = 2, $\prod_{0 \le i < j \le 1}(c_j - c_i) = c_1 - c_0$ is indeed the determinant of the two-by-two Vandermonde matrix
$$\begin{pmatrix} 1 & c_0 \\ 1 & c_1 \end{pmatrix}$$
So assume that the Vandermonde determinant formula holds for all k less than n. By the

then subtracting the first row from the others

for each column except the leftmost, subtracting c0 times the column to its immediate left

and then factoring out the c_j − c_0 term of each row j, we are left with

The matrix that results from removing the first row and column is a Vandermonde matrix of size n − 1, so we get

by our induction hypothesis.
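Putting the base case and the inductive step together gives the standard closed form (the intermediate displays are missing above, but this is the identity the induction establishes):
$$\det\begin{pmatrix} 1 & c_0 & \cdots & c_0^n \\ 1 & c_1 & \cdots & c_1^n \\ \vdots & \vdots & \ddots & \vdots \\ 1 & c_n & \cdots & c_n^n \end{pmatrix} = \prod_{0 \le i < j \le n}(c_j - c_i)$$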
False. The identity matrix has only one eigenvalue, 1.
True. Since the eigenspace of an eigenvalue is a vector space, it must be closed under scalar multiplication, which means the eigenspace has infinite cardinality when the field in question is ℝ.
True. For example, the 90° rotation matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has no real eigenvalues, and hence no eigenvectors.
False. Zero can be an eigenvalue (any singular matrix has 0 as an eigenvalue); it is eigenvectors that are nonzero by definition.
False. A multiple of an eigenvector is another eigenvector, so for a counterexample one need only choose two eigenvectors that are multiples of each other.
False. In terms of the characteristic polynomial, this would imply that the sum of two roots of a polynomial is also a root, which is not the case in general.
False. The identity operator has an eigenvalue of one on any vector space.
True. Theorem 5.1 [1, p. 246].
True. Let A and B be similar matrices. Then there is a Q such that A = Q^{-1}BQ. From this we get
$$tI - A = tQ^{-1}IQ - Q^{-1}BQ = Q^{-1}(tI - B)Q$$
which informs us that the matrices (tI − A) and (tI − B) are similar. Since similar matrices have the same determinant, and these two determinants give the characteristic polynomials of A and B, A and B have the same eigenvalues.
False. The matrices

are similar to each other by

and also have the same single eigenvalue of 1, but the first has an eigenvector of (0,1)^t and the second (−1,2)^t.
False. If v is an eigenvector for some matrix, then so is −v, since the eigenspace of an eigenvalue is a vector space; but the sum of these two vectors is zero, which by definition is not an eigenvector.
(b) Problem 2 (b), (e), and (f)
Because $[T]_\beta$ is a diagonal matrix, β is a basis consisting of eigenvectors of T:
$$
[T]_\beta = \begin{pmatrix} [T(3+4x)]_\beta & [T(2+3x)]_\beta \end{pmatrix} = \begin{pmatrix} [-6-8x]_\beta & [-6-9x]_\beta \end{pmatrix} = \begin{pmatrix} -2 & 0 \\ 0 & -3 \end{pmatrix}
$$
Because $[T]_\beta$ is not diagonal, β is not a basis consisting of eigenvectors of T:
$$
\begin{aligned}
[T]_\beta &= \begin{pmatrix} [T(x^3-x+1)]_\beta & [T(x^2+1)]_\beta & [T(1)]_\beta & [T(x^2+x)]_\beta \end{pmatrix} \\
&= \begin{pmatrix} [-x^3+x-1]_\beta & [x^3-x^2-x]_\beta & [x^2]_\beta & [-x^2-x]_\beta \end{pmatrix} \\
&= \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\end{aligned}
$$
Because $[T]_\beta$ is a diagonal matrix, β is a basis consisting of eigenvectors of T:
$$
\begin{aligned}
[T]_\beta &= \begin{pmatrix} \left[T\begin{pmatrix}1&0\\1&0\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}-1&2\\0&0\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}1&0\\2&0\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}-1&0\\0&2\end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix}-3&0\\-3&0\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}-1&2\\0&0\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}1&0\\2&0\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}-1&0\\0&2\end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} -3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\end{aligned}
$$

This has roots of 4 and −1, which are the eigenvalues. For these eigenvalues we have

and therefore E4 = Span{(2,3)^t} and E−1 = Span{(−1,1)^t}. Since there are two distinct eigenvalues, (2,3)^t and (−1,1)^t are linearly independent and will form a basis. Thus


This has roots of 1, 2, and 3, which are the eigenvalues. For these eigenvalues we have


and therefore E1 = Span{ }, E2 = Span{ }, and E3 = Span{ }. Since there are three distinct eigenvalues, the corresponding eigenvectors are linearly independent and will form a basis. Thus


This has roots of −1 and 1, which are the eigenvalues. For these eigenvalues we have

and therefore E−1 = Span{ } and E1 = Span{ }. Since there are two distinct eigenvalues, these eigenvectors are linearly independent and will form a basis. Thus


This has roots of 1 and 0, but the first has multiplicity 2. For these eigenvalues we have

and therefore E1 = Span{(0,1,0)^t, (1,0,1)^t} and E0 = Span{(1,4,2)^t}. These eigenvectors are all linearly independent and will form a basis, resulting in the following.

(d) Problem 4 (a), (f), (i), and (j)
Taking
to be

the representation of T in the standard basis is:
$$
[T]_E = \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E \end{pmatrix} = \begin{pmatrix} [(-2,-10)]_E & [(3,9)]_E \end{pmatrix} = \begin{pmatrix} -2 & 3 \\ -10 & 9 \end{pmatrix}
$$
then the characteristic polynomial is
$$
\det(tI - [T]_E) = \det\begin{pmatrix} t+2 & -3 \\ 10 & t-9 \end{pmatrix} = t^2 - 7t + 12 = (t-4)(t-3)
$$
which has roots 3 and 4, and thus these are the eigenvalues. Since

and

then E3 = Span{ } and E4 = Span{ }. So
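A quick numerical cross-check of this part (a verification sketch, not part of the original solution):

```python
import numpy as np

T = np.array([[-2.0, 3.0],
              [-10.0, 9.0]])   # [T]_E from above

vals = np.linalg.eigvals(T)
print(np.sort(vals.real))      # [3. 4.], matching the roots 3 and 4 found above
```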

Taking
to be

the representation of T in the standard basis is
$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} [x+1]_E & [3x]_E & [x^2+4x]_E & [x^3+8x]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 3 & 4 & 8 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\end{aligned}
$$
$$
\det(tI - [T]_E) = \det\begin{pmatrix} t-1 & 0 & 0 & 0 \\ -1 & t-3 & -4 & -8 \\ 0 & 0 & t-1 & 0 \\ 0 & 0 & 0 & t-1 \end{pmatrix} = t^4 - 6t^3 + 12t^2 - 10t + 3 = (t-3)(t-1)^3
$$
which has roots 3 and 1, and thus these are the eigenvalues. Since

and

then E3 = Span{x} and E1 = Span{x − 2, x^2 − 4, x^3 − 8}. So

Taking
to be

the representation of T in the standard basis is
$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix}0&0\\1&0\end{pmatrix}\right]_E & \left[\begin{pmatrix}0&0\\0&1\end{pmatrix}\right]_E & \left[\begin{pmatrix}1&0\\0&0\end{pmatrix}\right]_E & \left[\begin{pmatrix}0&1\\0&0\end{pmatrix}\right]_E \end{pmatrix} \\
&= \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}
\end{aligned}
$$
$$
\det(tI - [T]_E) = \det\begin{pmatrix} t & 0 & -1 & 0 \\ 0 & t & 0 & -1 \\ -1 & 0 & t & 0 \\ 0 & -1 & 0 & t \end{pmatrix} = t^4 - 2t^2 + 1 = (t-1)^2(t+1)^2
$$
which has roots − 1 and 1, and thus these are the eigenvalues. Since

and

then E−1 = Span{ } and E1 = Span{ }. So

Taking
to be

the representation of T in the standard basis is
$$
\begin{aligned}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix}3&0\\0&2\end{pmatrix}\right]_E & \left[\begin{pmatrix}0&0\\1&0\end{pmatrix}\right]_E & \left[\begin{pmatrix}0&1\\0&0\end{pmatrix}\right]_E & \left[\begin{pmatrix}2&0\\0&3\end{pmatrix}\right]_E \end{pmatrix} \\
&= \begin{pmatrix} 3 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & 0 & 0 & 3 \end{pmatrix}
\end{aligned}
$$
$$
\det(tI - [T]_E) = \det\begin{pmatrix} t-3 & 0 & 0 & -2 \\ 0 & t & -1 & 0 \\ 0 & -1 & t & 0 \\ -2 & 0 & 0 & t-3 \end{pmatrix} = t^4 - 6t^3 + 4t^2 + 6t - 5 = (t-5)(t-1)^2(t+1)
$$
which has roots 5, − 1, and 1, and thus these are the eigenvalues. Since


and

then E5 = Span{ }, E−1 = Span{ }, and E1 = Span{ }. So
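One way to double-check the factorization of this characteristic polynomial (an added observation): the first and fourth coordinates decouple from the second and third, so the determinant splits into a product of two 2 × 2 blocks:
$$\det(tI - [T]_E) = \det\begin{pmatrix} t-3 & -2 \\ -2 & t-3 \end{pmatrix}\det\begin{pmatrix} t & -1 \\ -1 & t \end{pmatrix} = \big((t-3)^2 - 4\big)\big(t^2 - 1\big) = (t-5)(t-1)^2(t+1)$$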

Let A be similar to a scalar matrix λI. Then there exists a Q such that A = Q^{-1}(λI)Q, which implies that A = Q^{-1}(λI)Q = λQ^{-1}IQ = λQ^{-1}Q = λI.
Let A be a diagonalizable matrix with only one eigenvalue. Then A can be diagonalized in a basis of its eigenvectors. The diagonal matrix D to which it is similar will have the eigenvalues of A on its diagonal, but there is only one eigenvalue, so D is a scalar matrix. Thus, since A is similar to a scalar matrix, by the previous proof A is a scalar matrix also.
The matrix

is not diagonalizable, by the previous subproblem, since it has only one eigenvalue (its diagonal entries are equal) but is not a scalar matrix.
Turns out that I proved this in problem 1(i) of this section... nice. Anyways, here it is again. Let A and B be similar matrices. Then there is a Q such that A = Q^{-1}BQ. From this we get
$$tI - A = tQ^{-1}IQ - Q^{-1}BQ = Q^{-1}(tI - B)Q$$
which informs us that the matrices (tI − A) and (tI − B) are similar, so their determinants are the same. Since the determinants of these two similar matrices define the characteristic polynomials of A and B respectively, their characteristic polynomials are also the same.
For any two bases γ, β of a vector space V, a linear operator T will have [T]_β similar to [T]_γ by way of a simple change of coordinates. Given this and the previous proof, the characteristic polynomial of T is independent of the choice of basis.

Because T is the transformation on M_{n×n} which transposes matrices, applying T twice gives back the original matrix, so any λ with A^t = λA must satisfy λ^2 = 1; the only candidates for eigenvalues are therefore 1 and −1. Since we know that nonzero symmetric and anti-symmetric matrices exist in M_{n×n}, we can find an A such that A^t = A or A^t = −A. Thus ±1 are both attained and are the only eigenvalues of T.
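Spelling out the λ² = 1 step for completeness: if A^t = λA for some A ≠ 0, then
$$A = (A^t)^t = (\lambda A)^t = \lambda A^t = \lambda^2 A,$$
so λ² = 1 and λ = ±1.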
The eigenvectors corresponding to 1 are the nonzero symmetric matrices, and the eigenvectors corresponding to −1 are the nonzero anti-symmetric matrices.
Since all symmetric and anti-symmetric matrices in M_{2×2} have the form

respectively, then a basis for E1 will be

and for E−1 will be

Thus we can use these four vectors as a basis to diagonalize T. We’ll call this basis β and let their ordering be the order in which they appear here. Confirming that this basis will diagonalize T we have the following.
$$
\begin{aligned}
[T]_\beta &= \begin{pmatrix} \left[T\begin{pmatrix}1&0\\0&0\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}0&0\\0&1\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}0&1\\1&0\end{pmatrix}\right]_\beta & \left[T\begin{pmatrix}0&1\\-1&0\end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix}1&0\\0&0\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}0&0\\0&1\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}0&1\\1&0\end{pmatrix}\right]_\beta & \left[\begin{pmatrix}0&-1\\1&0\end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\end{aligned}
$$
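A numerical check of this diagonalization (a verification sketch, not part of the original solution): represent the transpose map in the standard basis of M_{2×2}, change to the basis β, and confirm the result is diagonal.

```python
import numpy as np

# The transpose map on M_2x2 in the standard basis {E11, E12, E21, E22}:
# it fixes the E11 and E22 coordinates and swaps those of E12 and E21.
T_std = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]], dtype=float)

# Columns of Q are the beta basis matrices flattened as (a11, a12, a21, a22):
# (1 0; 0 0), (0 0; 0 1), (0 1; 1 0), (0 1; -1 0)
Q = np.array([[1, 0, 0,  0],
              [0, 0, 1,  1],
              [0, 0, 1, -1],
              [0, 1, 0,  0]], dtype=float)

D = np.linalg.inv(Q) @ T_std @ Q
print(np.round(D))   # diag(1, 1, 1, -1), matching [T]_beta above
```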
Generalizing what we did in the previous problem, a basis for E1 would be

where Aji is the matrix of all zeros except for ones at positions (i, j) and (j, i) (so a single 1 at (i, i) when i = j). And a basis for E−1 would be

where Bji is the matrix with a 1 at position (i, j) and a −1 at position (j, i), for i ≠ j. Then β would simply be the union of these two sets.
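As a consistency check (an added note), the sizes of these two bases account for all of M_{n×n}:
$$\dim E_1 = \frac{n(n+1)}{2}, \qquad \dim E_{-1} = \frac{n(n-1)}{2}, \qquad \frac{n(n+1)}{2} + \frac{n(n-1)}{2} = n^2 = \dim M_{n\times n}$$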