Math 312: Linear Algebra

Homework 3
Lawrence Tyler Rush
<me@tylerlogic.com>

August 2, 2012
http://coursework.tylerlogic.com/math312/homework03

1 Section 3.2


(a) Problem 5 (a), (d), and (g)


— (a) —

The matrix

\[ \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} \]

has rank two since it reduces to the identity matrix, so by reducing the following matrix

\[ \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right) \]

we can get the inverse on the right hand side.

\[ \left(\begin{array}{cc|cc} 1 & 0 & -1 & 2 \\ 0 & 1 & 1 & -1 \end{array}\right) \]
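As a quick sanity check (assuming sympy is available), the same augment-and-reduce computation can be reproduced programmatically:

    # Reduce (A | I) and read the inverse off the right half (sympy assumed available).
    from sympy import Matrix, eye

    A = Matrix([[1, 2],
                [1, 1]])
    reduced, _ = A.row_join(eye(2)).rref()   # rref of the augmented matrix (A | I)
    inverse = reduced[:, 2:]                 # right-hand 2x2 block
    print(inverse)                           # Matrix([[-1, 2], [1, -1]])
    print(inverse == A.inv())                # True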

— (d) —

The matrix

\[ \begin{pmatrix} 0 & -2 & 4 \\ 1 & 1 & -1 \\ 2 & 4 & -5 \end{pmatrix} \]

has rank three since it reduces to the identity matrix, so by reducing the following matrix

\[ \left(\begin{array}{ccc|ccc} 0 & -2 & 4 & 1 & 0 & 0 \\ 1 & 1 & -1 & 0 & 1 & 0 \\ 2 & 4 & -5 & 0 & 0 & 1 \end{array}\right) \]

we can get the inverse on the right hand side.

\[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -\tfrac{1}{2} & 3 & -1 \\ 0 & 1 & 0 & \tfrac{3}{2} & -4 & 2 \\ 0 & 0 & 1 & 1 & -2 & 1 \end{array}\right) \]

— (g) —

The matrix

\[ \begin{pmatrix} 1 & 2 & 1 & 0 \\ 2 & 5 & 5 & 1 \\ -2 & -3 & 0 & 3 \\ 3 & 4 & -2 & -3 \end{pmatrix} \]

has rank four since it reduces to the identity matrix, so by reducing the following matrix

\[ \left(\begin{array}{cccc|cccc} 1 & 2 & 1 & 0 & 1 & 0 & 0 & 0 \\ 2 & 5 & 5 & 1 & 0 & 1 & 0 & 0 \\ -2 & -3 & 0 & 3 & 0 & 0 & 1 & 0 \\ 3 & 4 & -2 & -3 & 0 & 0 & 0 & 1 \end{array}\right) \]

we can get the inverse on the right hand side.

\[ \left(\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & -51 & 15 & 7 & 12 \\ 0 & 1 & 0 & 0 & 31 & -9 & -4 & -7 \\ 0 & 0 & 1 & 0 & -10 & 3 & 1 & 2 \\ 0 & 0 & 0 & 1 & -3 & 1 & 1 & 1 \end{array}\right) \]

(b) Problem 6 (a)-(e)


— (a) —

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [-1]_E & [-x+2]_E & [-x^2+4x+2]_E \end{pmatrix} \\
&= \begin{pmatrix} -1 & 2 & 2 \\ 0 & -1 & 4 \\ 0 & 0 & -1 \end{pmatrix}
\end{align*}
This row reduces as
\[ \begin{pmatrix} -1 & 2 & 2 \\ 0 & -1 & 4 \\ 0 & 0 & -1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

and so has rank 3. Therefore T is invertible. So we can find the inverse by

\[ \left(\begin{array}{ccc|ccc} -1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -1 & 4 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 1 \end{array}\right) \rightsquigarrow \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -1 & -2 & -10 \\ 0 & 1 & 0 & 0 & -1 & -4 \\ 0 & 0 & 1 & 0 & 0 & -1 \end{array}\right) \]

so

\[ ([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} -1 & -2 & -10 \\ 0 & -1 & -4 \\ 0 & 0 & -1 \end{pmatrix} \]

From this we can get a formula for T^{-1}.

\[ T^{-1}(a+bx+cx^2) = aT^{-1}(1) + bT^{-1}(x) + cT^{-1}(x^2) = a(-1) + b(-x-2) + c(-x^2-4x-10) = -cx^2 - bx - 4cx - a - 2b - 10c \]
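As a quick check (assuming sympy is available), the matrix above really is ([T]_E)^{-1}, and applying it and then [T]_E to a generic coordinate vector of a + bx + cx² gives the vector back:

    # Verify [T]_E * [T^{-1}]_E = I and that the inverse undoes T on coordinates (sympy assumed).
    from sympy import Matrix, eye, symbols

    TE = Matrix([[-1, 2, 2], [0, -1, 4], [0, 0, -1]])
    TEinv = Matrix([[-1, -2, -10], [0, -1, -4], [0, 0, -1]])
    print(TE * TEinv == eye(3))                      # True

    a, b, c = symbols('a b c')
    coords = Matrix([a, b, c])                       # coordinates of a + bx + cx^2 in E = {1, x, x^2}
    print((TE * TEinv * coords - coords).expand())   # Matrix([[0], [0], [0]])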

— (b) —

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [0]_E & [x+1]_E & [2(x+1)x]_E \end{pmatrix} \\
&= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix}
\end{align*}
This row reduces as
\[ \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \]

and so has rank 2. So T is not invertible.

— (c) —

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [(1,-1,1)]_E & [(2,1,0)]_E & [(1,2,1)]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 2 & 1 \\ -1 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix}
\end{align*}
This row reduces as
\[ \begin{pmatrix} 1 & 2 & 1 \\ -1 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

and so has rank 3. Therefore T is invertible. So we can find the inverse by

\[ \left(\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ -1 & 1 & 2 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right) \rightsquigarrow \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{6} & -\tfrac{1}{3} & \tfrac{1}{2} \\ 0 & 1 & 0 & \tfrac{1}{2} & 0 & -\tfrac{1}{2} \\ 0 & 0 & 1 & -\tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{2} \end{array}\right) \]

so

\[ ([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} \tfrac{1}{6} & -\tfrac{1}{3} & \tfrac{1}{2} \\ \tfrac{1}{2} & 0 & -\tfrac{1}{2} \\ -\tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{2} \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 1 & -2 & 3 \\ 3 & 0 & -3 \\ -1 & 2 & 3 \end{pmatrix} \]

From this we can get a formula for T^{-1}.

\begin{align*}
T^{-1}(a(1,0,0)+b(0,1,0)+c(0,0,1)) &= aT^{-1}((1,0,0)) + bT^{-1}((0,1,0)) + cT^{-1}((0,0,1)) \\
&= a\left(\tfrac{1}{6},\tfrac{1}{2},-\tfrac{1}{6}\right) + b\left(-\tfrac{1}{3},0,\tfrac{1}{3}\right) + c\left(\tfrac{1}{2},-\tfrac{1}{2},\tfrac{1}{2}\right) \\
&= \tfrac{1}{6}\left(a-2b+3c,\ 3a-3c,\ -a+2b+3c\right)
\end{align*}

— (d) —

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [x^2+x+1]_E & [-x+1]_E & [x+1]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 0 & 0 \end{pmatrix}
\end{align*}
This row reduces as
\[ \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 0 & 0 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

and so has rank 3. Therefore T is invertible. So we can find the inverse by

\[ \left(\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & -1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right) \rightsquigarrow \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & 0 \\ 0 & 0 & 1 & \tfrac{1}{2} & \tfrac{1}{2} & -1 \end{array}\right) \]

so

\[ ([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} 0 & 0 & 1 \\ \tfrac{1}{2} & -\tfrac{1}{2} & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 0 & 0 & 2 \\ 1 & -1 & 0 \\ 1 & 1 & -2 \end{pmatrix} \]

From this we can get a formula for T^{-1}.

\begin{align*}
T^{-1}(a+bx+cx^2) &= aT^{-1}(1) + bT^{-1}(x) + cT^{-1}(x^2) \\
&= a\left(\tfrac{1}{2}x^2+\tfrac{1}{2}x\right) + b\left(\tfrac{1}{2}x^2-\tfrac{1}{2}x\right) + c\left(-x^2+1\right) \\
&= \tfrac{1}{2}ax^2 + \tfrac{1}{2}bx^2 - cx^2 + \tfrac{1}{2}ax - \tfrac{1}{2}bx + c
\end{align*}

— (e) —

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} \\
&= \begin{pmatrix} [(1,1,1)]_E & [(-1,0,1)]_E & [(1,0,1)]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}
\end{align*}
This row reduces as
\[ \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]

and so has rank 3. Therefore T is invertible. So we can find the inverse by

\[ \left(\begin{array}{ccc|ccc} 1 & -1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{array}\right) \rightsquigarrow \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -\tfrac{1}{2} & 0 & \tfrac{1}{2} \\ 0 & 0 & 1 & \tfrac{1}{2} & -1 & \tfrac{1}{2} \end{array}\right) \]

so

\[ ([T]_E)^{-1} = [T^{-1}]_E = \begin{pmatrix} 0 & 1 & 0 \\ -\tfrac{1}{2} & 0 & \tfrac{1}{2} \\ \tfrac{1}{2} & -1 & \tfrac{1}{2} \end{pmatrix} \]

From this we can get a formula for T^{-1}.

\begin{align*}
T^{-1}(a(1,0,0)+b(0,1,0)+c(0,0,1)) &= aT^{-1}(1,0,0) + bT^{-1}(0,1,0) + cT^{-1}(0,0,1) \\
&= a\left(0,-\tfrac{1}{2},\tfrac{1}{2}\right) + b\,(1,0,-1) + c\left(0,\tfrac{1}{2},\tfrac{1}{2}\right) \\
&= \tfrac{1}{2}\left(2b,\ -a+c,\ a-2b+c\right)
\end{align*}

2 Section 3.3


(a) Problem 4


— (a) —

From the coefficient matrix

\[ A = \begin{pmatrix} 1 & 3 \\ 2 & 5 \end{pmatrix} \]

we can reduce the following matrix

\[ (A\,|\,I_2) = \left(\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 2 & 5 & 0 & 1 \end{array}\right) \]

to

\[ (I_2\,|\,A^{-1}) = \left(\begin{array}{cc|cc} 1 & 0 & -5 & 3 \\ 0 & 1 & 2 & -1 \end{array}\right) \]

getting the inverse. We can then use the inverse to get the unique solution of the equation

\[ A^{-1}\begin{pmatrix} 4 \\ 3 \end{pmatrix} = \begin{pmatrix} -5 & 3 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} 4 \\ 3 \end{pmatrix} = \begin{pmatrix} -11 \\ 5 \end{pmatrix} \]
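The same computation, double-checked with sympy (assuming it is available):

    # Verify the inverse and the unique solution of part (a) (sympy assumed).
    from sympy import Matrix

    A = Matrix([[1, 3], [2, 5]])
    b = Matrix([4, 3])
    print(A.inv())            # Matrix([[-5, 3], [2, -1]])
    x = A.inv() * b
    print(x)                  # Matrix([[-11], [5]])
    print(A * x == b)         # True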

— (b) —

From the coefficient matrix

\[ A = \begin{pmatrix} 1 & 2 & -1 \\ 1 & 1 & 1 \\ 2 & -2 & 1 \end{pmatrix} \]

we can reduce the following matrix

\[ (A\,|\,I_3) = \left(\begin{array}{ccc|ccc} 1 & 2 & -1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 1 & 0 \\ 2 & -2 & 1 & 0 & 0 & 1 \end{array}\right) \]

to

\[ (I_3\,|\,A^{-1}) = \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{3} & 0 & \tfrac{1}{3} \\ 0 & 1 & 0 & \tfrac{1}{9} & \tfrac{1}{3} & -\tfrac{2}{9} \\ 0 & 0 & 1 & -\tfrac{4}{9} & \tfrac{2}{3} & -\tfrac{1}{9} \end{array}\right) \]

getting the inverse. We can then use the inverse to get the unique solution of the equation

\[ A^{-1}\begin{pmatrix} 5 \\ 1 \\ 4 \end{pmatrix} = \frac{1}{9}\begin{pmatrix} 3 & 0 & 3 \\ 1 & 3 & -2 \\ -4 & 6 & -1 \end{pmatrix}\begin{pmatrix} 5 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ -2 \end{pmatrix} \]

(b) Problem 8


For the given T ∈ 𝓛(ℝ³, ℝ³), determining whether a specified v ∈ ℝ³ is in R(T) is to determine whether there exists an x ∈ ℝ³ such that T(x) = v. In other words, it is to determine whether there is an x such that
\[ [T(x)]_B = [T]_B[x]_B = [v]_B \]

for some basis B, which for simplicity of computation we will choose to be the standard basis E. The matrix representation of T in the standard basis is

\[ [T]_E = \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & -2 \\ 1 & 0 & 2 \end{pmatrix} \]

which we will augment with v and row reduce to determine if v is in the image of T.

— (a) —

Augmenting [T]E with [v]E = [(1,3, − 2)]E we have

\[ \left(\begin{array}{ccc|c} 1 & 1 & 0 & 1 \\ 0 & 1 & -2 & 3 \\ 1 & 0 & 2 & -2 \end{array}\right) \]

which row reduces to

\[ \left(\begin{array}{ccc|c} 1 & 0 & 2 & -2 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right) \]

and so, since the system is consistent, v is in the image of T.

— (b) —

Augmenting [T]E with [v]E = [(2,1,1)]E we have

\[ \left(\begin{array}{ccc|c} 1 & 1 & 0 & 2 \\ 0 & 1 & -2 & 1 \\ 1 & 0 & 2 & 1 \end{array}\right) \]

which row reduces to

\[ \left(\begin{array}{ccc|c} 1 & 0 & 2 & 1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right) \]

and so, since the system is consistent, v is in the image of T.
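The same membership test can be phrased as a rank comparison: v ∈ R(T) exactly when augmenting [T]_E with [v]_E does not raise the rank. A small sympy sketch (assuming sympy is available):

    # Consistency of ([T]_E | v) checked by comparing ranks (sympy assumed).
    from sympy import Matrix

    TE = Matrix([[1, 1, 0],
                 [0, 1, -2],
                 [1, 0, 2]])
    for v in (Matrix([1, 3, -2]), Matrix([2, 1, 1])):
        print(TE.row_join(v).rank() == TE.rank())   # True for both (a) and (b)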

3 Section 4.1


(a) Problem 1


— (a) —

False. Not closed under addition:

\[ \det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \det\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = 1 + 1 = 2 \]

but

\[ \det\left(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\right) = \det\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = 0 \]

— (b) —

True. This is Theorem 4.1 [1, p. 200].

— (c) —

False. The determinant of a two-by-two matrix is nonzero if and only if it is invertible, Theorem 4.2 [1, p. 201].

— (d) —

False. It is the absolute value of the determinant specified.

— (e) —

True.

(b) Problem 2


— (a) —

\[ \det\begin{pmatrix} 6 & -3 \\ 2 & 4 \end{pmatrix} = 6(4) - (-3)(2) = 30 \]

— (b) —

\[ \det\begin{pmatrix} -5 & 2 \\ 6 & 1 \end{pmatrix} = -5(1) - 2(6) = -17 \]

— (c) —

\[ \det\begin{pmatrix} 8 & 0 \\ 3 & -1 \end{pmatrix} = 8(-1) - 0(3) = -8 \]

(c) Problem 3


— (a) —

\[ \det\begin{pmatrix} i-1 & -4i+1 \\ 2i+3 & -3i+2 \end{pmatrix} = (i-1)(-3i+2) - (-4i+1)(2i+3) = 15i - 10 \]

— (b) —

\[ \det\begin{pmatrix} -2i+5 & 4i+6 \\ i-3 & 7i \end{pmatrix} = (-2i+5)(7i) - (4i+6)(i-3) = 41i + 36 \]

— (c) —

\[ \det\begin{pmatrix} 2i & 3 \\ 4 & 6i \end{pmatrix} = (2i)(6i) - (3)(4) = -24 \]

(d) Problem 4


— (a) —

For u = (3, − 2) and v = (2,5), the area of the parallelogram determined by u and v is

\[ \left|\det\begin{pmatrix} 3 & 2 \\ -2 & 5 \end{pmatrix}\right| = |19| = 19 \]

— (b) —

For u = (1,3) and v = (− 3,1), the area of the parallelogram determined by u and v is

\[ \left|\det\begin{pmatrix} 1 & -3 \\ 3 & 1 \end{pmatrix}\right| = |10| = 10 \]

— (c) —

For u = (4, − 1) and v = (− 6, − 2), the area of the parallelogram determined by u and v is

\[ \left|\det\begin{pmatrix} 4 & -6 \\ -1 & -2 \end{pmatrix}\right| = |-14| = 14 \]

— (d) —

For u = (3,4) and v = (2, − 6), the area of the parallelogram determined by u and v is

\[ \left|\det\begin{pmatrix} 3 & 2 \\ 4 & -6 \end{pmatrix}\right| = |-26| = 26 \]
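The same four areas, recomputed as absolute determinants in sympy (assuming it is available):

    # |det| of the matrix whose columns are u and v, for each pair above (sympy assumed).
    from sympy import Matrix, Abs

    pairs = [((3, -2), (2, 5)), ((1, 3), (-3, 1)), ((4, -1), (-6, -2)), ((3, 4), (2, -6))]
    for u, v in pairs:
        M = Matrix([[u[0], v[0]],
                    [u[1], v[1]]])
        print(Abs(M.det()))    # 19, 10, 14, 26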

4 Section 4.2


(a) Problem 1


— (a) —

False. The set is not closed under addition; since the n × n determinant agrees with the two-by-two determinant in the two-dimensional case, the same counterexample works:

\[ \det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \det\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = 1 + 1 = 2 \]

but

\[ \det\left(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\right) = \det\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = 0 \]

— (b) —

True. Theorem 4.4 [1, p. 215]

— (c) —

True. Corollary to Theorem 4.4 [1, p. 215]

— (d) —

True. Theorem 4.5 [1, p. 216]

— (e) —

False. det(B) = k det(A)

— (f) —

False. det(B) = det(A)

— (g) —

False. A is invertible and therefore has nonzero determinant.

— (h) —

True.

(b) Problem 2


A three can be taken out of each row while keeping the other rows fixed, so since there are three rows and each is a multiple of three, we have k = 3³ = 27.

(c) Problem 3


We can add 5/7 times the third row to the second and the determinant will remain unchanged. This results in
\[ \begin{pmatrix} 2a_1 & 2a_2 & 2a_3 \\ 3b_1 & 3b_2 & 3b_3 \\ 7c_1 & 7c_2 & 7c_3 \end{pmatrix} \]

from which we can remove the multiple of each row one at a time, and k will be their product, 2(3)(7) = 42.

(d) Problem 4


Subtract rows two and three from the first to get
\[ \begin{pmatrix} -2a_1 & -2a_2 & -2a_3 \\ a_1+c_1 & a_2+c_2 & a_3+c_3 \\ a_1+b_1 & a_2+b_2 & a_3+b_3 \end{pmatrix} \]

which does not change the determinant. We can then add one-half of row one to both rows two and three, and this will not change the determinant. This results in the following matrix.

\[ \begin{pmatrix} -2a_1 & -2a_2 & -2a_3 \\ c_1 & c_2 & c_3 \\ b_1 & b_2 & b_3 \end{pmatrix} \]

Now we factor the −2 out of the first row and swap the second and third rows, which negates the determinant, making for a k of (−2)(−1) = 2.
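A symbolic confirmation of k = 2 (assuming sympy is available), with the original rows written as (b+c, a+c, a+b), which is what the row operations above imply they were:

    # det of the matrix with rows b+c, a+c, a+b equals 2 * det of the matrix with rows a, b, c.
    from sympy import Matrix, symbols, expand

    a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1:4 b1:4 c1:4')
    A = Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])
    M = Matrix([[b1 + c1, b2 + c2, b3 + c3],
                [a1 + c1, a2 + c2, a3 + c3],
                [a1 + b1, a2 + b2, a3 + b3]])
    print(expand(M.det() - 2 * A.det()))   # 0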

(e) Problem 6


\[ \det\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 5 \\ -1 & 3 & 0 \end{pmatrix} = 1\det\begin{pmatrix} 1 & 5 \\ 3 & 0 \end{pmatrix} - 0\det\begin{pmatrix} 0 & 5 \\ -1 & 0 \end{pmatrix} + 2\det\begin{pmatrix} 0 & 1 \\ -1 & 3 \end{pmatrix} = -13 \]

(f) Problem 10


\[ \det\begin{pmatrix} i & i+2 & 0 \\ -1 & 3 & 2i \\ 0 & -1 & -i+1 \end{pmatrix} = -(-1)\det\begin{pmatrix} i+2 & 0 \\ -1 & -i+1 \end{pmatrix} + 3\det\begin{pmatrix} i & 0 \\ 0 & -i+1 \end{pmatrix} - 2i\det\begin{pmatrix} i & i+2 \\ 0 & -1 \end{pmatrix} = 2i + 4 \]

(g) Problem 12


Since adding a multiple of one row to another does not alter the determinant of a matrix, we can make the process of finding the determinant easier by zeroing out the rows of column one below the first, then the rows of column two below the second, and so forth in order to obtain an upper triangular matrix.
\[ \begin{pmatrix} 1 & -1 & 2 & -1 \\ -3 & 4 & 1 & -1 \\ 2 & -5 & -3 & 8 \\ -2 & 6 & -4 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -1 & 2 & -1 \\ 0 & 1 & 7 & -4 \\ 0 & -3 & -7 & 10 \\ 0 & 4 & 0 & -1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -1 & 2 & -1 \\ 0 & 1 & 7 & -4 \\ 0 & 0 & 14 & -2 \\ 0 & 0 & -28 & 15 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -1 & 2 & -1 \\ 0 & 1 & 7 & -4 \\ 0 & 0 & 14 & -2 \\ 0 & 0 & 0 & 11 \end{pmatrix} \]

With this, the determinant is simply 14(11) = 154.
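A direct check of that value (assuming sympy is available):

    # The determinant computed directly, without the elimination steps (sympy assumed).
    from sympy import Matrix

    A = Matrix([[ 1, -1,  2, -1],
                [-3,  4,  1, -1],
                [ 2, -5, -3,  8],
                [-2,  6, -4,  1]])
    print(A.det())   # 154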

(h) Problem 14


Expanding across the third row.
\[ \det\begin{pmatrix} 2 & 3 & 4 \\ 5 & 6 & 0 \\ 7 & 0 & 0 \end{pmatrix} = 7\det\begin{pmatrix} 3 & 4 \\ 6 & 0 \end{pmatrix} = -168 \]

(i) Problem 18


Add the first row to the second row to get
\[ \begin{pmatrix} 1 & -2 & 3 \\ 0 & 0 & -2 \\ 3 & -1 & 2 \end{pmatrix} \]

and now we can easily expand across the second row since the determinant has been unchanged.

\[ \det\begin{pmatrix} 1 & -2 & 3 \\ 0 & 0 & -2 \\ 3 & -1 & 2 \end{pmatrix} = -(-2)\det\begin{pmatrix} 1 & -2 \\ 3 & -1 \end{pmatrix} = 10 \]

(j) Problem 20


Adding 3 times the second row to the first and 1/i times the second row to the third does not change the determinant and results in
\[ \begin{pmatrix} 3i-4 & -2i+2 & 0 \\ -i+1 & i & 1 \\ i & i+3 & 0 \end{pmatrix} \]

which means that, expanding along the third column,

\[ \det\begin{pmatrix} 3i-4 & -2i+2 & 0 \\ -i+1 & i & 1 \\ i & i+3 & 0 \end{pmatrix} = -1\det\begin{pmatrix} 3i-4 & -2i+2 \\ i & i+3 \end{pmatrix} = -3i + 17 \]

(k) Problem 21


Adding -4 times the second row to the third and -3 times the second row to the fourth, we have.
\[ \begin{pmatrix} 1 & 0 & -2 & 3 \\ -3 & 1 & 1 & 2 \\ 12 & 0 & -5 & -7 \\ 11 & 0 & -3 & -5 \end{pmatrix} \]

which means that

\[ \det\begin{pmatrix} 1 & 0 & -2 & 3 \\ -3 & 1 & 1 & 2 \\ 12 & 0 & -5 & -7 \\ 11 & 0 & -3 & -5 \end{pmatrix} = 1\det\begin{pmatrix} 1 & -2 & 3 \\ 12 & -5 & -7 \\ 11 & -3 & -5 \end{pmatrix} = 95 \]

(l) Problem 26


For A ∈ M_{n×n}(F), det(−A) = (−1)^n det(A), since we would extract a −1 from each of the n rows, one at a time, resulting in a product of n factors of −1. So given this, det(−A) = det(A) when n is an even number, which makes (−1)^n equal to one.

(m) Problem 30


The matrix B can be obtained from A by swapping rows a_1 and a_n, then a_2 and a_{n−1}, and so forth. Since swapping two rows negates the determinant and there are, in this case, (n−1)/2 swaps for odd n and n/2 swaps for even n, then
\[ \det(B) = (-1)^{\frac{n-1}{2}}\det(A) \]

if n is odd, and

\[ \det(B) = (-1)^{\frac{n}{2}}\det(A) \]

if n is even.
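Since (n−1)/2 for odd n and n/2 for even n are both ⌊n/2⌋, the sign pattern can be spot-checked for a few sizes (assuming sympy is available):

    # Reversing the row order multiplies the determinant by (-1)**(n // 2) (sympy assumed).
    from sympy import Matrix

    for n in range(2, 7):
        A = Matrix(n, n, lambda i, j: (i + 1) ** (j + 1))           # an arbitrary invertible test matrix
        B = A.extract(list(range(n - 1, -1, -1)), list(range(n)))   # rows of A in reverse order
        assert B.det() == (-1) ** (n // 2) * A.det()
    print("det(B) = (-1)**floor(n/2) * det(A) for n = 2, ..., 6")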

5 Section 4.3


(a) Problem 1


— (a) —

False. Multiplying a row of the identity matrix by a scalar will result in the determinant of the resulting matrix being multiplied by the same scalar since the determinant is linear in a given row.

— (b) —

True. This is Theorem 4.7 [1, p. 223]

— (c) —

False. A matrix is invertible if and only if its determinant is nonzero.

— (d) —

True. If an n × n matrix has full rank then it is invertible, and therefore has nonzero determinant.

— (e) —

False. The n×n identity matrix is equal to its transpose, so the two have equal determinants; the determinants are not additive inverses of each other.

— (f) —

True. The determinant of a matrix can be evaluated using cofactor expansion along any row and the determinant of a matrix is equal to the determinant of its transpose.

— (g) —

False. The coefficient matrix must have nonzero determinant since we will be dividing by the determinant.

— (h) —

False. Let

\[ A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \]

and b = (1,1)t. This system then has a unique solution of x1 = 0 and x2 = 1, but with this alternate definition of Mk,

\[ M_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \]

which would imply that x2 = 0, which is false.

(b) Problem 2


In this case
\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad M_1 = \begin{pmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{pmatrix}, \quad\text{and}\quad M_2 = \begin{pmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{pmatrix} \]

so

\[ x_1 = \frac{b_1 a_{22} - a_{12} b_2}{a_{11}a_{22} - a_{12}a_{21}} \quad\text{and}\quad x_2 = \frac{a_{11}b_2 - b_1 a_{21}}{a_{11}a_{22} - a_{12}a_{21}} \]
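A symbolic check that these formulas do solve the 2×2 system (assuming sympy is available):

    # Plug the Cramer's rule expressions back into A x = b and simplify (sympy assumed).
    from sympy import Matrix, symbols, simplify

    a11, a12, a21, a22, b1, b2 = symbols('a11 a12 a21 a22 b1 b2')
    A = Matrix([[a11, a12], [a21, a22]])
    b = Matrix([b1, b2])
    det = a11 * a22 - a12 * a21
    x = Matrix([(b1 * a22 - a12 * b2) / det,
                (a11 * b2 - b1 * a21) / det])
    print((A * x - b).applyfunc(simplify))   # Matrix([[0], [0]]) whenever det != 0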

(c) Problem 4


We have that
\[ A = \begin{pmatrix} 2 & 1 & -3 \\ 1 & -2 & 1 \\ 3 & 4 & -2 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 1 & 1 & -3 \\ 0 & -2 & 1 \\ -5 & 4 & -2 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 2 & 1 & -3 \\ 1 & 0 & 1 \\ 3 & -5 & -2 \end{pmatrix}, \quad\text{and}\quad M_3 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & -2 & 0 \\ 3 & 4 & -5 \end{pmatrix} \]

then since det(A) = −25 we have

\begin{align*}
x_1 &= -\tfrac{1}{25}\det(M_1) = -\tfrac{1}{25}\det\begin{pmatrix} 1 & 1 & -3 \\ 0 & -2 & 1 \\ -5 & 4 & -2 \end{pmatrix} = -1 \\
x_2 &= -\tfrac{1}{25}\det(M_2) = -\tfrac{1}{25}\det\begin{pmatrix} 2 & 1 & -3 \\ 1 & 0 & 1 \\ 3 & -5 & -2 \end{pmatrix} = -\tfrac{6}{5} \\
x_3 &= -\tfrac{1}{25}\det(M_3) = -\tfrac{1}{25}\det\begin{pmatrix} 2 & 1 & 1 \\ 1 & -2 & 0 \\ 3 & 4 & -5 \end{pmatrix} = -\tfrac{7}{5}
\end{align*}
Confirming our solution we see that
\[ (A\,|\,b) = \left(\begin{array}{ccc|c} 2 & 1 & -3 & 1 \\ 1 & -2 & 1 & 0 \\ 3 & 4 & -2 & -5 \end{array}\right) \rightsquigarrow \frac{1}{5}\left(\begin{array}{ccc|c} 5 & 0 & 0 & -5 \\ 0 & 5 & 0 & -6 \\ 0 & 0 & 5 & -7 \end{array}\right) \]
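The same answer, double-checked against a direct solve (assuming sympy is available):

    # Cramer's rule answer vs. a direct linear solve (sympy assumed).
    from sympy import Matrix, Rational

    A = Matrix([[2, 1, -3], [1, -2, 1], [3, 4, -2]])
    b = Matrix([1, 0, -5])
    print(A.det())                                     # -25
    print(A.LUsolve(b).T)                              # Matrix([[-1, -6/5, -7/5]])
    x = Matrix([-1, Rational(-6, 5), Rational(-7, 5)])
    print(A * x == b)                                  # True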

(d) Problem 9


The determinant of an upper triangular matrix is the product of its diagonal entries, thus in order for an upper triangular n × n matrix to be invertible, i.e. have nonzero determinant, all entries on the diagonal must be nonzero.

(e) Problem 10


Let M be an n × n nilpotent matrix. Then there exists a k ∈ ℤ⁺ such that M^k = 0. This implies that
\[ \underbrace{\det(M)\det(M)\cdots\det(M)}_{k} = \det(M^k) = \det(0) = 0 \]

and therefore det(M) = 0.

(f) Problem 12


Let a square matrix Q be orthogonal. Then QQ^t = I and therefore
\[ 1 = \det(I) = \det(QQ^t) = \det(Q)\det(Q^t) = \det(Q)^2 \]

so det(Q) must be ±1.

(g) Problem 14


Let β = {u_1, …, u_n} and B be an n×n square matrix with the jth column of B being u_j. If β were to be a basis for F^n, then the columns of B would be linearly independent of each other, i.e. B would have full rank. Thus B would be invertible and therefore have nonzero determinant. Conversely, if B has nonzero determinant then it's invertible, which implies its rank is n. Since n is the number of columns which B has, the columns are linearly independent, i.e. the vectors in β are linearly independent. Since there are n vectors in β, it's a basis for F^n.

(h) Problem 20


Let M ∈ M_{n×n}(F) be such that for some A ∈ M_{k×k}(F) and B ∈ M_{k×(n−k)}(F)
\[ M = \begin{pmatrix} A & B \\ 0 & I_{n-k} \end{pmatrix} \]

Then, expanding along the bottom row of M (whose only nonzero entry is the final 1 of I_{n−k}), the determinant is

\[ \det(M) = 1\det\begin{pmatrix} A & B_{rc1} \\ 0 & I_{n-k-1} \end{pmatrix} \]

where B_{rc1} is B without the rightmost column. Then we get

\[ \det(M) = 1(1)\det\begin{pmatrix} A & B_{rc2} \\ 0 & I_{n-k-2} \end{pmatrix} \]

where B_{rc2} is B without the two rightmost columns. This can be done a total of n − k times until

\[ \det(M) = \underbrace{1(1)\cdots(1)}_{n-k}\det\begin{pmatrix} A & B_{rc(n-k)} \\ 0 & I_0 \end{pmatrix} \]

is reached, but the matrix on the right hand side above is just A, so det(M) = det(A).

(i) Problem 22


— (a) —

\begin{align*}
[T]^\gamma_\beta &= \begin{pmatrix} [T(1)]_\gamma & [T(x)]_\gamma & [T(x^2)]_\gamma & \cdots & [T(x^n)]_\gamma \end{pmatrix} \\
&= \begin{pmatrix} [(1,\ldots,1)]_\gamma & [(c_0,\ldots,c_n)]_\gamma & [(c_0^2,\ldots,c_n^2)]_\gamma & \cdots & [(c_0^n,\ldots,c_n^n)]_\gamma \end{pmatrix} \\
&= \begin{pmatrix} 1 & c_0 & c_0^2 & \cdots & c_0^n \\ 1 & c_1 & c_1^2 & \cdots & c_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & c_n & c_n^2 & \cdots & c_n^n \end{pmatrix}
\end{align*}

— (b) —

In the previous homework, we proved that T here is an isomorphism (exercise 22, section 2.4). Due to this, M is an invertible matrix, and thus det(M) ≠ 0.

— (c) —

In the two-by-two case, ∏_{0≤i<j≤1}(c_j − c_i) = c_1 − c_0 is indeed the determinant of the two-by-two Vandermonde matrix
\[ \begin{pmatrix} 1 & c_0 \\ 1 & c_1 \end{pmatrix} \]

So assume that the Vandermonde determinant formula holds for all sizes smaller than n. Beginning with

\[ \det(M) = \det\begin{pmatrix} 1 & c_0 & c_0^2 & \cdots & c_0^n \\ 1 & c_1 & c_1^2 & \cdots & c_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & c_n & c_n^2 & \cdots & c_n^n \end{pmatrix} \]

then subtracting the first row from the others

\[ \det(M) = \det\begin{pmatrix} 1 & c_0 & c_0^2 & \cdots & c_0^{n-1} & c_0^n \\ 0 & c_1-c_0 & c_1^2-c_0^2 & \cdots & c_1^{n-1}-c_0^{n-1} & c_1^n-c_0^n \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & c_n-c_0 & c_n^2-c_0^2 & \cdots & c_n^{n-1}-c_0^{n-1} & c_n^n-c_0^n \end{pmatrix} \]

for each column except the leftmost, subtracting c0 times the column to its immediate left

\[ \det(M) = \det\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & c_1-c_0 & (c_1-c_0)c_1 & \cdots & (c_1-c_0)c_1^{n-2} & (c_1-c_0)c_1^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & c_n-c_0 & (c_n-c_0)c_n & \cdots & (c_n-c_0)c_n^{n-2} & (c_n-c_0)c_n^{n-1} \end{pmatrix} \]

and then factoring out the c_j − c_0 term of each row j, we are left with

\[ \det(M) = \left(\prod_{i=1}^{n}(c_i-c_0)\right)\det\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & c_1 & \cdots & c_1^{n-2} & c_1^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 1 & c_n & \cdots & c_n^{n-2} & c_n^{n-1} \end{pmatrix} \]

The matrix resulting from removing the first row and column is an n × n Vandermonde matrix in the nodes c_1, …, c_n, so we get

\[ \det(M) = \left(\prod_{i=1}^{n}(c_i-c_0)\right)\left(\prod_{1\le i<j\le n}(c_j-c_i)\right) = \prod_{0\le i<j\le n}(c_j-c_i) \]

by our induction hypothesis.
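A quick symbolic spot-check of the formula for a small case, here four nodes c0, …, c3 (assuming sympy is available):

    # Vandermonde determinant vs. the product formula for 4 symbolic nodes (sympy assumed).
    from sympy import Matrix, symbols

    c = symbols('c0:4')
    size = len(c)
    M = Matrix([[c[i] ** j for j in range(size)] for i in range(size)])
    formula = 1
    for i in range(size):
        for j in range(i + 1, size):
            formula *= c[j] - c[i]
    print((M.det() - formula).expand())   # 0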

6 Section 5.1


(a) Problem 1


— (a) —

False. The identity matrix has only one eigenvalue, 1.

— (b) —

True. Since the eigenspace of an eigenvalue is a vector space, it must be closed under scalar multiplication, which means that the space has infinite cardinality when the field in question is ℝ.

— (c) —

True. For example, the 90° rotation matrix on ℝ² has no eigenvectors, since its characteristic polynomial t² + 1 has no real roots.

— (d) —

False. Eigenvectors must be nonzero by definition, but eigenvalues need not be: any singular matrix has 0 as an eigenvalue.

— (e) —

False. A multiple of an eigenvector is another eigenvector, so one must only choose two eigenvectors that are multiples of each other for a counter-example.

— (f) —

False. Regarding the characteristic polynomial, this would imply that the sum of two roots of a polynomial is also a root, which is not the case.

— (g) —

False. The identity operator has an eigenvalue of one on ANY vector space.

— (h) —

True. Theorem 5.1 [1, p. 246].

— (i) —

True. Let A and B be similar matrices. Then there is a Q such that A = Q^{-1}BQ. From this we get

\[ tI - A = tI - Q^{-1}BQ = tQ^{-1}Q - Q^{-1}BQ = Q^{-1}(tI - B)Q \]

which informs us that the matrices (tI − A) and (tI − B) are similar. Since similar matrices have the same determinant, and these two determinants are the characteristic polynomials of A and B, then A and B have the same eigenvalues.

— (j) —

False. The matrices

\[ \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & -1 \\ 1 & 2 \end{pmatrix} \]

are similar to each other by

\[ Q = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \]

and also have the same single eigenvalue of 1, but the eigenvectors of the first are the nonzero multiples of (0,1)^t while the eigenvectors of the second are the nonzero multiples of (1,−1)^t.

— (k) —

False. If v is an eigenvector for some matrix, then so is −v, since the eigenspace of an eigenvalue is a vector space; but the sum of these two vectors is zero, which is not an eigenvector, by definition.

(b) Problem 2 (b), (e), and (f)


— (b) —

Because [T]β is a diagonal matrix, then β is a basis of eigenvectors of T.

\[ [T]_\beta = \begin{pmatrix} [T(3+4x)]_\beta & [T(2+3x)]_\beta \end{pmatrix} = \begin{pmatrix} [-6-8x]_\beta & [-6-9x]_\beta \end{pmatrix} = \begin{pmatrix} -2 & 0 \\ 0 & -3 \end{pmatrix} \]

— (e) —

Because [T]β is not diagonal, then β is not a basis consisting of eigenvectors of T.

\begin{align*}
[T]_\beta &= \begin{pmatrix} [T(x^3-x+1)]_\beta & [T(x^2+1)]_\beta & [T(1)]_\beta & [T(x^2+x)]_\beta \end{pmatrix} \\
&= \begin{pmatrix} [-x^3+x-1]_\beta & [x^3-x^2-x]_\beta & [x^2]_\beta & [-x^2-x]_\beta \end{pmatrix} \\
&= \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\end{align*}

— (f) —

Because [T]β is a diagonal matrix, then β is a basis of eigenvectors of T.

\begin{align*}
[T]_\beta &= \begin{pmatrix} \left[T\begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} -1 & 2 \\ 0 & 0 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix} -3 & 0 \\ -3 & 0 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} -1 & 2 \\ 0 & 0 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} -3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\end{align*}

(c) Problem 3


— (a) —

\[ \det(tI - A) = \det\begin{pmatrix} t-1 & -2 \\ -3 & t-2 \end{pmatrix} = t^2 - 3t - 4 \]

This has roots of 4 and −1, which are the eigenvalues. For these eigenvalues we have

\[ 4I - A = \begin{pmatrix} 3 & -2 \\ -3 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -\tfrac{2}{3} \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad -1I - A = \begin{pmatrix} -2 & -2 \\ -3 & -3 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \]

and therefore E_4 = Span{(2,3)^t} and E_{−1} = Span{(−1,1)^t}. Since these correspond to two distinct eigenvalues, (2,3)^t and (−1,1)^t are linearly independent and will form a basis. Thus

\[ Q = \begin{pmatrix} 2 & -1 \\ 3 & 1 \end{pmatrix} \quad\text{and}\quad D = Q^{-1}AQ = \begin{pmatrix} 4 & 0 \\ 0 & -1 \end{pmatrix} \]
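A quick check of this diagonalization (assuming sympy is available), with A read off from the tI − A above, i.e. A = [[1, 2], [3, 2]]:

    # Verify Q^{-1} A Q = D and the eigenvalues 4, -1 (sympy assumed).
    from sympy import Matrix

    A = Matrix([[1, 2], [3, 2]])
    Q = Matrix([[2, -1], [3, 1]])
    print(Q.inv() * A * Q)    # Matrix([[4, 0], [0, -1]])
    print(A.eigenvals())      # {4: 1, -1: 1}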

— (b) —

\[ \det(tI - A) = \det\begin{pmatrix} t & 2 & 3 \\ 1 & t-1 & 1 \\ -2 & -2 & t-5 \end{pmatrix} = t^3 - 6t^2 + 11t - 6 \]

This has roots of 1, 2, and 3, which are the eigenvalues. For these eigenvalues we have

\[ 1I - A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \\ -2 & -2 & -4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix} \]

\[ 2I - A = \begin{pmatrix} 2 & 2 & 3 \\ 1 & 1 & 1 \\ -2 & -2 & -3 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad 3I - A = \begin{pmatrix} 3 & 2 & 3 \\ 1 & 2 & 1 \\ -2 & -2 & -2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \]

and therefore E_1 = Span{(−1,−1,1)^t}, E_2 = Span{(−1,1,0)^t}, and E_3 = Span{(−1,0,1)^t}. Since these correspond to three distinct eigenvalues, (−1,−1,1)^t, (−1,1,0)^t, (−1,0,1)^t are linearly independent and will form a basis. Thus

\[ Q = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \quad\text{and}\quad D = Q^{-1}AQ = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \]

— (c) —

               (            )
                  t− i  − 1      2
det(tI − A ) = det  − 2 t+ i   = t − 1

This has roots of 1 and 1 which are the eigenvalues. For these eigenvalues we have

          (              )    (      1   1 )             (              )    (      1   1 )
− 1I − A =   − i− 1   − 1   ⇝    1  − 2i+ 2   and 1I − A =   − i+ 1   − 1  ⇝    1  − 2i− 2
               − 2  i− 1        0        0                     − 2 i+ 1        0        0

and therefore E1 = Span{(i− 1,2)t} and E1 = Span{(i+ 1,2)t}. Since there are two distinct eigenvalues, (i− 1,2)t,(i+ 1,2)t are linearly independent and will form a basis. Thus

    (            )                   (       )
Q =   i − 1 i+ 1    and D = Q −1AQ =   − 1 0
         2     2                         0 1

— (d) —

\[ \det(tI - A) = \det\begin{pmatrix} t-2 & 0 & 1 \\ -4 & t-1 & 4 \\ -2 & 0 & t+1 \end{pmatrix} = t^3 - 2t^2 + t \]

This has roots of 1 and 0, but the first has multiplicity of 2. For these eigenvalues we have

\[ 1I - A = \begin{pmatrix} -1 & 0 & 1 \\ -4 & 0 & 4 \\ -2 & 0 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad 0I - A = \begin{pmatrix} -2 & 0 & 1 \\ -4 & -1 & 4 \\ -2 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & -\tfrac{1}{2} \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix} \]

and therefore E_1 = Span{(0,1,0)^t, (1,0,1)^t} and E_0 = Span{(1,4,2)^t}. These eigenvectors will all be linearly independent and form a basis resulting in the following.

\[ Q = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 4 \\ 0 & 1 & 2 \end{pmatrix} \quad\text{and}\quad D = Q^{-1}AQ = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \]

(d) Problem 4 (a), (f), (i), and (j)


— (a) —

Taking E to be

{(1,0), (0,1)}

the representation of T in the standard basis:

\[ [T]_E = \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E \end{pmatrix} = \begin{pmatrix} [(-2,-10)]_E & [(3,9)]_E \end{pmatrix} = \begin{pmatrix} -2 & 3 \\ -10 & 9 \end{pmatrix} \]

then the characteristic polynomial is

\[ \det(tI - [T]_E) = \det\begin{pmatrix} t+2 & -3 \\ 10 & t-9 \end{pmatrix} = t^2 - 7t + 12 = (t-4)(t-3) \]

which has roots 3 and 4, and thus these are the eigenvalues. Since

\[ \begin{pmatrix} t+2 & -3 \\ 10 & t-9 \end{pmatrix}(3) = \begin{pmatrix} 5 & -3 \\ 10 & -6 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -\tfrac{3}{5} \\ 0 & 0 \end{pmatrix} \]

and

\[ \begin{pmatrix} t+2 & -3 \\ 10 & t-9 \end{pmatrix}(4) = \begin{pmatrix} 6 & -3 \\ 10 & -5 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -\tfrac{1}{2} \\ 0 & 0 \end{pmatrix} \]

then E3 = Span{(3,5)} and E4 = Span{(1,2)}. So

β = {(3,5), (1,2)}

— (f) —

Taking E to be

{1,x,x2,x3}

the representation of T in the standard basis is

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} [x+1]_E & [3x]_E & [x^2+4x]_E & [x^3+8x]_E \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 3 & 4 & 8 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\end{align*}
then the characteristic polynomial is
\[ \det(tI - [T]_E) = \det\begin{pmatrix} t-1 & 0 & 0 & 0 \\ -1 & t-3 & -4 & -8 \\ 0 & 0 & t-1 & 0 \\ 0 & 0 & 0 & t-1 \end{pmatrix} = t^4 - 6t^3 + 12t^2 - 10t + 3 = (t-3)(t-1)^3 \]

which has roots 3 and 1, and thus these are the eigenvalues. Since

\[ \begin{pmatrix} t-1 & 0 & 0 & 0 \\ -1 & t-3 & -4 & -8 \\ 0 & 0 & t-1 & 0 \\ 0 & 0 & 0 & t-1 \end{pmatrix}(3) = \begin{pmatrix} 2 & 0 & 0 & 0 \\ -1 & 0 & -4 & -8 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

and

\[ \begin{pmatrix} t-1 & 0 & 0 & 0 \\ -1 & t-3 & -4 & -8 \\ 0 & 0 & t-1 & 0 \\ 0 & 0 & 0 & t-1 \end{pmatrix}(1) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ -1 & -2 & -4 & -8 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 2 & 4 & 8 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

then E_3 = Span{x} and E_1 = Span{x − 2, x² − 4, x³ − 8}. So

\[ \beta = \{x,\ x-2,\ x^2-4,\ x^3-8\} \]
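A check of this eigenbasis on the matrix of T above (assuming sympy is available), using the coordinate vectors of x, x − 2, x² − 4, x³ − 8 in E:

    # Verify the claimed eigenvectors of [T]_E for eigenvalues 3 and 1 (sympy assumed).
    from sympy import Matrix

    TE = Matrix([[1, 0, 0, 0],
                 [1, 3, 4, 8],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]])
    print(TE.eigenvals())                        # {3: 1, 1: 3}
    eigenpairs = [(3, Matrix([0, 1, 0, 0])),     # x
                  (1, Matrix([-2, 1, 0, 0])),    # x - 2
                  (1, Matrix([-4, 0, 1, 0])),    # x^2 - 4
                  (1, Matrix([-8, 0, 0, 1]))]    # x^3 - 8
    for lam, v in eigenpairs:
        assert TE * v == lam * v
    print("all four coordinate vectors are eigenvectors")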

— (i) —

Taking E to be

\[ \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} \]

the representation of T in the standard basis is

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\right]_E \end{pmatrix} \\
&= \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}
\end{align*}
then the characteristic polynomial is
\[ \det(tI - [T]_E) = \det\begin{pmatrix} t & 0 & -1 & 0 \\ 0 & t & 0 & -1 \\ -1 & 0 & t & 0 \\ 0 & -1 & 0 & t \end{pmatrix} = t^4 - 2t^2 + 1 = (t-1)^2(t+1)^2 \]

which has roots 1 and −1, and thus these are the eigenvalues. Since

\[ \begin{pmatrix} t & 0 & -1 & 0 \\ 0 & t & 0 & -1 \\ -1 & 0 & t & 0 \\ 0 & -1 & 0 & t \end{pmatrix}(-1) = \begin{pmatrix} -1 & 0 & -1 & 0 \\ 0 & -1 & 0 & -1 \\ -1 & 0 & -1 & 0 \\ 0 & -1 & 0 & -1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

and

\[ \begin{pmatrix} t & 0 & -1 & 0 \\ 0 & t & 0 & -1 \\ -1 & 0 & t & 0 \\ 0 & -1 & 0 & t \end{pmatrix}(1) = \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

then
\[ E_{-1} = \operatorname{Span}\left\{ \begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix} \right\} \quad\text{and}\quad E_1 = \operatorname{Span}\left\{ \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} \right\}. \]
So
\[ \beta = \left\{ \begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} \right\} \]

— (j) —

Taking E to be

\[ \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} \]

the representation of T in the standard basis is

\begin{align*}
[T]_E &= \begin{pmatrix} [T(e_1)]_E & [T(e_2)]_E & [T(e_3)]_E & [T(e_4)]_E \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\right]_E & \left[\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}\right]_E \end{pmatrix} \\
&= \begin{pmatrix} 3 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & 0 & 0 & 3 \end{pmatrix}
\end{align*}
then the characteristic polynomial is
\[ \det(tI - [T]_E) = \det\begin{pmatrix} t-3 & 0 & 0 & -2 \\ 0 & t & -1 & 0 \\ 0 & -1 & t & 0 \\ -2 & 0 & 0 & t-3 \end{pmatrix} = t^4 - 6t^3 + 4t^2 + 6t - 5 = (t-5)(t-1)^2(t+1) \]

which has roots 5, 1, and −1, and thus these are the eigenvalues. Since

\[ \begin{pmatrix} t-3 & 0 & 0 & -2 \\ 0 & t & -1 & 0 \\ 0 & -1 & t & 0 \\ -2 & 0 & 0 & t-3 \end{pmatrix}(5) = \begin{pmatrix} 2 & 0 & 0 & -2 \\ 0 & 5 & -1 & 0 \\ 0 & -1 & 5 & 0 \\ -2 & 0 & 0 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \]

\[ \begin{pmatrix} t-3 & 0 & 0 & -2 \\ 0 & t & -1 & 0 \\ 0 & -1 & t & 0 \\ -2 & 0 & 0 & t-3 \end{pmatrix}(-1) = \begin{pmatrix} -4 & 0 & 0 & -2 \\ 0 & -1 & -1 & 0 \\ 0 & -1 & -1 & 0 \\ -2 & 0 & 0 & -4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

and

\[ \begin{pmatrix} t-3 & 0 & 0 & -2 \\ 0 & t & -1 & 0 \\ 0 & -1 & t & 0 \\ -2 & 0 & 0 & t-3 \end{pmatrix}(1) = \begin{pmatrix} -2 & 0 & 0 & -2 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ -2 & 0 & 0 & -2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \]

then
\[ E_5 = \operatorname{Span}\left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\}, \quad E_{-1} = \operatorname{Span}\left\{ \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right\}, \quad\text{and}\quad E_1 = \operatorname{Span}\left\{ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \right\}. \]
So
\[ \beta = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \right\} \]

(e) Problem 11


— (a) —

Let A be similar to a scalar matrix λI. Then there exists a Q such that A = Q^{-1}(λI)Q, which implies that A = Q^{-1}(λI)Q = λQ^{-1}IQ = λQ^{-1}Q = λI.

— (b) —

Let A be a diagonalizable matrix with only one eigenvalue. Then A can be diagonalized in a basis of its eigenvectors. The diagonal matrix D to which it is similar will have the eigenvalues of A on its diagonal, but there is only one eigenvalue, so D is a scalar matrix. Thus, since A is similar to it, then by the previous proof A is a scalar matrix also.

— (c) —

The matrix

\[ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \]

is not diagonalizable by the previous subproblem, since it has only one eigenvalue, 1 (it is upper triangular with equal diagonal entries), but is not a scalar matrix.

(f) Problem 12


— (a) —

Turns out that I proved this in problem 1(i) of this section... nice. Anyways, here it is again. Let A and B be similar matrices. Then there is a Q such that A = Q^{-1}BQ. From this we get

\[ tI - A = tI - Q^{-1}BQ = tQ^{-1}Q - Q^{-1}BQ = Q^{-1}(tI - B)Q \]

which informs us that the matrices (tI − A) and (tI − B) are similar, so their determinants are the same. Since the determinants of these two similar matrices define the characteristic polynomials of A and B respectively, their characteristic polynomials are also the same.

— (b) —

For any two bases γ, β of a vector space V, a linear operator T will have [T]_β similar to [T]_γ by way of a simple change of coordinates. Given this and the previous proof, the characteristic polynomial of T is independent of the choice of basis.

(g) Problem 14


For a square matrix A we have the following.
\begin{align*}
\det(\lambda I - A) &= \det((\lambda I - A)^t) \\
&= \det((\lambda I)^t - A^t) \\
&= \det(\lambda I^t - A^t) \\
&= \det(\lambda I - A^t)
\end{align*}
Thus A and its transpose have the same characteristic polynomial, and therefore their eigenvalues are the same.

(h) Problem 17


— (a) —

Because T is the transformation on M_{n×n}(F) which transposes matrices, we have T² equal to the identity, so any eigenvalue λ must satisfy λ² = 1; hence the only viable candidates for eigenvalues λ in A^t = λA are 1 and −1. Since we know that nonzero symmetric and anti-symmetric matrices exist in M_{n×n}(F), we know we can find an A such that A^t = A or A^t = −A. Thus ±1 are the only eigenvalues for T.

— (b) —

The eigenvectors corresponding to 1 would be the nonzero symmetric matrices and the eigenvectors corresponding to −1 would be the nonzero anti-symmetric matrices.

— (c) —

Since all symmetric and anti-symmetric matrices in M2×2(F) have the form

\[ \begin{pmatrix} a & b \\ b & c \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & a' \\ -a' & 0 \end{pmatrix} \]

respectively, then a basis for E1 will be

\[ \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \right\} \]

and for E_{−1} will be

\[ \left\{ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right\} \]

Thus we can use these four vectors as a basis to diagonalize T. We’ll call this basis β and let their ordering be the order in which they appear here. Confirming that this basis will diagonalize T we have the following.

\begin{align*}
[T]_\beta &= \begin{pmatrix} \left[T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right]_\beta & \left[T\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} \left[\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right]_\beta & \left[\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\right]_\beta \end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\end{align*}

— (d) —

Generalizing what we did in the previous problem, a basis for E1 would be

\[ \{A_{ij}\}_{1\le i\le j\le n} \]

where A_{ij} is a matrix of all zeros except for ones at position i,j and position j,i. And a basis for E_{−1} would be

\[ \{B_{ij}\}_{1\le i<j\le n} \]

where B_{ij} is a matrix with 1 at position i,j and −1 at position j,i. Then β would simply be the union of these two sets.

References

[1] Friedberg, S. H., Insel, A. J., and Spence, L. E. Linear Algebra, 4th ed. Upper Saddle River: Pearson Education, 2003.