Math 312: Linear Algebra

Homework 2
Lawrence Tyler Rush
<me@tylerlogic.com>

July 26, 2012
http://coursework.tylerlogic.com/math312/homework02

1


(a)


Let $a,b \in F$ and $A,B \in M_{n\times n}(F)$. Then we have
\[
\operatorname{tr}(aA + bB) = \sum_{i=1}^n (aA + bB)_{ii} = \sum_{i=1}^n (aA_{ii} + bB_{ii}) = a\sum_{i=1}^n A_{ii} + b\sum_{i=1}^n B_{ii} = a\operatorname{tr}(A) + b\operatorname{tr}(B)
\]
which gives us the linearity of trace.
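As a quick numeric sanity check of the identity above (the matrices and scalars below are arbitrary illustrative choices, using NumPy):

```python
import numpy as np

# Spot-check: tr(aA + bB) = a*tr(A) + b*tr(B) for arbitrary A, B, a, b.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3)).astype(float)
B = rng.integers(-5, 5, size=(3, 3)).astype(float)
a, b = 2.0, -3.0

lhs = np.trace(a * A + b * B)
rhs = a * np.trace(A) + b * np.trace(B)
assert np.isclose(lhs, rhs)
```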

(b)


Let $w,w' \in W$. Then the traces of $w$ and $w'$ are each zero. So for $a,b \in F$, the linearity of trace yields
\[
\operatorname{tr}(aw + bw') = a\operatorname{tr}(w) + b\operatorname{tr}(w') = a(0) + b(0) = 0.
\]
So $aw + bw' \in W$, and thus $W$ is a subspace.

(c)


Matrices in $M_{2\times 2}(F)$ have the form
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
for $a,b,c,d \in F$. Matrices in $W$ have the above form with the restriction that $a + d = 0$, i.e. $d = -a$, so all elements of $W$ have the form
\[
\begin{pmatrix} a & b \\ c & -a \end{pmatrix}.
\]
With this form, one basis (maybe call it the standard basis for $W$) would be
\[
\left\{ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\; \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\; \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \right\}
\]
since these matrices are linearly independent and
\[
a\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} a & b \\ c & -a \end{pmatrix}.
\]
With this, $W$ has dimension three.

2 Problems from §2.4


(a) Problem 1


For this problem, we’ll use the more intuitive subscript notation of the matrix representation of a linear operator from class and of Treil [2], rather than Friedberg, Insel, and Spence’s notation.

— (a) —

Generally false, since inverses are unique and the inverse of $[T]_{\beta\alpha}$ is $[T^{-1}]_{\alpha\beta}$ by Theorem 2.8 [1, p. 101]. This could potentially be true if $V$, $W$, $\alpha$, $\beta$, and $T$ were chosen appropriately, say $V = W$ and $\alpha = \beta$.

— (b) —

True.

— (c) —

True since $[T]_{\beta\alpha}[v]_\alpha = [T(v)]_\beta$ for $v \in V$.

— (d) —

False. The space $M_{2\times 3}(F)$ has dimension 6 and $F^5$ has dimension 5.

— (e) —

True. Both $P_n(F)$ and $P_m(F)$ are over the same field and have the same dimension if and only if $n = m$, so by Theorem 2.19 [1, p. 103], they are isomorphic.

— (f) —

False. The product of
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}
\]
is the two-by-two identity matrix, but neither is invertible, as they are not square matrices.
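A quick NumPy check of the two matrices above confirms the product is $I_2$:

```python
import numpy as np

# The 2x3 and 3x2 matrices from part (f): their product is I_2,
# even though neither factor is square, hence neither is invertible.
P = np.array([[1, 0, 0],
              [0, 1, 0]])
Q = np.array([[1, 0],
              [0, 1],
              [0, 0]])
assert (P @ Q == np.eye(2)).all()
```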

— (g) —

True, since $A$ satisfies the requirements of being an inverse of $A^{-1}$,
\[
A^{-1}A = I \quad\text{and}\quad AA^{-1} = I,
\]
and inverses are unique [1, p. 100], so $(A^{-1})^{-1} = A$.

— (h) —

True by corollary two of theorem 2.18 [1, p. 102].

— (i) —

True. If $A$ were an $m \times n$ matrix with $m \ne n$, then there would not exist any matrix $B$ such that $AB = BA$, since $AB$ and $BA$ would have different dimensions.

(b) Problem 2


— (a) —

The linear transformation $T$ is not invertible. The vector spaces $\mathbb{R}^2$ and $\mathbb{R}^3$ have dimension 2 and 3, respectively, so they are not isomorphic to each other. Because they are not isomorphic, no linear transformation between them can be invertible; in particular, $T$ cannot.

— (b) —

The reasoning of the previous problem stands here as well.

— (c) —

This linear transformation is invertible.

Let $(a,b,c),(a',b',c') \in \mathbb{R}^3$ be such that $T(a,b,c) = T(a',b',c')$. Then $(3a - 2c,\, b,\, 3a + 4b) = (3a' - 2c',\, b',\, 3a' + 4b')$, which requires that $b = b'$. Due to this, $3a + 4b = 3a' + 4b'$ implies that $a = a'$, which in turn tells us that $3a - 2c = 3a' - 2c'$, and thus $c = c'$. So $T$ is one-to-one.

Let $(x,y,z) \in \mathbb{R}^3$. Seeing that for $v = \left(\frac{z-4y}{3},\, y,\, \frac{1}{2}(-x + z - 4y)\right)$
\[
T(v) = \left(3\cdot\tfrac{z-4y}{3} - 2\cdot\tfrac{1}{2}(-x+z-4y),\; y,\; 3\cdot\tfrac{z-4y}{3} + 4y\right) = (z - 4y + x - z + 4y,\; y,\; z - 4y + 4y) = (x,y,z)
\]
we have that $T$ is onto. Because $T$ is one-to-one and onto, it is invertible.
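A numeric round-trip check of $T(a,b,c) = (3a-2c,\, b,\, 3a+4b)$ against the candidate inverse derived above (test values are arbitrary):

```python
import numpy as np

# T from part (c) and the inverse formula v = ((z-4y)/3, y, (-x+z-4y)/2).
def T(a, b, c):
    return (3*a - 2*c, b, 3*a + 4*b)

def T_inv(x, y, z):
    return ((z - 4*y) / 3, y, (-x + z - 4*y) / 2)

x, y, z = 1.0, -2.0, 5.0            # arbitrary test point
assert np.allclose(T(*T_inv(x, y, z)), (x, y, z))
a, b, c = 0.5, 2.0, -1.0
assert np.allclose(T_inv(*T(a, b, c)), (a, b, c))
```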

— (d) —

$P_3(\mathbb{R})$ and $P_2(\mathbb{R})$ differ in dimension, so as before, there is no invertible linear transformation between them.

— (e) —

$M_{2\times 2}(\mathbb{R})$ and $P_2(\mathbb{R})$ differ in dimension, so as before, there is no invertible linear transformation between them.

— (f) —

This is an invertible linear transformation. Define $U : M_{2\times 2}(\mathbb{R}) \to M_{2\times 2}(\mathbb{R})$ by
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} b & a-b \\ c & d-c \end{pmatrix}.
\]
With this definition we see
\[
U\!\left(T\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right) = U\begin{pmatrix} a+b & a \\ c & c+d \end{pmatrix} = \begin{pmatrix} a & (a+b)-a \\ c & (c+d)-c \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
and
\[
T\!\left(U\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right) = T\begin{pmatrix} b & a-b \\ c & d-c \end{pmatrix} = \begin{pmatrix} b+(a-b) & b \\ c & c+(d-c) \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
so $U \circ T = I_V$ and $T \circ U = I_W$. Thus $T$ is invertible.
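The round trip can also be confirmed numerically, with $T$ and the proposed $U$ written out entrywise (the test matrix is arbitrary):

```python
import numpy as np

# T(a,b;c,d) = (a+b, a; c, c+d) from part (f) and its proposed inverse
# U(a,b;c,d) = (b, a-b; c, d-c).
def T(M):
    a, b, c, d = M.ravel()
    return np.array([[a + b, a], [c, c + d]])

def U(M):
    a, b, c, d = M.ravel()
    return np.array([[b, a - b], [c, d - c]])

M = np.array([[1.0, 2.0], [3.0, 4.0]])  # arbitrary test matrix
assert np.allclose(U(T(M)), M)
assert np.allclose(T(U(M)), M)
```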

(c) Problem 3


— (a) —

The vector spaces $F^3$ and $P_3(F)$ are not isomorphic, as they do not have the same dimension: the former has dimension 3 and the latter, 4.

— (b) —

The vector spaces $F^4$ and $P_3(F)$ are isomorphic since they have the same dimension and are both over $F$.

— (c) —

The vector spaces $M_{2\times 2}(\mathbb{R})$ and $P_3(\mathbb{R})$ are isomorphic since they have the same dimension and are both vector spaces over $\mathbb{R}$.

— (d) —

The vector space $V$ has dimension three, as seen in problem 1, but $F^4$ has dimension four, so these two vector spaces are not isomorphic.

(d) Problem 4


Let $A$ and $B$ be $n \times n$ invertible matrices. Then $B^{-1}A^{-1}$ is square, and we see both
\[
(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I
\]
and
\[
(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I.
\]
This implies that $B^{-1}A^{-1}$ is the inverse of $AB$; that is, $(AB)^{-1} = B^{-1}A^{-1}$.
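A quick numeric confirmation of $(AB)^{-1} = B^{-1}A^{-1}$ (the invertible matrices below are arbitrary choices):

```python
import numpy as np

# Check: inv(AB) equals inv(B) @ inv(A) for invertible A, B.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 3.0], [0.0, 1.0]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)
```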

(e) Problem 5


Let $A$ be an invertible matrix. Then $A$ is square and thus so is its transpose. Seeing that
\[
(A^{-1})^t A^t = (AA^{-1})^t = I^t = I
\]
and
\[
A^t (A^{-1})^t = (A^{-1}A)^t = I^t = I
\]
we know that $A^t$ is invertible and furthermore that its inverse is $(A^{-1})^t$.
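The identity $(A^t)^{-1} = (A^{-1})^t$ checks out numerically as well (arbitrary invertible test matrix):

```python
import numpy as np

# Check: the inverse of the transpose is the transpose of the inverse.
A = np.array([[2.0, 1.0], [5.0, 3.0]])  # det = 1, so invertible
lhs = np.linalg.inv(A.T)
rhs = np.linalg.inv(A).T
assert np.allclose(lhs, rhs)
```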

(f) Problem 6


Let $A$ be an invertible matrix and $B$ some matrix such that $AB = 0$. Then we have the following sequence of equations:
\[
AB = 0 \;\Rightarrow\; A^{-1}(AB) = A^{-1}0 \;\Rightarrow\; (A^{-1}A)B = 0 \;\Rightarrow\; IB = 0 \;\Rightarrow\; B = 0.
\]

(g) Problem 7


Let A be an n × n matrix.

— (a) —

Let $A^2 = 0$. Assume for later contradiction that $A$ is invertible. Then we have $AA^{-1} = I$, which implies that
\[
A(AA^{-1})A^{-1} = AIA^{-1}
\;\Rightarrow\; (AA)A^{-1}A^{-1} = AA^{-1}
\;\Rightarrow\; 0\cdot A^{-1}A^{-1} = I
\;\Rightarrow\; 0 = I;
\]
however, the identity and zero matrices are not equal, so we have a contradiction. Thus $A$ is not invertible.

— (b) —

If $AB = 0$ for some nonzero $n\times n$ matrix $B$, then $A$ cannot be invertible: if it were, the result of Problem 6 would force $B = 0$, a contradiction.

(h) Problem 9


Let $A$ and $B$ be $n \times n$ matrices such that their product $AB$ is invertible. Then $L_{AB} = L_A L_B$ must be invertible as well. From this results
\[
L_A L_B L_{(AB)^{-1}} = L_{AB} L_{(AB)^{-1}} = I
\]
and
\[
L_{(AB)^{-1}} L_A L_B = L_{(AB)^{-1}} L_{AB} = I
\]
which informs us, respectively, that $L_A$ is onto and $L_B$ is one-to-one. But since these two maps each have domain and codomain of $M_{n\times 1}(F)$, namely a domain and codomain of equal dimension, Theorem 2.5 [1, p. 71] gives us that $L_A$ and $L_B$ are bijections, that is, they are invertible. Hence $A$ and $B$ are invertible too.

(i) Problem 14


Let's look at the transformation $T : V \to F^3$ defined by
\[
\begin{pmatrix} x & y \\ 0 & z \end{pmatrix} \mapsto (x,\; y-x,\; z)
\]
which was derived in an attempt to "undo" the definition of $V$ to get back into the space $F^3$. Let $a,b \in F$ and
\[
\begin{pmatrix} x & y \\ 0 & z \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} x' & y' \\ 0 & z' \end{pmatrix}
\]
be elements of $V$. By the following, we can see that $T$ indeed belongs to $\mathcal{L}(V, F^3)$.
\[
\begin{aligned}
T\!\left(a\begin{pmatrix} x & y \\ 0 & z \end{pmatrix} + b\begin{pmatrix} x' & y' \\ 0 & z' \end{pmatrix}\right)
&= T\begin{pmatrix} ax+bx' & ay+by' \\ 0 & az+bz' \end{pmatrix} \\
&= (ax+bx',\; ay+by'-(ax+bx'),\; az+bz') \\
&= (ax+bx',\; a(y-x)+b(y'-x'),\; az+bz') \\
&= (ax,\; a(y-x),\; az) + (bx',\; b(y'-x'),\; bz') \\
&= aT\begin{pmatrix} x & y \\ 0 & z \end{pmatrix} + bT\begin{pmatrix} x' & y' \\ 0 & z' \end{pmatrix}
\end{aligned}
\]
Now let's define the map $U : F^3 \to V$ by
\[
(a,b,c) \mapsto \begin{pmatrix} a & a+b \\ 0 & c \end{pmatrix}.
\]
With this definition, we have that for $(a,b,c) \in F^3$
\[
T(U(a,b,c)) = T\begin{pmatrix} a & a+b \\ 0 & c \end{pmatrix} = (a,\; (a+b)-a,\; c) = (a,b,c)
\]
and for $\begin{pmatrix} x & y \\ 0 & z \end{pmatrix} \in V$
\[
U\!\left(T\begin{pmatrix} x & y \\ 0 & z \end{pmatrix}\right) = U(x,\; y-x,\; z) = \begin{pmatrix} x & x+(y-x) \\ 0 & z \end{pmatrix} = \begin{pmatrix} x & y \\ 0 & z \end{pmatrix}.
\]
Hence $T \circ U = I_{F^3}$ and $U \circ T = I_V$, so $T$ is an isomorphism from $V$ to $F^3$.

(j) Problem 16


Let $B$ be an $n \times n$ invertible matrix and define $\Phi : M_{n\times n}(F) \to M_{n\times n}(F)$ by $\Phi(A) = B^{-1}AB$. Letting $a,a' \in F$, we see the following for $A,A' \in M_{n\times n}(F)$:
\[
\Phi(aA + a'A') = B^{-1}(aA + a'A')B = (aB^{-1}A + a'B^{-1}A')B = aB^{-1}AB + a'B^{-1}A'B = a\Phi(A) + a'\Phi(A').
\]
So $\Phi$ is linear. Define $\Phi' : M_{n\times n}(F) \to M_{n\times n}(F)$ by $\Phi'(A) = BAB^{-1}$. Given this definition, we see
\[
\Phi(\Phi'(A)) = \Phi(BAB^{-1}) = B^{-1}(BAB^{-1})B = (B^{-1}B)A(B^{-1}B) = IAI = A
\]
and
\[
\Phi'(\Phi(A)) = \Phi'(B^{-1}AB) = B(B^{-1}AB)B^{-1} = (BB^{-1})A(BB^{-1}) = IAI = A
\]
which together with the linearity of $\Phi$ yields that $\Phi$ is invertible.
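A small NumPy check that $\Phi$ and $\Phi'$ undo each other (the invertible $B$ and test matrix $A$ are arbitrary choices):

```python
import numpy as np

# Phi(A) = B^{-1} A B and Phi'(A) = B A B^{-1} compose to the identity.
B = np.array([[1.0, 2.0], [1.0, 3.0]])   # det = 1, so invertible
Binv = np.linalg.inv(B)
phi = lambda A: Binv @ A @ B
phi_prime = lambda A: B @ A @ Binv

A = np.array([[0.0, 1.0], [4.0, -2.0]])  # arbitrary test matrix
assert np.allclose(phi(phi_prime(A)), A)
assert np.allclose(phi_prime(phi(A)), A)
```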

(k) Problem 22


Let $c_0,\dots,c_n$ be distinct scalars from an infinite field $F$. Define the map $T : P_n(F) \to F^{n+1}$ by $T(f) = (f(c_0),\dots,f(c_n))$. With this definition, we see that $T$ is linear since for $a,b \in F$ and $f,g \in P_n(F)$
\[
T(af + bg) = ((af+bg)(c_0),\dots,(af+bg)(c_n)) = a(f(c_0),\dots,f(c_n)) + b(g(c_0),\dots,g(c_n)) = aT(f) + bT(g).
\]
Remembering from the first chapter of our book [1] that a set of Lagrange polynomials in $P_n(F)$ is a basis, we, in an attempt to find an inverse for $T$, define $U : F^{n+1} \to P_n(F)$ using the Lagrange polynomials $g_0,\dots,g_n$ corresponding to $c_0,\dots,c_n$ by $U(a_0,\dots,a_n) = a_0 g_0 + \dots + a_n g_n$. Writing $f = a_0 g_0 + \dots + a_n g_n$, where $a_0,\dots,a_n$ are the coordinates of $f$ in the basis $g_0,\dots,g_n$, and recalling that $g_i(c_j) = \delta_{ij}$, we have that
\[
U(T(f)) = U((a_0 g_0 + \dots + a_n g_n)(c_0),\dots,(a_0 g_0 + \dots + a_n g_n)(c_n)) = U(a_0,\dots,a_n) = a_0 g_0 + \dots + a_n g_n = f
\]
and
\[
T(U(a_0,\dots,a_n)) = T(a_0 g_0 + \dots + a_n g_n) = ((a_0 g_0 + \dots + a_n g_n)(c_0),\dots,(a_0 g_0 + \dots + a_n g_n)(c_n)) = (a_0,\dots,a_n).
\]
From this we can see that $T$ is invertible with inverse $U$. Thus, since $T$ is linear, it's an isomorphism.
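In coordinates, the evaluation map $T$ is the Vandermonde matrix at the $c_i$, and interpolation recovers $f$, playing the role of $U$. A numeric sketch over $\mathbb{R}$ (the points and coefficients are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Evaluation at distinct points is invertible: solving the Vandermonde
# system recovers the polynomial's coefficients, i.e. U(T(f)) = f.
c = np.array([0.0, 1.0, 2.0, 3.0])    # distinct scalars, n = 3
f = np.array([1.0, -2.0, 0.0, 4.0])   # coefficients of f in P_3 (low degree first)
values = P.polyval(c, f)              # T(f) = (f(c_0), ..., f(c_3))
f_back = np.linalg.solve(np.vander(c, increasing=True), values)
assert np.allclose(f_back, f)
```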

3 Problems from §2.5


(a) Problem 1


— (a) —

False. The $j$th column of $Q$ is $[x_j']_\beta$.

— (b) —

True. It is the matrix representation of the identity transformation, which is invertible.

— (c) —

False. If $Q$ changes $\beta'$-coordinates into $\beta$-coordinates, then $Q = [I]_{\beta\beta'}$ and
\[
[T]_{\beta'} = [I]_{\beta'\beta}[T]_{\beta}[I]_{\beta\beta'} = Q^{-1}[T]_{\beta}Q.
\]

— (d) —

False. The matrices $A,B \in M_{n\times n}(F)$ are called similar if $B = Q^{-1}AQ$ for some invertible $Q \in M_{n\times n}(F)$.

— (e) —

True by Theorem 2.23 [1, p. 112]. In this case one can be retrieved from the other via the change of coordinate matrix.

(b) Problem 2


— (a) —

$\beta = \{e_1, e_2\}$ and $\beta' = \{(a_1,a_2),(b_1,b_2)\}$:
\[
[I]_{\beta\beta'} = \big( [(a_1,a_2)]_\beta \;\; [(b_1,b_2)]_\beta \big) = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}
\]

— (b) —

$\beta = \{(-1,3),(2,-1)\}$ and $\beta' = \{(0,10),(5,0)\}$:
\[
[I]_{\beta\beta'} = \big( [(0,10)]_\beta \;\; [(5,0)]_\beta \big) = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}
\]

— (c) —

$\beta = \{(2,5),(1,3)\}$ and $\beta' = \{e_1, e_2\}$:
\[
[I]_{\beta\beta'} = \big( [(1,0)]_\beta \;\; [(0,1)]_\beta \big) = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}
\]

— (d) —

$\beta = \{(-4,3),(2,-1)\}$ and $\beta' = \{(2,1),(-4,1)\}$:
\[
[I]_{\beta\beta'} = \big( [(2,1)]_\beta \;\; [(-4,1)]_\beta \big) = \begin{pmatrix} 2 & -1 \\ 5 & -4 \end{pmatrix}
\]

(c) Problem 6 (a and b only)


For each of these, the matrix $A$ is $[L_A]_\epsilon$, where $\epsilon$ is the standard basis.

— (a) —

$A = \begin{pmatrix} 1 & 3 \\ 1 & 1 \end{pmatrix}$ and $\beta = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \end{pmatrix} \right\}$
\[
Q = [I]_{\epsilon\beta} = \left( \left[\begin{pmatrix}1\\1\end{pmatrix}\right]_\epsilon \;\; \left[\begin{pmatrix}1\\2\end{pmatrix}\right]_\epsilon \right) = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix},
\qquad
Q^{-1} = [I]_{\beta\epsilon} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}
\]
\[
[L_A]_\beta = [I]_{\beta\epsilon}[L_A]_\epsilon[I]_{\epsilon\beta} = Q^{-1}AQ = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 3 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 6 & 11 \\ -2 & -4 \end{pmatrix}
\]

— (b) —

$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ and $\beta = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}$
\[
Q = [I]_{\epsilon\beta} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
Q^{-1} = [I]_{\beta\epsilon} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\]
\[
[L_A]_\beta = [I]_{\beta\epsilon}[L_A]_\epsilon[I]_{\epsilon\beta} = Q^{-1}AQ = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}
\]
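Both similarity computations can be double-checked numerically:

```python
import numpy as np

# Problem 6(a): Q^{-1} A Q should equal [[6, 11], [-2, -4]].
A1 = np.array([[1.0, 3.0], [1.0, 1.0]])
Q1 = np.array([[1.0, 1.0], [1.0, 2.0]])
B1 = np.linalg.inv(Q1) @ A1 @ Q1
assert np.allclose(B1, [[6, 11], [-2, -4]])

# Problem 6(b): Q^{-1} A Q should equal [[3, 0], [0, -1]].
A2 = np.array([[1.0, 2.0], [2.0, 1.0]])
Q2 = np.array([[1.0, 1.0], [1.0, -1.0]])
B2 = np.linalg.inv(Q2) @ A2 @ Q2
assert np.allclose(B2, [[3, 0], [0, -1]])
```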

4 §3.1 Problem 1


(a)


True. Performing an elementary row operation does not change the dimensions of a matrix, and in this case we perform the operation on the $n \times n$ identity matrix.

(b)


This is generally false, since multiplying a row by any nonzero scalar of our choosing in the field over which the vector space lies is an elementary row operation; but if the field of scalars were $\mathbb{Z}_2$, consisting of only zero and one, this property would hold.

(c)


True. Perform the "multiply any row by a nonzero scalar" row operation on the identity matrix with a scalar of one.

(d)


False. The resulting matrix here is not elementary:
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
=
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}
\]

(e)


True. This is Theorem 3.2 [1, p. 150].

(f)


False. We can again use the two elementary matrices from part (d). Their sum,
\[
\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix},
\]
is not elementary.

(g)


True.
  1. Exchanging row or column $i$ with $j$ of $I_n$ puts ones at the $ij$ and $ji$ positions, and zeros at the $ii$ and $jj$ positions of the resulting matrix.
  2. Multiplying a row or column of the identity matrix by a nonzero scalar results in a symmetric matrix.
  3. Multiplying row or column $i$ by a scalar $a$ and adding it to $j$ is the "transpose operation" of multiplying row or column $j$ by $a$ and adding it to $i$.

(h)


False. Let $A$ be
\[
\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}
\]
and $B$ be the matrix resulting from swapping the two rows of $A$. Since
\[
\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a+c & b+d \\ 0 & 0 \end{pmatrix}
\]
always has a zero second row, we cannot find any matrix that yields $B$ via left multiplication by $A$, let alone an elementary one.

(i)


True. Since elementary row operations are invertible and their inverses are elementary row operations, if B = EA then we have E1B = A and E1 is elementary.

5 Problems from §3.2


(a) Problem 2


We will use $\rightsquigarrow$ to denote the transformation of a matrix into an echelon form that gives us the necessary information about the rank.

— (a) —

The rank is two:

\[
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}
\rightsquigarrow
\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\]

— (b) —

The rank is three:

\[
\begin{pmatrix} 1 & 1 & 0 \\ 2 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}
\rightsquigarrow
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

— (c) —

The rank is two:

\[
\begin{pmatrix} 1 & 0 & 2 \\ 1 & 1 & 4 \end{pmatrix}
\rightsquigarrow
\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 2 \end{pmatrix}
\]

— (d) —

The rank is one since the second row is just a multiple of the first row.

— (e) —

The rank is three:

\[
\begin{pmatrix} 1 & 2 & 3 & 1 & 1 \\ 1 & 4 & 0 & 1 & 2 \\ 0 & 2 & -3 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}
\rightsquigarrow
\begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 3 & 1 & 1 \\ 0 & 0 & 6 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
\]

— (f) —

The rank is three:

\[
\begin{pmatrix} 1 & 2 & 0 & 1 & 1 \\ 2 & 4 & 1 & 3 & 0 \\ 3 & 6 & 2 & 5 & 1 \\ -4 & -8 & 1 & -3 & 1 \end{pmatrix}
\rightsquigarrow
\begin{pmatrix} 1 & 2 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
\]

— (g) —

The rank is one since all rows are multiples of the first.
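The ranks computed above can be spot-checked with NumPy's rank routine:

```python
import numpy as np

# Spot-check of the ranks from parts (a)-(c), (e), and (f).
assert np.linalg.matrix_rank([[1, 1, 0], [0, 1, 1], [1, 1, 0]]) == 2   # (a)
assert np.linalg.matrix_rank([[1, 1, 0], [2, 1, 1], [1, 1, 1]]) == 3   # (b)
assert np.linalg.matrix_rank([[1, 0, 2], [1, 1, 4]]) == 2              # (c)
assert np.linalg.matrix_rank([[1, 2, 3, 1, 1],
                              [1, 4, 0, 1, 2],
                              [0, 2, -3, 0, 1],
                              [1, 0, 0, 0, 0]]) == 3                   # (e)
assert np.linalg.matrix_rank([[1, 2, 0, 1, 1],
                              [2, 4, 1, 3, 0],
                              [3, 6, 2, 5, 1],
                              [-4, -8, 1, -3, 1]]) == 3                # (f)
```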

(b) Problem 3


Let A be an m × n matrix.

First assume that $A$ has rank zero. This implies that $L_A$ also has rank zero, that is, it has nullity $n$, since its domain is $F^n$. So $L_A(v) = 0$ for all $v \in F^n$; in other words, $Av' = 0$ for all $v' \in M_{n\times 1}(F)$, but only the zero matrix does this. Hence $A$ is the zero matrix.

Conversely, let A be the zero matrix. This has rank zero since LA is the zero map, which has zero rank.

(c) Problem 4


— (a) —

Original:
\[
\begin{pmatrix} 1 & 1 & 1 & 2 \\ 2 & 0 & -1 & 2 \\ 1 & 1 & 1 & 2 \end{pmatrix}
\]
$R_2 - 2R_1 \to R_2$ and $R_3 - R_1 \to R_3$:
\[
\begin{pmatrix} 1 & 1 & 1 & 2 \\ 0 & -2 & -3 & -2 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
$C_2 - C_1 \to C_2$, $C_3 - C_1 \to C_3$, and $C_4 - 2C_1 \to C_4$:
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -2 & -3 & -2 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
$C_3 - \frac{3}{2}C_2 \to C_3$ and $C_4 - C_2 \to C_4$:
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
$-\frac{1}{2}R_2 \to R_2$:
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
So the rank is two.

— (b) —

Original:
\[
\begin{pmatrix} 2 & 1 \\ -1 & 2 \\ 2 & 1 \end{pmatrix}
\]
$R_1 + 2R_2 \to R_1$ and $R_3 + 2R_2 \to R_3$:
\[
\begin{pmatrix} 0 & 5 \\ -1 & 2 \\ 0 & 5 \end{pmatrix}
\]
Swap $R_1$ and $R_2$:
\[
\begin{pmatrix} -1 & 2 \\ 0 & 5 \\ 0 & 5 \end{pmatrix}
\]
$-R_1 \to R_1$, $\frac{1}{5}R_2 \to R_2$, and $\frac{1}{5}R_3 \to R_3$:
\[
\begin{pmatrix} 1 & -2 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}
\]
$R_1 + 2R_2 \to R_1$ and $R_3 - R_2 \to R_3$:
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}
\]
So the rank is two.

(d) Problem 7


We can row reduce $A$ with the following steps:
  1. $R_1 - R_2 \to R_1$
  2. $R_3 - R_2 \to R_3$
  3. Swap $R_1$, $R_2$
  4. $\frac{1}{2}R_2 \to R_2$
  5. $R_3 - R_2 \to R_3$
  6. $R_1 - R_3 \to R_1$

Referring to $E_i$ as the elementary matrix for the $i$th step above, we have $E_6\cdots E_1 A = I$, so the $E_i$ can be inverted to obtain $A$ as their product: $A = E_1^{-1}\cdots E_6^{-1}$. The inverses are (note $E_2 = E_5$):
\[
E_1^{-1} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad
E_2^{-1} = E_5^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \quad
E_3^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad
E_4^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad
E_6^{-1} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
So
\[
A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
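A self-consistency check of the factorization: if we rebuild $A$ as the product of the listed inverses, then applying the six elementary matrices in order must return the identity:

```python
import numpy as np

# A = E1^{-1} ... E6^{-1}, so E6 ... E1 A must equal I.
E1i = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
E2i = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])  # also E5^{-1}
E3i = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
E4i = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 1]])
E6i = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])

A = E1i @ E2i @ E3i @ E4i @ E2i @ E6i
E = [np.linalg.inv(M) for M in (E1i, E2i, E3i, E4i, E2i, E6i)]
check = E[5] @ E[4] @ E[3] @ E[2] @ E[1] @ E[0] @ A
assert np.allclose(check, np.eye(3))
```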

(e) Problem 20


— (a) —

The reduced row echelon form of $A$ is
\[
\begin{pmatrix} 1 & 0 & -1 & 0 & -3 \\ 0 & 1 & 2 & 0 & -1 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
\]
which informs us that the rank of $A$ is three, i.e. the rank of $L_A$ is three as well. This then tells us the nullity is two, since the domain of the linear map is $F^5$. Since $L_A$ maps every element of its null space to zero, a matrix $M$ whose columns (coordinates) are vectors only from the null space of $L_A$ will satisfy $AM = 0$. To boot, we know that we can find such an $M$ with rank two, because the dimension of the null space of $L_A$ is two, i.e. the maximum size of any linearly independent set. So we choose the two vectors corresponding to the free variables $x_3$ and $x_5$ in the reduced row echelon form of $A$ above, namely
\[
\begin{pmatrix} 1 \\ -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 3 \\ 1 \\ 0 \\ -2 \\ 1 \end{pmatrix}.
\]
So making $M$ a matrix with all zero columns except for two columns, which are the two columns immediately above, results in $AM = 0$.

Note that the ordering of the above columns in $M$ is irrelevant, since the $i$th column of $AM$ is $Am_i$, where $m_i$ is the $i$th column of $M$.
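Since $A$ itself is the textbook's matrix, we can check the construction against its reduced row echelon form $R$ shown above, which has the same null space as $A$:

```python
import numpy as np

# R is the RREF of A; the two null-space vectors (one per free
# variable) satisfy R v = 0, so M built from them gives R M = 0,
# and hence A M = 0.
R = np.array([[1, 0, -1, 0, -3],
              [0, 1,  2, 0, -1],
              [0, 0,  0, 1,  2],
              [0, 0,  0, 0,  0]])
v1 = np.array([1, -2, 1, 0, 0])   # x3 = 1, x5 = 0
v2 = np.array([3, 1, 0, -2, 1])   # x3 = 0, x5 = 1
M = np.column_stack([v1, v2, np.zeros(5), np.zeros(5), np.zeros(5)])
assert np.allclose(R @ M, 0)
assert np.linalg.matrix_rank(M) == 2
```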

— (b) —

For any $5 \times 5$ matrix $B$ with $AB = 0$, the columns of $B$ must lie in the null space of $L_A$. Since that space has dimension two, no linearly independent set of its vectors can have size greater than two. This includes the columns of $B$, so $\operatorname{rank}(B)$ is two at most.

(f) Problem 21


Let $A$ be an $m \times n$ matrix with rank $m$; denote by $0_{i\times j}$ the zero matrix in $M_{i\times j}(F)$. So there are $m$ columns of $A$ which are linearly independent. This implies that through elementary column operations, i.e. multiplication on the right by elementary matrices, we can obtain the matrix $(I_m \;\; 0_{m\times(n-m)})$, where $I_m$ is the $m \times m$ identity. Note that because $A$ has $m$ rows and has rank $m$, $n \ge m$, so the subtraction $n - m$ is valid. Letting $E_1,\dots,E_k$ be the matrices pertaining to the column operations on $A$, we have $AE_1\cdots E_k = (I_m \;\; 0_{m\times(n-m)})$, which yields
\[
AE_1\cdots E_k \begin{pmatrix} I_m \\ 0_{(n-m)\times m} \end{pmatrix} = (I_m \;\; 0_{m\times(n-m)}) \begin{pmatrix} I_m \\ 0_{(n-m)\times m} \end{pmatrix} = I_m.
\]
So
\[
B = E_1\cdots E_k \begin{pmatrix} I_m \\ 0_{(n-m)\times m} \end{pmatrix}
\]
is the $n \times m$ matrix we require.

(g) Problem 22


Let $B$ be an $n \times m$ matrix with rank $m$. By the same reasoning as in the previous problem, applied to rows instead of columns, we perform elementary row operations on $B$ to obtain
\[
E_k\cdots E_1 B = \begin{pmatrix} I_m \\ 0_{(n-m)\times m} \end{pmatrix}.
\]
Multiplying both sides on the left by $(I_m \;\; 0_{m\times(n-m)})$ leaves us with
\[
(I_m \;\; 0_{m\times(n-m)}) E_k\cdots E_1 B = (I_m \;\; 0_{m\times(n-m)}) \begin{pmatrix} I_m \\ 0_{(n-m)\times m} \end{pmatrix} = I_m
\]
and thus the $m \times n$ matrix $A$ we are looking for is $(I_m \;\; 0_{m\times(n-m)}) E_k\cdots E_1$.

6


Let $A$ be an $n\times n$ matrix such that $A^2 = 0$. Since the identity matrix $I$ is also $n\times n$, $I - A$ is $n\times n$ as well; similarly $I + A$ is square of dimension $n$. Taking the product of these two matrices, we get
\[
(I - A)(I + A) = (I - A)I + (I - A)A = I - A + A - A^2 = I - A + A - 0 = I
\]
as well as
\[
(I + A)(I - A) = I(I - A) + A(I - A) = I - A + A - A^2 = I - A + A - 0 = I.
\]
So $I + A$ is the inverse of $I - A$.
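A concrete check with a small nilpotent matrix (an arbitrary choice with $A^2 = 0$):

```python
import numpy as np

# A^2 = 0 implies (I - A)(I + A) = (I + A)(I - A) = I.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: A @ A = 0
I = np.eye(2)
assert np.allclose(A @ A, 0)
assert np.allclose((I - A) @ (I + A), I)
assert np.allclose((I + A) @ (I - A), I)
```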

7


(a)


Let $b,b' \in F$ and $v,v' \in V$ with the expressions $v = a_1\beta_1 + \dots + a_n\beta_n$ and $v' = a_1'\beta_1 + \dots + a_n'\beta_n$ in $B$. For these we see that
\[
\beta_i^*(bv + b'v') = \beta_i^*\big((ba_1 + b'a_1')\beta_1 + \dots + (ba_n + b'a_n')\beta_n\big) = ba_i + b'a_i' = b\beta_i^*(v) + b'\beta_i^*(v')
\]
and so $\beta_i^*$ is linear.

(b)


The dual $V^*$ has dimension $n$ since it is the space $\mathcal{L}(V, F)$, so we need only show that either $B^*$ is linearly independent or that it generates $V^*$. We choose generation.

Let $f \in V^*$. Since $f$ is completely determined by its action on a basis of $V$, such as $B$, we need only express $f$ in terms of $B^*$ such that $f(\beta_i) = (a_1\beta_1^* + \dots + a_n\beta_n^*)(\beta_i)$, where $a_1\beta_1^* + \dots + a_n\beta_n^*$ is the yet-to-be-determined representation of $f$ in $B^*$. Since $\beta_i^*(\beta_j) = \delta_{ij}$ (the Kronecker delta), we simply need to set $a_i = f(\beta_i)$ to obtain our desired property.

(c)


Let $T : V \to W$ be a linear map. If we let $a,b \in F$ and $U_1,U_2 \in W^*$ (renamed from $U,V$ to avoid clashing with the space $V$), then we have
\[
T^t(aU_1 + bU_2) = (aU_1 + bU_2)\circ T = a(U_1\circ T) + b(U_2\circ T) = aT^t(U_1) + bT^t(U_2).
\]
So $T^t$ is linear.

(d)


Let $B$ be a basis for $V$, $C$ a basis for $W$, and $B^*$, $C^*$ the dual bases for $V^*$, $W^*$, respectively, and let $T : V \to W$ be linear. The transpose of its coordinate matrix from $B$ to $C$ is $([T]_{CB})^t$. Expanding, its rows are the coordinate vectors
\[
\begin{pmatrix} [T(\beta_1)]_C \\ \vdots \\ [T(\beta_n)]_C \end{pmatrix}.
\]
Given how we generate a linear functional with the dual basis $C^*$, this is
\[
\begin{pmatrix} c_1^*(T(\beta_1)) & \cdots & c_m^*(T(\beta_1)) \\ \vdots & \ddots & \vdots \\ c_1^*(T(\beta_n)) & \cdots & c_m^*(T(\beta_n)) \end{pmatrix}
\]
which can be rewritten making use of $T^t$'s definition:
\[
\begin{pmatrix} T^t(c_1^*)(\beta_1) & \cdots & T^t(c_m^*)(\beta_1) \\ \vdots & \ddots & \vdots \\ T^t(c_1^*)(\beta_n) & \cdots & T^t(c_m^*)(\beta_n) \end{pmatrix};
\]
but by the way we defined the dual basis $B^*$, the $ij$ entry of this matrix is the $i$th coordinate of $T^t(c_j^*)$ in $B^*$. So the matrix is then
\[
\big( [T^t(c_1^*)]_{B^*} \;\; \cdots \;\; [T^t(c_m^*)]_{B^*} \big)
\]
which is just $[T^t]_{B^*C^*}$.

(e)


First assume that $T$ is an isomorphism. Then the matrix $[T]_{CB}$ is invertible, which implies that its transpose is as well. However, its transpose is the matrix $[T^t]_{B^*C^*}$, so this too is invertible. Thus $T^t$, being linear, is an isomorphism.

Conversely, allow $T^t$ to be an isomorphism. Then $[T^t]_{B^*C^*}$ is an invertible matrix. This is equal to the transpose of $[T]_{CB}$, so $[T]_{CB}$ is invertible as well, and therefore $T$ is an isomorphism since it is linear.

(f)


Define the map $M : V \to V^*$ by $M(a_1\beta_1 + \dots + a_n\beta_n) = a_1\beta_1^* + \dots + a_n\beta_n^*$. This definition demonstrates the map's linearity as well as its surjectivity, since $B^*$ is a basis for $V^*$. Now let $M(a_1\beta_1 + \dots + a_n\beta_n) = M(a_1'\beta_1 + \dots + a_n'\beta_n)$. Then $a_1\beta_1^* + \dots + a_n\beta_n^* = a_1'\beta_1^* + \dots + a_n'\beta_n^*$, which can only be true if $a_i = a_i'$ for each $i$, since $B^*$ is a basis for $V^*$. Hence $M$ is injective. Given these three things, $M$ is an isomorphism.

(g)


Let $L : V \to V^{**}$ be defined by $L(a_1\beta_1 + \dots + a_n\beta_n) = a_1\beta_1^{**} + \dots + a_n\beta_n^{**}$. This is simply the composition of two isomorphisms, namely $M_1 : V \to V^*$ and $M_2 : V^* \to V^{**}$, each defined as in the previous problem. Thus $L$ is an isomorphism.

(h)


Let $a,b \in F$ and $u,v$ be vectors in $V$. Letting $G$ be any element of $V^*$, we get the following by its linearity (it's in $\mathcal{L}(V, F)$, remember) and by the definition of $E_V$ itself:
\[
E_V(au + bv)(G) = G(au + bv) = aG(u) + bG(v) = a(E_V(u))(G) + b(E_V(v))(G) = (aE_V(u) + bE_V(v))(G).
\]
So $E_V$ is linear.

(i)


Let $a_1\beta_1 + \dots + a_n\beta_n$ be a vector in $V$. Then for any $G$ in $V^*$ we get the following by the newly found linearity of $E_V$:
\[
E_V(a_1\beta_1 + \dots + a_n\beta_n)(G) = (a_1 E_V(\beta_1) + \dots + a_n E_V(\beta_n))(G) = a_1 E_V(\beta_1)(G) + \dots + a_n E_V(\beta_n)(G) = a_1 G(\beta_1) + \dots + a_n G(\beta_n).
\]
But $G(\beta_i)$ is just the $i$th coordinate of $G$ in its $B^*$ representation, i.e. $G(\beta_i)$ is $\beta_i^{**}(G)$. So the rightmost term from the above equation becomes
\[
a_1\beta_1^{**}(G) + \dots + a_n\beta_n^{**}(G)
\]
and thus $E_V = L$.

(j)


Let $a_1\beta_1 + \dots + a_n\beta_n$ be a vector in $V$. Then for any $G$ in $W^*$ we get the following through the use of the various definitions of the maps involved as well as their linearity:
\[
\begin{aligned}
\big((T^{tt}\circ E_V)(a_1\beta_1 + \dots + a_n\beta_n)\big)(G) &= \big(E_V(a_1\beta_1 + \dots + a_n\beta_n)\circ T^t\big)(G) \\
&= E_V(a_1\beta_1 + \dots + a_n\beta_n)(G\circ T) \\
&= a_1(G\circ T)(\beta_1) + \dots + a_n(G\circ T)(\beta_n) \\
&= E_W\big(a_1 T(\beta_1) + \dots + a_n T(\beta_n)\big)(G) \\
&= \big((E_W\circ T)(a_1\beta_1 + \dots + a_n\beta_n)\big)(G).
\end{aligned}
\]
So $T^{tt}\circ E_V = E_W\circ T$.

References

[1]   Friedberg, S. H., Insel, A. J., and Spence, L. E. Linear Algebra, 4th ed. Upper Saddle River: Pearson Education, 2003.

[2]   Sergei Treil. Linear Algebra Done Wrong. Freely available at
http://www.math.brown.edu/~treil/papers/LADW/LADW.html