Math 55a: Honors Abstract Algebra

Homework 4
Lawrence Tyler Rush
<me@tylerlogic.com>

September 1, 2012
http://coursework.tylerlogic.com/math55a/homework04

1


Allow V to be an F-vector space with projective space PV.

(i) Points and lines in PV
— (a) —

Let U1 and U2 be distinct “points” of the projective space PV. Because they are distinct and each one dimensional, their intersection is simply the zero subspace. This gives us

dim(U1 + U2) = dim U1 + dim U2 - dim(U1 ∩ U2) = 1 + 1 - 0 = 2.

So any subspace W containing both U1 and U2 will have dimension at least 2, since it necessarily contains U1 + U2. In particular, if W has dimension 2, then W is the space U1 + U2, and so the unique line containing both U1 and U2 is U1 + U2.

— (b) —

Let U1 and U2 be distinct lines both contained in some projective plane, say W. Being lines, dim U1 = dim U2 = 2, but since they are distinct, dim(U1 + U2) > 2. Now U1 + U2 is a subspace of W and dim W = 3, so dim(U1 + U2) ≤ 3, and therefore dim(U1 + U2) is squeezed to be 3. Thus we have

dim(U1 ∩ U2) = dim U1 + dim U2 - dim(U1 + U2) = 2 + 2 - 3 = 1

and therefore U1 ∩ U2 is a point.

(ii)


(iii)


2 Axler: Chapter 5, Exercise 4


Problem Statement. Suppose that S, T ∈ L(V) are such that ST = TS. Prove that ker(T - λI) is invariant under S for every λ ∈ F.

Problem Solution. Let v ∈ ker(T - λI), so that

(T - λI)v = 0.

Applying S to both sides of the above equation yields the following sequence.

S(T - λI)v  =  0
(ST - λS)v  =  0
(TS - λS)v  =  0
(T - λI)Sv  =  0

where the third line uses the hypothesis ST = TS. Hence Sv ∈ ker(T - λI), and so ker(T - λI) is invariant under the S operator.
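As a quick numeric sanity check (my own example, not part of the proof), here is the invariance at work in Python with a hand-picked commuting pair, S = T + 3I:

```python
# Sanity check: for commuting S and T, S maps ker(T - lam*I) into itself.
# The matrices below are illustrative choices of mine, not from the text.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[2, 1],
     [0, 2]]
S = [[5, 1],          # S = T + 3I, so ST = TS
     [0, 5]]
assert matmul(S, T) == matmul(T, S)   # the hypothesis ST = TS

lam = 2
v = [1, 0]                             # v is in ker(T - 2I)
TmL = [[T[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]
assert matvec(TmL, v) == [0, 0]

Sv = matvec(S, v)                      # Sv should stay in the kernel
assert matvec(TmL, Sv) == [0, 0]
print("Sv =", Sv, "remains in ker(T - 2I)")
```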

3 Axler: Chapter 5, Exercise 7


Problem Statement. Suppose n is a positive integer and T ∈ L(Fⁿ) is defined by

T(x1,...,xn) = (x1 + ⋅⋅⋅ + xn, ..., x1 + ⋅⋅⋅ + xn);

in other words, T is the operator whose matrix (with respect to the standard basis) consists of all 1’s. Find all eigenvalues and eigenvectors of T.

Problem Solution. If (x1,...,xn) were to be an eigenvector of T with eigenvalue λ, then it must have the property that

λ(x1,...,xn) = (∑i xi, ..., ∑i xi).

If λ ≠ 0, this indicates that each xj is 1∕λ of the sum of all the elements comprising the vector; hence all such eigenvectors are of the form (x,...,x), and applying T to (x,...,x) yields (nx,...,nx), so λ = n. If λ = 0, the eigenvectors are exactly the nonzero vectors whose coordinates sum to zero (such vectors exist whenever n > 1). Thus the eigenvalues of T are n and, for n > 1, also 0.
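A small Python illustration of this (with n = 4, an assumption of mine for concreteness):

```python
# The all-ones operator T sends (x1,...,xn) to the vector whose every
# entry is x1 + ... + xn.  Here n = 4, chosen for illustration.

n = 4

def T(x):
    s = sum(x)
    return [s] * len(x)

# A constant vector (x,...,x) is an eigenvector with eigenvalue n:
v = [3, 3, 3, 3]
assert T(v) == [n * x for x in v]      # T v = n v

# A vector whose entries sum to zero is an eigenvector with eigenvalue 0:
w = [1, -1, 2, -2]
assert T(w) == [0, 0, 0, 0]            # T w = 0 w
print("eigenvalues observed:", n, "and", 0)
```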

4 Axler: Chapter 5, Exercise 8


Problem Statement. Find all eigenvalues and eigenvectors of the backward shift operator T ∈ L(F∞) defined by

T(z1, z2, z3, ...) = (z2, z3, ...)

Problem Solution. The eigenvectors are such that

(λz1,λz2,λz3,...) = (z2,z3,...)

where λ is a possible eigenvalue of T. Thus we have that z2 = z1λ, z3 = z2λ, z4 = z3λ, and so on; meaning that the ith element of every eigenvector corresponding to an eigenvalue λ of T is of the following form.

zi = z1λ^(i-1)

Thus an eigenvector of T is going to be a vector whose terms are the terms of a geometric progression, and the associated eigenvalue is going to be the multiplier (or, as wikipedia tells me, the “common ratio”) of said progression. Every λ ∈ F occurs: (1, λ, λ^2, ...) is an eigenvector with eigenvalue λ.
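A truncated Python check of this (the infinite sequence is cut to 8 terms, which is my own simplification):

```python
# Truncated illustration: the backward shift of a geometric sequence
# (z1, z1*r, z1*r^2, ...) is r times the original sequence.

z1, r = 2.0, 0.5
seq = [z1 * r**i for i in range(8)]        # first 8 terms only

shifted = seq[1:]                           # T(z1, z2, ...) = (z2, z3, ...)
scaled = [r * z for z in seq[:-1]]          # r * (z1, z2, ...), same length

assert shifted == scaled
print("T(seq) == r * seq on the first", len(shifted), "terms")
```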

5 Axler: Chapter 5, Exercise 9


Problem Statement. Suppose that T ∈ L(V) and dim(T(V)) = k. Prove that T has at most k + 1 distinct eigenvalues.

Problem Solution. Since the dimension of the image of T is k, there is no linearly independent set of vectors of size larger than k in the image. If λ is a nonzero eigenvalue with eigenvector v, then v = T(v∕λ), so v is in the image of T. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, any set of eigenvectors for distinct nonzero eigenvalues sits inside the image and so has size at most k; hence T has at most k distinct nonzero eigenvalues. Throwing in with them the possible eigenvalue zero, whose eigenvectors live in the kernel rather than the image, we get that the number of distinct eigenvalues is limited to k + 1.

6 Axler: Chapter 5, Exercise 10


Problem Statement. Suppose T ∈ L(V) is invertible and λ ∈ F \ {0}. Prove that λ is an eigenvalue of T if and only if 1∕λ is an eigenvalue of T^-1.

Problem Solution. Let T ∈ L(V) be an invertible mapping and let a nonzero λ ∈ F be an eigenvalue of T. Let v be an eigenvector corresponding to λ. In this case we have the following.

Tv  =  λv
v  =  λT^-1 v
(1∕λ)v  =  T^-1 v

Therefore we can see that the inverse of T has 1∕λ as an eigenvalue, with the same eigenvector v. The proof of the reverse direction is identical, applying the argument to T^-1 and 1∕λ in place of T and λ.
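A numeric spot-check in Python (the diagonal T below is my own choice, so its inverse and eigenvalues are obvious):

```python
# If Tv = lam*v, then T^{-1} v = (1/lam) v.  Checked with a diagonal T
# and exact rational arithmetic via fractions.Fraction.
from fractions import Fraction

lam = Fraction(3)
T = [[Fraction(3), Fraction(0)],
     [Fraction(0), Fraction(5)]]
Tinv = [[Fraction(1, 3), Fraction(0)],     # inverse of a diagonal matrix
        [Fraction(0), Fraction(1, 5)]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

v = [Fraction(1), Fraction(0)]             # eigenvector of T for lam = 3
assert matvec(T, v) == [lam * x for x in v]
assert matvec(Tinv, v) == [x / lam for x in v]   # eigenvalue 1/lam for T^{-1}
print("T has eigenvalue", lam, "and T^-1 has eigenvalue", 1 / lam)
```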

7 Axler: Chapter 5, Exercise 11


Problem Statement. Let S, T ∈ L(V). Prove that ST and TS have the same eigenvalues.

Problem Solution. Let λ be an eigenvalue of ST. Then there exists a nonzero v ∈ V such that STv = λv. Applying T to both sides we get

TS(Tv) = T(λv) = λ(Tv)

so if Tv ≠ 0, then Tv is an eigenvector of TS and λ is also an eigenvalue for TS. If instead Tv = 0, then λv = STv = 0, so λ = 0; moreover T is then not injective, hence TS is not invertible and (V being finite dimensional) not injective, so λ = 0 is again an eigenvalue of TS. Swapping the roles of S and T gives the other containment, so ST and TS have the same eigenvalues.
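For 2×2 matrices, eigenvalues are the roots of λ^2 - tr(M)λ + det(M), so a Python check (with matrices of my own choosing) only needs to compare traces and determinants of ST and TS:

```python
# ST and TS share the characteristic polynomial for 2x2 matrices when
# their traces and determinants agree.  S and T are arbitrary examples.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, 2], [3, 4]]
T = [[0, 1], [5, 6]]

ST, TS = matmul(S, T), matmul(T, S)
trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert trace(ST) == trace(TS)
assert det(ST) == det(TS)
print("ST and TS share trace and determinant, hence eigenvalues")
```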

8 Axler: Chapter 5, Exercise 12


Problem Statement. Suppose that T ∈ L(V) is such that every vector in V is an eigenvector of T. Prove that T is just a scalar multiple of the identity operator.

Problem Solution. I don’t know how to prove this for infinite dimensional vector spaces, so assume V is finite dimensional with dim V = n. Let v1,...,vn be a basis of V with corresponding eigenvalues λ1,...,λn, which may or may not be distinct, and let v = a1v1 + ⋅⋅⋅ + anvn be any vector with eigenvalue λ. Then

Tv  =  λv
λ(a1v1 + ⋅⋅⋅ + anvn)  =  T(a1v1 + ⋅⋅⋅ + anvn)
a1λv1 + ⋅⋅⋅ + anλvn  =  a1λ1v1 + ⋅⋅⋅ + anλnvn

Comparing coefficients, λ = λj whenever aj ≠ 0. Taking v = v1 + ⋅⋅⋅ + vn, where every aj = 1, shows that λ1 = ⋅⋅⋅ = λn. Hence every vector in the space has the same eigenvalue, and, calling that value λ, then T = λI.

9 Axler: Chapter 5, Exercise 15


Problem Statement. Suppose F = C, T ∈ L(V), p ∈ P(C), and a ∈ C. Prove that a is an eigenvalue of p(T) if and only if a = p(λ) for some eigenvalue λ of T.

Problem Solution. Let a be an eigenvalue of p(T), so that there exists a nonzero vector v such that

p(T)v = av.

If p is constant the statement is immediate (p(T) is then a scalar multiple of I, and T has at least one eigenvalue since F = C), so suppose p is nonconstant. Since F = C, the polynomial p(z) - a factors completely:

p(z) - a = c(z - λ1)⋅⋅⋅(z - λm)

with c ≠ 0. Applying this factorization to T,

0 = (p(T) - aI)v = c(T - λ1I)⋅⋅⋅(T - λmI)v,

so some operator T - λjI fails to be injective; that is, there is a nonzero w with

Tw = λjw,

and λj is an eigenvalue of T satisfying a = p(λj).

Conversely if we suppose that λ is an eigenvalue of T, then there is a vector v such that

Tv = λv.

Thus applying p(T) to v we get the following.

p(T)v  =  (a0I + a1T + a2T^2 + ⋅⋅⋅ + amT^m)v
       =  a0v + a1Tv + a2T^2v + ⋅⋅⋅ + amT^m v
       =  a0v + a1λv + a2λ^2v + ⋅⋅⋅ + amλ^m v
       =  p(λ)v

Hence, a = p(λ) is an eigenvalue of p(T).
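A sanity check of this in Python, with a diagonal T and p(z) = z^2 + 1 chosen by me:

```python
# For diagonal T with eigenvalues 2 and 3, p(T) = T^2 + I should have
# eigenvalues p(2) = 5 and p(3) = 10.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[2, 0], [0, 3]]
I = [[1, 0], [0, 1]]
T2 = matmul(T, T)
pT = [[T2[i][j] + I[i][j] for j in range(2)] for i in range(2)]  # T^2 + I

p = lambda z: z * z + 1
assert pT == [[p(2), 0], [0, p(3)]]
print("eigenvalues of p(T):", p(2), "and", p(3))
```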

10 Axler: Chapter 5, Exercise 16


11 Axler: Chapter 5, Exercise 21


Problem Statement. Suppose P ∈ L(V) and P^2 = P. Prove that V = ker P ⊕ P(V).

Problem Solution. Let us refer to ker P ⊕ P(V) as W. The sum is indeed direct: if w ∈ ker P ∩ P(V), then w = Pu for some u, so w = Pu = P^2u = Pw = 0. Assume by way of contradiction that V is not W. We know that W is a subspace of V since both the image and the kernel of P are subspaces of V, so there exists at least one vector of V that is not in W. Let v′ be such a vector, and allow vk1,...,vk2 and vi1,...,vi2 to be the basis vectors of the kernel and the image of P, respectively.

Since v′ is outside of W, it is in particular outside the kernel of P. Now Pv′ lies in the image of P and so is representable as a linear combination of vi1,...,vi2; moreover each vij = Pu for some u, so Pvij = P^2u = Pu = vij, since P^2 = P. Hence

Pv′  =  ai1Pvi1 + ⋅⋅⋅ + ai2Pvi2
Pv′ - ai1Pvi1 - ⋅⋅⋅ - ai2Pvi2  =  0
P(v′ - ai1vi1 - ⋅⋅⋅ - ai2vi2)  =  0.
Hence, v′ - ai1vi1 - ⋅⋅⋅ - ai2vi2 is in the kernel of P. Thus we in turn have

v′ - ai1vi1 - ⋅⋅⋅ - ai2vi2 = ak1vk1 + ⋅⋅⋅ + ak2vk2
v′ = ai1vi1 + ⋅⋅⋅ + ai2vi2 + ak1vk1 + ⋅⋅⋅ + ak2vk2

which exhibits v′ as an element of W, contradicting our choice of v′ outside of W. Hence our assumption was incorrect and V is indeed W.
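The splitting can also be seen directly: any v decomposes as v = (v - Pv) + Pv, with the first piece in ker P and the second in P(V). A Python sketch with a hand-picked P satisfying P^2 = P (my own example):

```python
# For P with P^2 = P, every v splits as v = (v - Pv) + Pv, where
# v - Pv is in ker P and Pv is in the image of P.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 1],        # a (non-orthogonal) projection: P^2 = P
     [0, 0]]
assert matmul(P, P) == P

v = [3, 4]
Pv = matvec(P, v)
kernel_part = [x - y for x, y in zip(v, Pv)]

assert matvec(P, kernel_part) == [0, 0]               # kernel component
assert [k + i for k, i in zip(kernel_part, Pv)] == v  # the pieces sum to v
print("v =", kernel_part, "+", Pv)
```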

12 Finding eigenvalues and eigenvectors.


We are looking for the eigenvalues of the matrix

A = ( 1  1 )
    ( 1  0 )

which, seeing as we “don’t know” about determinants yet, we will attack by brute force. Solve the equation

( 1  1 ) ( a )  =  λ ( a )
( 1  0 ) ( b )       ( b )

for λ and hope we get something good in return. Alas, we get the following two equations

a + b  =  λa
    a  =  λb
from which we can derive that λ^2 = λ + 1, i.e. it’s the golden ratio!! Thus the two eigenvalues are the roots, φ+ and φ-, of the previous equation. Hence any vectors (a b)^T that satisfy the above system of equations will be eigenvectors. Furthermore, since the eigenvalues are distinct, the corresponding eigenvectors will be linearly independent, which in turn implies that the eigenvectors will also be a basis since the vector space in question has dimension two. Hence a matrix of the form
( a  a′ )
( b  b′ )

where (a b)^T and (a′ b′)^T are eigenvectors corresponding to φ+ and φ-, respectively, will be invertible according to Artin [1, pg 96]. Continuing this run of “furthermores” and “hences”, we thus have that A has a diagonal matrix with respect to the basis of its eigenvectors, as per Axler [2, pg 88-89].

Let’s set b = 1 in the eigenvectors (a b)^T corresponding to both φ+ and φ-. Hence our eigenvectors are

( φ+ )       ( φ- )
(  1 )  and  (  1 )
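Before inverting anything, a quick Python check (floating point, hence isclose) that both roots of x^2 = x + 1 really are eigenvalues of A with these eigenvectors:

```python
# A applied to (phi, 1) returns phi * (phi, 1), since phi^2 = phi + 1
# holds for both roots of x^2 = x + 1.
from math import sqrt, isclose

A = [[1, 1], [1, 0]]
for phi in ((1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2):
    v = [phi, 1]
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    # Av should equal phi * v componentwise
    assert all(isclose(x, phi * y) for x, y in zip(Av, v))
print("both roots of x^2 = x + 1 are eigenvalues of A")
```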

Saving you the eye-sore of a calculation that is finding the inverse of

M = ( φ+  φ- )
    (  1   1 )
(12.1)

via the stick-the-identity-matrix-on-the-right-and-convert-to-rref method, we have that M has

M^-1 = 1∕(φ+ - φ-) (  1  -φ- )
                   ( -1   φ+ )

as an inverse, where φ+ - φ- = √5. Man that’s nasty.

Now we have that M^-1 AM will be a diagonal matrix equal to

D = ( φ+   0 )
    (  0  φ- ).

Thus we can write a closed form expression for A^t as MD^t M^-1, or in a way more conducive to understanding the computational simplicity,

M ( φ+^t    0   ) M^-1
  (   0    φ-^t )
See the appendix for the Octave output showing the differences between the one form and the other.

References

[1]   Artin, Michael. Algebra. Prentice Hall. Upper Saddle River NJ: 1991.

[2]   Axler, Sheldon. Linear Algebra Done Right 2nd Ed. Springer. New York NY: 1997.

Appendix

We see below that all of the answers turn out to be the same whether calculating A^t directly or by using its representation in the basis of its eigenvectors.

  octave:1> A = [ 1 1 ; 1 0 ];  
  octave:2> varphip = (1+sqrt(5))/2;  
  octave:3> varphim = (1-sqrt(5))/2;  
  octave:4> D = [ varphip 0 ; 0 varphim ];  
  octave:5> M = [ varphip varphim ; 1 1 ];  
  octave:6> Minv = M^-1;  
  octave:7> [ A (M*D*Minv) ]  
  ans =  
 
     1.00000   1.00000   1.00000   1.00000  
     1.00000   0.00000   1.00000  -0.00000  
 
  octave:8> [ A^2 (M*D^2*Minv) ]  
  ans =  
 
     2.00000   1.00000   2.00000   1.00000  
     1.00000   1.00000   1.00000   1.00000  
 
  octave:9> [ A^3 (M*D^3*Minv) ]  
  ans =  
 
     3.0000   2.0000   3.0000   2.0000  
     2.0000   1.0000   2.0000   1.0000  
 
  octave:10> [ A^4 (M*D^4*Minv) ]  
  ans =  
 
     5.0000   3.0000   5.0000   3.0000  
     3.0000   2.0000   3.0000   2.0000  
 
  octave:11> [ A^5 (M*D^5*Minv) ]  
  ans =  
 
     8.0000   5.0000   8.0000   5.0000  
     5.0000   3.0000   5.0000   3.0000