So aw + bw′∈ W, and thus W is a subspace.
for a, b, c, d ∈ F. Matrices in W have the above form with the additional restriction that a + d = 0, i.e. d = −a, so all elements of W have the form
$$\begin{pmatrix} a & b \\ c & -a \end{pmatrix}.$$
With this form, one basis (maybe call it the standard basis for W) would be
$$\left\{ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \right\},$$
since these matrices are linearly independent and span W. With this, W has dimension three.
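As a quick numerical sanity check (not part of the original solution), we can flatten each of the three trace-zero matrices just named into a vector and confirm linear independence:

```python
import numpy as np

# The three proposed basis matrices for W = {2x2 matrices with trace 0}.
basis = [
    np.array([[1, 0], [0, -1]]),  # a-component
    np.array([[0, 1], [0, 0]]),   # b-component
    np.array([[0, 0], [1, 0]]),   # c-component
]

# Each basis element has trace zero, so each lies in W.
assert all(np.trace(B) == 0 for B in basis)

# Stack the flattened matrices as columns; rank 3 means linear independence.
M = np.column_stack([B.flatten() for B in basis])
print(np.linalg.matrix_rank(M))  # 3, so dim W = 3
```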
Generally false, since inverses are unique and the inverse of $[T]_\alpha^\beta$ is $[T^{-1}]_\beta^\alpha$ by Theorem 2.8 [1, p. 101]. This could potentially be true if V, W, α, β, and T were chosen appropriately, say V = W and α = β.
True, since $[T]_\alpha^\beta [v]_\alpha = [T(v)]_\beta$ for v ∈ V.
False. The space $M_{2\times 3}(F)$ has dimension 6, while $F^5$ has dimension 5.
True. Both $P_n(F)$ and $P_m(F)$ are over the same field and have the same dimension if and only if n = m, so by Theorem 2.19 [1, p. 103], they are isomorphic.
False. The product of
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$$
is the two-by-two identity matrix, but neither factor is invertible, as neither is square.
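A small numpy sketch makes the counterexample concrete; the specific 2 × 3 and 3 × 2 matrices here are illustrative choices:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])   # 2x3
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])      # 3x2

print(A @ B)  # the 2x2 identity matrix
# BA, by contrast, is 3x3 and is not the identity:
print(B @ A)
```

Neither factor can be invertible, since invertibility is only defined for square matrices.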
True, since $A^{-1}$ satisfies the requirements of being an inverse of $(A^{-1})^{-1}$, and inverses are unique [1, p. 100]; hence $(A^{-1})^{-1} = A$.
True by Corollary 2 of Theorem 2.18 [1, p. 102].
True. If A were an m × n matrix with m ≠ n, then there would exist no matrix B such that AB = BA, since AB and BA would have different sizes (m × m and n × n, respectively).
The linear transformation T is not invertible. The vector spaces ℝ2 and ℝ3 have dimension 2 and 3, respectively, so they are not isomorphic to each other. Because they are not isomorphic, no linear transformation between them can be invertible; in particular, T cannot be.
The reasoning of the previous problem stands here as well.
This linear transformation is invertible.
Let (a, b, c), (a′, b′, c′) ∈ ℝ3 be such that T(a, b, c) = T(a′, b′, c′). Then (3a − 2c, b, 3a + 4b) = (3a′ − 2c′, b′, 3a′ + 4b′), which requires that b = b′. Due to this, 3a + 4b = 3a′ + 4b′ implies that a = a′, which in turn tells us that 3a − 2c = 3a′ − 2c′, and thus c = c′. So T is one-to-one.
Let (x, y, z) ∈ ℝ3. Seeing that for
$$v = \left(\tfrac{z - 4y}{3},\; y,\; \tfrac{z - 4y - x}{2}\right)$$
we have T(v) = (x, y, z), T is onto. Because T is one-to-one and onto, it is invertible.
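Since T(a, b, c) = (3a − 2c, b, 3a + 4b) is given explicitly, we can sketch the inverse formula derived above and verify the round trip numerically (the test point (1, 2, 3) is arbitrary):

```python
import numpy as np

def T(a, b, c):
    return (3*a - 2*c, b, 3*a + 4*b)

def T_inv(x, y, z):
    # Derived by solving T(a,b,c) = (x,y,z):
    # b = y, a = (z - 4y)/3, c = (3a - x)/2 = (z - 4y - x)/2.
    return ((z - 4*y) / 3, y, (z - 4*y - x) / 2)

v = T_inv(1.0, 2.0, 3.0)
print(np.allclose(T(*v), (1.0, 2.0, 3.0)))  # True
```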
P3(ℝ) and P2(ℝ) differ in dimension, so as before, there is no invertible linear transformation between them.
M2×2 and P2(ℝ) differ in dimension, so as before, there is no invertible linear transformation between them.
This is an invertible linear transformation. Define U : M2×2 → M2×2 by the following.
With this definition we see
and
so $U \circ T = I_V$ and $T \circ U = I_W$. Thus T is invertible.
The vector spaces F3 and P3(F) are not isomorphic as they do not have the same dimension. The former has dimension 3 and the latter, 4.
The vector spaces F4 and P3(F) are isomorphic since they have the same dimension and are both over F.
The vector spaces M2×2 and P3(F) are isomorphic since they have the same dimension and are both vector spaces over F.
The vector space V has dimension three, as seen in problem 1, but ℝ4 has dimension four, so these two vector spaces are not isomorphic.
$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I$$
and
$$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I.$$
This implies that $B^{-1}A^{-1}$ is the inverse of AB; that is, $(AB)^{-1} = B^{-1}A^{-1}$.
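A quick numerical spot-check of $(AB)^{-1} = B^{-1}A^{-1}$; the random matrices here are illustrative (generic random matrices are invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True
```

Note that the order of the factors reverses, exactly as the derivation shows.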
Since
$$A^t(A^{-1})^t = (A^{-1}A)^t = I^t = I$$
and
$$(A^{-1})^t A^t = (AA^{-1})^t = I^t = I,$$
we know that $A^t$ is invertible and furthermore that its inverse is $(A^{-1})^t$.
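The identity $(A^t)^{-1} = (A^{-1})^t$ can likewise be spot-checked on an arbitrary invertible matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))  # generically invertible

print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True
```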
Let A² = O. Assume, toward a contradiction, that A is invertible. Then AA⁻¹ = I, which implies that
$$A = AI = A(AA^{-1}) = A^2A^{-1} = OA^{-1} = O.$$
But the zero matrix is not invertible, a contradiction, so A is not invertible.
If AB = O for some nonzero n × n matrix B, then A cannot be invertible: if it were, then B = IB = (A⁻¹A)B = A⁻¹(AB) = A⁻¹O = O, contradicting that B is nonzero.
The composite $L_A \circ L_B = L_{AB}$ is invertible, hence both onto and one-to-one, which informs us, respectively, that $L_A$ is onto and $L_B$ is one-to-one. But since these two maps each have domain and codomain $M_{n\times 1}$, namely a domain and codomain of equal dimension, Theorem 2.5 [1, p. 71] gives us that $L_A$ and $L_B$ are bijections, that is, they are invertible. Hence A and B are invertible too.
which was derived in an attempt to “undo” the definition of V to get back into the space F3. Let a,b ∈ F and
be elements of V. By the following, we can see that T indeed belongs to $\mathcal{L}(V, F^3)$.
With this definition, we have that for (a,b,c) ∈ F3
and for any element of V
Hence $T \circ U = I_{F^3}$ and $U \circ T = I_V$, so T is an isomorphism from V to $F^3$.
So Φ is linear. Define Φ′ : Mn×n → Mn×n by Φ′(A) = BAB⁻¹. Given this definition, we see
$$\Phi'(\Phi(A)) = B(B^{-1}AB)B^{-1} = A$$
and
$$\Phi(\Phi'(A)) = B^{-1}(BAB^{-1})B = A,$$
so $\Phi' \circ \Phi = \Phi \circ \Phi' = I_{M_{n\times n}}$, which together with the linearity of Φ shows that Φ is invertible.
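The two composition identities can be spot-checked numerically; the B below is an arbitrary (generically invertible) matrix standing in for the fixed B of the problem:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))   # generic B is invertible
B_inv = np.linalg.inv(B)

def phi(A):        # Phi(A) = B^{-1} A B
    return B_inv @ A @ B

def phi_prime(A):  # the proposed inverse, Phi'(A) = B A B^{-1}
    return B @ A @ B_inv

A = rng.standard_normal((3, 3))
print(np.allclose(phi_prime(phi(A)), A))  # True
print(np.allclose(phi(phi_prime(A)), A))  # True
```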
Remembering from the first chapter of our book [1] that a set of Lagrange polynomials in $P_n(F)$ is a basis, we, in an attempt to find an inverse for T, define $U : F^{n+1} \to P_n(F)$ using the Lagrange polynomials $g_0, \ldots, g_n$ corresponding to $c_0, \ldots, c_n$ by $U(a_0, \ldots, a_n) = a_0g_0 + \cdots + a_ng_n$. With this definition, we have that
where the coordinates of f in the basis $g_0, \ldots, g_n$ are $a_0, \ldots, a_n$. From this we can see that T is invertible with inverse U. Thus, since T is linear, it is an isomorphism.
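A small Python sketch of this construction, assuming T is the evaluation map $f \mapsto (f(c_0), \ldots, f(c_n))$ that the use of Lagrange polynomials suggests; the nodes $c_i$ below are arbitrary sample values:

```python
import numpy as np

c = np.array([0.0, 1.0, 2.0])  # hypothetical stand-ins for c_0, ..., c_n

def lagrange_basis(i, x):
    """Evaluate the Lagrange polynomial g_i at x; g_i(c_j) = delta_ij."""
    terms = [(x - c[j]) / (c[i] - c[j]) for j in range(len(c)) if j != i]
    return np.prod(terms)

def U(a):
    """U(a_0, ..., a_n) = a_0 g_0 + ... + a_n g_n, returned as a callable."""
    return lambda x: sum(a[i] * lagrange_basis(i, x) for i in range(len(c)))

def T(f):
    """Assumed evaluation map: T(f) = (f(c_0), ..., f(c_n))."""
    return np.array([f(ci) for ci in c])

a = np.array([3.0, -1.0, 4.0])
print(np.allclose(T(U(a)), a))  # True: T(U(a)) recovers a, so T o U = identity
```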
False. The jth column of Q is $[x_j]_\beta$.
True. It is the matrix representation of the identity transformation, which is invertible.
False. If Q changes β′-coordinates into β-coordinates, then $[v]_\beta = Q[v]_{\beta'}$ for all v ∈ V.
False. The matrices A, B ∈ Mn×n are called similar if B = Q⁻¹AQ for some invertible Q ∈ Mn×n.
True by Theorem 2.23 [1, p. 112]. In this case one can be retrieved from the other by the change of coordinate matrix.
β = {e1,e2} and β′ = {(a1,a2),(b1,b2)}
β = {(−1,3),(2,−1)} and β′ = {(0,10),(5,0)}
β = {(2,5),(−1,−3)} and β′ = {e1,e2}
β = {(−4,3),(2,−1)} and β′ = {(2,1),(−4,1)}
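As an illustration of how such a Q is computed, take the pair β = {(2, 5), (−1, −3)} with β′ the standard basis, assuming the convention $[v]_\beta = Q[v]_{\beta'}$:

```python
import numpy as np

# Columns of B are the beta vectors (2,5) and (-1,-3); beta' is standard.
B = np.array([[2.0, -1.0],
              [5.0, -3.0]])

# Q changes beta' (standard) coordinates into beta coordinates,
# i.e. [v]_beta = Q v, so Q = B^{-1}.
Q = np.linalg.inv(B)

v = np.array([1.0, 1.0])
coords = Q @ v
# Reconstructing v from its beta coordinates recovers v:
print(np.allclose(B @ coords, v))  # True
```

The first β vector itself, (2, 5), should of course have β-coordinates (1, 0), which the same Q confirms.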
and B be the matrix resulting from swapping the two rows of A. Since
we cannot find a matrix that will equal B via left multiplication by A, let alone an elementary matrix.
The rank is two:
The rank is three:
The rank is two:
The rank is one since the second row is just a multiple of the first row.
The rank is three:
The rank is one since all rows are multiples of the first.
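The rank computations above can be reproduced with numpy; the matrix below is an illustrative rank-one example (not the book's), with every row a multiple of the first:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],      # 2 * row 1
              [-1, -2, -3]])  # -1 * row 1

print(np.linalg.matrix_rank(A))  # 1
```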
First assume that A has rank zero. This implies that $L_A$ also has rank zero, that is, it has nullity n, since its domain is $F^n$. So $L_A(v) = 0$ for all $v \in F^n$; in other words, $Av' = 0$ for all $v' \in M_{n\times 1}$, and only the zero matrix does this. Hence A is the zero matrix.
Conversely, let A be the zero matrix. This has rank zero since LA is the zero map, which has zero rank.
Original
R2 − 2R1 → R2
R3 − R1 → R3
C2 − C1 → C2
C3 − C1 → C3
C4 − 2C1 → C4
C3 −C2 → C3
C4 − C2 → C4
R2 → R2
Original
R1 + 2R2 → R1
R3 + 2R2 → R3
Swap R1 and R2
−1R1 → R1
R2 → R2
R3 → R3
R1 + 2R2 → R1
R3 − R2 → R3
So the rank is two.
The elementary matrices resulting from these operations (write $E_i$ for the elementary matrix of the ith step above) can be inverted in order to obtain A as their product, since $E_6 \cdots E_1 A = I$. So we get $A = E_1^{-1} \cdots E_6^{-1}$. The inverses are (note $E_2 = E_5$):
So
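The bookkeeping with elementary matrices can be illustrated on a small example of our own (a hypothetical 2 × 2 matrix, not the A of this problem):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

# Record each row operation as an elementary matrix E_i.
E1 = np.array([[0.5, 0.0], [0.0, 1.0]])   # (1/2)R1 -> R1
E2 = np.array([[1.0, 0.0], [-4.0, 1.0]])  # R2 - 4R1 -> R2
E3 = np.array([[1.0, -0.5], [0.0, 1.0]])  # R1 - (1/2)R2 -> R1

print(np.allclose(E3 @ E2 @ E1 @ A, np.eye(2)))  # True: E_3 E_2 E_1 A = I

# Inverting each step recovers A as a product of elementary matrices:
recovered = np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3)
print(np.allclose(recovered, A))  # True: A = E_1^{-1} E_2^{-1} E_3^{-1}
```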
The reduced row echelon form of A is the following.
which informs us that the rank of A is three, i.e. the rank of $L_A$ is three as well. This then tells us the nullity is two, since the domain of the linear map is $F^5$. Since $L_A$ maps every element of its null space to zero, a matrix M whose columns are (coordinates of) vectors from the null space of $L_A$ will satisfy AM = O. Moreover, we can find such an M with rank two, because the dimension of the null space of $L_A$ is two, i.e. two is the maximum size of any linearly independent subset of it. So we choose the two vectors corresponding to the free variables in the reduced row echelon form of A above, namely
So letting M be the matrix whose columns are all zero except for two columns, which are the two vectors immediately above, results in AM = O.
Note that the ordering of the above columns in M is irrelevant, since the ith column of AM is $Am_i$, where $m_i$ is the ith column of M.
For any 5 × 5 matrix B with AB = O, the columns of B must be contained in the null space of $L_A$. Since that space has dimension two, no linearly independent set of its vectors can have size greater than two. This applies to the columns of B, so rank(B) is at most two.
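The whole construction can be sketched numerically; the rank-three 5 × 5 matrix below is a stand-in for the book's A, and the null-space basis is computed from the SVD rather than from free variables:

```python
import numpy as np

rng = np.random.default_rng(3)
# A 5x5 matrix of rank 3: product of a 5x3 and a 3x5 factor.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))

# An orthonormal basis of the null space of L_A from the SVD:
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[3:].T        # 5x2; columns span the null space

# M: the two null-space vectors as columns, zeros elsewhere.
M = np.zeros((5, 5))
M[:, :2] = null_basis

print(np.allclose(A @ M, 0))       # True: AM = 0
print(np.linalg.matrix_rank(M))    # 2, the maximum possible
```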
So
is the n × m matrix we require.
Multiplying both sides by leaves us with
and thus the m × n matrix A we are looking for is $E_k \cdots E_1$.
$$(I - A)(I + A) = I + A - A - A^2 = I - A^2 = I$$
as well as
$$(I + A)(I - A) = I - A + A - A^2 = I - A^2 = I,$$
using that $A^2 = O$. So I + A is the inverse of I − A.
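A concrete check, using a hypothetical A with A² = O (the hypothesis this problem relies on):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
assert np.array_equal(A @ A, np.zeros((2, 2)))  # A^2 = O

I = np.eye(2, dtype=int)
print(np.array_equal((I - A) @ (I + A), I))  # True
print(np.array_equal((I + A) @ (I - A), I))  # True
```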
and so $\beta_i^*$ is linear.
Let $f \in V^*$. Since f is completely determined by its action on a basis of V, such as β, we need to express f in terms of β∗ such that $f(\beta_i) = (a_1\beta_1^* + \cdots + a_n\beta_n^*)(\beta_i)$, where $a_1\beta_1^* + \cdots + a_n\beta_n^*$ is the yet-to-be-determined representation of f in β∗. Since $\beta_i^*(\beta_j) = \delta_{ij}$ (the Kronecker delta), we simply need to set $a_i = f(\beta_i)$ to obtain our desired property.
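In coordinates this construction looks as follows; the basis below (the columns of B) is a hypothetical example, functionals are represented as row vectors, and the dual basis functionals are the rows of B⁻¹:

```python
import numpy as np

# Columns of B form a (hypothetical) basis beta of R^3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
B_inv = np.linalg.inv(B)

# Row i of B^{-1} applied to column j of B gives delta_ij,
# so the rows of B^{-1} are the dual basis functionals beta_i^*.
print(np.allclose(B_inv @ B, np.eye(3)))  # True

f = np.array([2.0, -1.0, 3.0])                 # an arbitrary functional
a = np.array([f @ B[:, i] for i in range(3)])  # a_i = f(beta_i)
print(np.allclose(a @ B_inv, f))  # True: f = sum_i f(beta_i) beta_i^*
```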
So $T^t$ is linear.
Given how we generate a linear functional with the dual basis, we get
which can be rewritten making use of the definition of $T^t$ as
but by the way we defined the dual basis, the ij entry of this matrix is the ith coordinate of $T^t(c_j^*)$ in the dual basis. So the matrix is then
which is just the matrix of $T^t$ with respect to the dual bases.
Conversely, suppose $T^t$ is an isomorphism. Then the matrix of $T^t$ with respect to the dual bases is invertible. This matrix is equal to the transpose of the matrix of T, and a matrix is invertible if and only if its transpose is; therefore T, being linear, is an isomorphism.
and thus EV = L.