Math 312: Linear Algebra
Practice Set 1
Lawrence Tyler Rush
<me@tylerlogic.com>
July 11, 2012
http://coursework.tylerlogic.com/math312/practice01
Systems of Linear Equations and Matrices
1 Select problems from the book
(a) Problem 2 §1.2
The following matrix is the zero matrix in M_{3×4}(F), where 0 is the 0 element of F.
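Written out, that matrix is
\[
0_{3\times 4} \;=\;
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}.
\]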
(b) Problem 2 §3.4
— (a) —
From the given system of equations we form the augmented matrix and row reduce it. This gets us that x_1 = 4, x_2 = −3, and x_3 = −1.
— (b) —
From the given system of equations we form the augmented matrix and row reduce it. We then see that x_3 is a free variable, so the system has infinitely many solutions, parametrized by x_3.
(c) Problem 5 §3.4
By Theorem 3.16 we know that the first (a_1), second (a_2), and fourth (a_4) columns of A are linearly independent, and, moreover, we can obtain any column of A from the corresponding column of RREF(A), say (d_1, d_2, d_3)^T, in the following manner.
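Spelling the manner out (my reconstruction of the relation, using the pivot columns named above): if a column of RREF(A) has entries (d_1, d_2, d_3)^T, then the corresponding column a of A is the same combination of the pivot columns,
\[
a \;=\; d_1 a_1 + d_2 a_2 + d_3 a_4 .
\]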
Using this we compute the third and fifth columns of A (we already know the others), which gives us all of A.
2 Explain if the following are true or false:
(a) Every homogeneous system of linear equations has a solution.
This is true: we can always set all the variables to zero. For a possibly more convincing argument, we see by Theorem 3.8 that the set of solutions of a homogeneous system of linear equations equals the null space of the linear transformation induced by left multiplication by the matrix A in the equation Ax = 0. Since the null space is a subspace of whatever vector space we are dealing with, it always contains the zero vector.
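In symbols, the trivial solution always works:
\[
A\mathbf{0} \;=\; \mathbf{0}.
\]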
(b) Every system of linear equations has a solution.
This is false; a system whose equations contradict one another has no solution, as in the example below.
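One such system (any inconsistent pair of equations would do just as well):
\[
\begin{aligned}
x_1 + x_2 &= 0,\\
x_1 + x_2 &= 1.
\end{aligned}
\]
No choice of x_1 and x_2 can satisfy both equations at once.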
(c) An m × n matrix has m rows and n columns.
This is true by definition of a matrix.
(d) For all matrices A, B ∈ M_{n×n}(F) one has A ⋅ B = B ⋅ A
This is true for n = 1, since in that case the multiplication takes place within the field F itself, and multiplication in a field is commutative.
This is not true otherwise. For any n > 1 we can simply take A to be the matrix in which every entry is zero with the exception of A_{1n} = 1. Similarly we take B to be the matrix in which again every entry is zero, but this time with the exception of B_{n1} = 1. With these two matrices, both A ⋅ B and B ⋅ A have all zeros except for one entry; unfortunately those entries sit in different locations: (A ⋅ B)_{11} = 1 while (B ⋅ A)_{nn} = 1.
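In the notation E_{ij} for the matrix with a 1 in position (i, j) and zeros elsewhere (my notation, just for this remark), the computation reads
\[
E_{1n} E_{n1} = E_{11},
\qquad
E_{n1} E_{1n} = E_{nn},
\]
and E_{11} ≠ E_{nn} whenever n > 1.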
(e) For all matrices A, B ∈ M_{n×n}(F) one has A + B = B + A
This is true, as M_{n×n}(F) is a vector space and commutativity of addition is one of the vector space axioms.
(f) If, after a certain number of elementary row operations, an augmented matrix contains a row of the form (0 0 ∣ a) with a ≠ 0, then the associated non-homogeneous system is inconsistent (i.e. has no solution).
This is indeed true. When elementary row operations are performed on a matrix, the solution set of the system associated with the initial matrix is the same as the solution set of the system associated with the resulting matrix (Corollary to Theorem 3.13). The row (0 0 ∣ a) corresponds to the equation 0 = a, which no assignment of the variables can satisfy when a ≠ 0, so the original system has no solution either.
3 Echelon, RREF, or neither and why. Find the RREF if it's not in RREF.
(a) ( 1 0 1 ; 1 1 0 ; 3 2 1 )
This is not in echelon form, nor in RREF. Among other problems, the pivot in the third row is not a one (a trait required of both echelon form and RREF), and there are nonzero entries below the pivot of the first row. The RREF is as follows.
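Row reducing (R_2 → R_2 − R_1, R_3 → R_3 − 3R_1, then R_3 → R_3 − 2R_2):
\[
\begin{pmatrix}
1 & 0 & 1 \\
1 & 1 & 0 \\
3 & 2 & 1
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 0 & 1 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{pmatrix}.
\]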
(b) ( 1 0 2 ; 0 1 1 ; 0 0 1 )
This is in echelon form but not in RREF, because there are nonzero entries above the pivot in row three. The RREF is simply I_3 in this case.
(c) ( 1 0 0 ; 0 0 1 ; 0 1 0 )
This is neither in echelon form nor in RREF, since the pivot of the third row is not to the right of the pivot of row two. The RREF here is again I_3.
4 Find examples of the following.
(a) Two 2 × 2 matrices A, B such that A ⋅ B ≠ B ⋅ A
Letting A and B be the pair of matrices worked out below, we see that A ⋅ B and B ⋅ A come out different.
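One choice that works (reusing the single-nonzero-entry idea from 2(d)):
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\quad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\qquad
A \cdot B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\;\neq\;
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
= B \cdot A .
\]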
(b) Two 3 × 3 matrices A, B such that A ⋅ B ≠ B ⋅ A
Let's play the same trick once more with the 3 × 3 pair worked out below; again A ⋅ B and B ⋅ A disagree.
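For instance, putting the single nonzero entries in opposite corners:
\[
A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\quad
B = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},
\qquad
A \cdot B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\;\neq\;
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= B \cdot A .
\]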
Another sneaky thing that we could have done would have been to just buffer our previous 2 × 2 example with some zeros below and to the right. This buffering with zeros should work in all cases, since the four upper-left entries produce the same products as those in the 2 × 2 case. And looking at what Artin has to say, we can split the matrix multiplication into blocks [1, p. 8]; see the display below. So appropriately buffering lower-dimensional matrices which do not commute with zeros gives us higher-dimensional matrices which do not commute under multiplication.
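In block form (my paraphrase of the rule cited above, specialized to this situation):
\[
\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} AB & 0 \\ 0 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} BA & 0 \\ 0 & 0 \end{pmatrix},
\]
so if A ⋅ B ≠ B ⋅ A, then the buffered matrices fail to commute as well.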
(c) Two 3 × 3 matrices A, B with A, B ≠ 0, I_3 such that A ⋅ B = B ⋅ A
Let's follow the lead of the identity matrix and play with variants of it, since we know that the identity matrix commutes with any other matrix with respect to matrix multiplication. For the pair of such variants worked out below, the two products agree, so we have what we are looking for.
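For example, taking scalar multiples of the identity (over ℝ, say, so that neither matrix is 0 or I_3):
\[
A = 2 I_3, \quad B = 3 I_3,
\qquad
A \cdot B = 6 I_3 = B \cdot A .
\]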
(d) A 2 × 2 matrix A ≠ 0 such that A^2 = A ⋅ A = 0
This amounts to needing zeros in the right spots; probably the more the better. Values on the diagonal most likely won't work, given what multiplication by the identity does. With that in mind, the matrix below is obviously not equal to zero, and yet its square is the zero matrix.
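One matrix that does the job (a single nonzero entry, placed off the diagonal):
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\qquad
A^2 =
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}
= 0 .
\]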
(e) A 3 × 3 matrix A ≠ 0 such that A^2 = A ⋅ A = 0
Let's do the same thing as before and take A to be the matrix below; squaring it again gives the zero matrix.
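For instance:
\[
A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
A^2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = 0 .
\]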
(f) Two 2 × 2 matrices A, B with A, B ≠ I_2 such that A ⋅ B = I_2
Because we need two non-identity matrices which, when multiplied by each other, result in the identity, we need matrices which are inverses of each other... i.e. they are both invertible. Since we know that elementary matrices are
invertible, and any invertible matrix can be obtained via elementary row operations on the identity matrix (Corollary 3
Theorem 3.6), let’s try to find a simple elementary matrix that will give us what we are looking for. How about just
multiplying the first row of the identity by 2. Then we have
\[
A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}.
\]
Hence for
\[
B = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
we need
\[
A \cdot B = \begin{pmatrix} 2a & 2b \\ c & d \end{pmatrix} = I_2,
\]
which means a = 1/2, b = 0, c = 0, and d = 1. Confirming, we see that
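\[
A \cdot B =
\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
= I_2 .
\]
Neither A nor B is I_2, so this pair answers the question.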
Vector Spaces
Explain if the following are vector spaces with the usual notion of addition and multiplication.
Also, for some of these problems, let C_{a,b} be the closed interval from a to b, that is, [a, b].
1 F^n, as a vector space over F
Yes, this is a vector space, due to the component-wise nature of the operations on elements of F^n.
2 ℂ^n, as a vector space over ℝ
This is a vector space, again due to the component-wise nature of the operations; multiplying a complex number by a real scalar gives a complex number.
3 ℂ^n, as a vector space over ℚ
This is a vector space, again due to the component-wise nature of the operations. One might worry about losing closure because the scalars are only rational, but a rational multiple of a complex number is still a complex number, so closure under scalar multiplication does hold.
4 The set C^0(C_{0,1}) of continuous functions f : C_{0,1} → ℝ, over ℝ
This is a vector space: sums and real scalar multiples of continuous functions are again continuous.
5 The set C^0(C_{0,1}) of continuous functions f : C_{0,1} → ℝ, over ℚ
This is a vector space; the same closure argument applies with rational scalars.
6 The set of non-negative real numbers, as a vector space over
ℝ
This is not a vector space, since any non-zero number in the set has no additive inverse there: the negative reals are not contained in the set.
7 ℂ as a vector space over ℝ
This is a vector space; as a vector space over ℝ it behaves just like ℝ^2, with component-wise operations.
8 The set of polynomials f ∈ F[x] such that f(0) = 0, as a vector space over F
This is a vector space: if f(0) = g(0) = 0, then (f + g)(0) = 0 and (cf)(0) = 0 for every c ∈ F, so the set is closed under both operations.
9 The set of polynomials f ∈ F[x] such that f(0) = 1, as a vector space over F
This is not a vector space. For f, g in the set we have
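\[
(f+g)(0) = f(0) + g(0) = 1 + 1 = 2 \neq 1 .
\]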
Hence addition is not closed.
10 The set of twice differentiable functions f : C_{0,1} → ℝ such that f′′ + f = 0
This is a vector space. It is closed under addition of functions (which I was initially skeptical of) and under scalar multiplication, because differentiation is linear; the check is below.
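Explicitly, for f and g in the set and scalars a, b ∈ ℝ:
\[
(af + bg)'' + (af + bg) = a(f'' + f) + b(g'' + g) = a \cdot 0 + b \cdot 0 = 0 .
\]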
11 The set of twice differentiable functions f : C_{0,1} → ℝ such that f′′ + f = 1
This is not a vector space. It is not closed under addition of two of its elements: let f and g be functions in this set. Then we have the following.
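\[
(f+g)'' + (f+g) = (f'' + f) + (g'' + g) = 1 + 1 = 2 .
\]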
This would have to equal one in order for f + g to be contained in the set, but it does not.
References
[1] Artin, Michael. Algebra. Prentice Hall, Upper Saddle River, NJ. 1991.