August 23, 2012

http://coursework.tylerlogic.com/math55a/homework01
I don’t know exactly why this homework’s problems one through six are mapped to Axler’s four through fourteen in chapter one (I assume for weighting, since those problems seem simpler than the rest), but I will make problems one through eleven of this write-up map to four through fourteen of Axler’s first chapter, and then continue on with homework problem seven being my twelve, eight being my thirteen, and so on. The title of each problem should make the correspondence clear.

1 Axler: Chapter 1, Problem 4

Let a ∈ F and v ∈ V such that av = 0. Assume by way of contradiction that both a ≠ 0 and v ≠ 0. Since a ≠ 0 and F is a field, a has a multiplicative inverse a^{−1} ∈ F. Therefore v = 1v = (a^{−1}a)v = a^{−1}(av) = a^{−1}0 = 0, which contradicts v ≠ 0. Hence a = 0 or v = 0.

Replacing F with ℤ: This property will still hold for the integers.

2 Axler: Chapter 1, Problem 5

Keeping in mind that a subset of a vector space is a subspace if and only if it contains 0 and is closed under vector addition and scalar multiplication, we need only check these three things for each subset.

(a) Is {(x_{1},x_{2},x_{3}) ∈ F^{3} : x_{1} + 2x_{2} + 3x_{3} = 0} a subspace of V ?

Certainly the subset contains zero, since 0 + 2(0) + 3(0) = 0. ✓

Letting (x,y,z) and (x′,y′,z′) be in this subset, we have that both x + 2y + 3z = 0 and x′ + 2y′ + 3z′ = 0. This results in the following:

(x + x′) + 2(y + y′) + 3(z + z′) = (x + 2y + 3z) + (x′ + 2y′ + 3z′) = 0 + 0 = 0

and thus the subset is closed under addition. ✓

Furthermore, letting a be a scalar in F, we can see that

(ax) + 2(ay) + 3(az) = a(x + 2y + 3z) = a(0) = 0

and therefore the subset is also closed under scalar multiplication. ✓

Hence we have that this subset defines a subspace of our vector space V .
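As a quick numeric sanity check of part (a), separate from the proof itself (the sample vectors and helper function below are my own illustration), the three conditions can be tested over ℚ:

```python
from fractions import Fraction as F
from itertools import product

# Membership test for {(x1, x2, x3) : x1 + 2*x2 + 3*x3 = 0}
def in_subset(v):
    return v[0] + 2 * v[1] + 3 * v[2] == 0

# Sample members, parameterized by (x2, x3) with x1 = -2*x2 - 3*x3
samples = [(-2 * a - 3 * b, a, b) for a, b in product(range(-2, 3), repeat=2)]
assert all(in_subset(v) for v in samples)

# Contains zero
assert in_subset((0, 0, 0))

# Closed under addition (checked pairwise on the samples)
for u, v in product(samples, repeat=2):
    assert in_subset(tuple(ui + vi for ui, vi in zip(u, v)))

# Closed under scalar multiplication (rational scalars)
for c in [F(-3), F(1, 2), F(7, 3)]:
    for v in samples:
        assert in_subset(tuple(c * vi for vi in v))
```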

(b) Is {(x_{1},x_{2},x_{3}) ∈ F^{3} : x_{1} + 2x_{2} + 3x_{3} = 4} a subspace of V ?

This is not a subspace since 0 + 2(0) + 3(0) = 0 ≠ 4, and hence the subset does not contain zero.

(c) Is {(x_{1},x_{2},x_{3}) ∈ F^{3} : x_{1}x_{2}x_{3} = 0} a subspace of V ?

This is not a subspace: (1,1,0) and (0,0,1) are in the subset, yet (1,1,0) + (0,0,1) = (1,1,1) is not, since 1(1)(1) ≠ 0. Thus the subset is not closed under addition.

(d) Is {(x_{1},x_{2},x_{3}) ∈ F^{3} : x_{1} = 5x_{3}} a subspace of V ?

Since 0 = 5(0), then zero is contained within this subset. ✓

Letting (x,y,z) and (x′,y′,z′) be in the subset, we know that both x = 5z and x′ = 5z′, which means that the following equation holds:

x + x′ = 5z + 5z′ = 5(z + z′)

Thus the subset is closed under addition. ✓

And finally, letting a ∈ F, we can see from the following equation that the subset is also closed under scalar multiplication:

ax = a(5z) = 5(az) ✓

3 Axler: Chapter 1, Problem 6

The problem states that our subset needs to be nonempty, closed under addition, and closed under taking additive inverses, which tells us that the subset of our choosing must either not include zero or must fail to be closed under scalar multiplication. However, a little thought about what it means to be closed under addition and under taking additive inverses tells us that zero must in fact be in the set (the sum of an element and its inverse must be in the set, and what is that sum if not zero?). So we are left with finding a set where scalar multiplication is not closed. When multiplying an element of a set by a scalar from a field should “break” out of the set, a discrete set should come to mind. To that end, for our example we choose a set dealing with integers; in fact, we will choose ℤ^{2} as a subset of ℝ^{2}. It is nonempty and closed under addition and additive inverses, yet (1/2)(1,0) = (1/2,0) ∉ ℤ^{2}, so it is not a subspace of ℝ^{2}.
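Taking the example to be the integer lattice ℤ^{2} ⊂ ℝ^{2} (my reading of the discrete set chosen above), a short check:

```python
from fractions import Fraction

# Candidate set: Z^2 as a subset of R^2
def in_Z2(v):
    return all(isinstance(c, int)
               or (isinstance(c, Fraction) and c.denominator == 1)
               for c in v)

u, v = (3, -1), (2, 5)

# Closed under addition and additive inverses
assert in_Z2(tuple(a + b for a, b in zip(u, v)))
assert in_Z2(tuple(-a for a in u))

# Not closed under scalar multiplication: (1/2)*(3, -1) leaves Z^2
half_u = tuple(Fraction(1, 2) * a for a in u)
assert not in_Z2(half_u)
```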

4 Axler: Chapter 1, Problem 7

Since we only need the subset to be closed under scalar multiplication, the failure must come from addition: closure under scalar multiplication already gives closure under taking inverses (multiply by −1 ∈ ℝ), and multiplication of an element by 0 ∈ ℝ will result in the zero vector, so our subset will always by default contain that element. We know from our middle/high school mathematics experience (or at least I do) that the multiplication of two binomials needs to be “FOILed,” yielding those pesky middle terms, so we should be able to use this to our advantage. Like the subset in problem 2c above, we use the analogous set {(x,y) ∈ ℝ^{2} : xy = 0}. It is closed under scalar multiplication, since (ax)(ay) = a^{2}(xy) = 0, but not under addition: (1,0) and (0,1) both lie in the set, yet their sum (1,1) does not, thanks to the cross terms in (x + x′)(y + y′) = xy + xy′ + x′y + x′y′.
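Taking the set to be {(x, y) ∈ ℝ^{2} : xy = 0}, by analogy with problem 2c (the sample points below are my own), the closure properties can be checked directly:

```python
from fractions import Fraction as F

# Candidate set: {(x, y) in R^2 : x*y = 0}, the union of the two axes
def in_set(v):
    return v[0] * v[1] == 0

# Closed under scalar multiplication: (a*x)*(a*y) = a^2 * (x*y) = 0
for a in [F(-2), F(1, 3), F(0)]:
    for v in [(F(4), F(0)), (F(0), F(-7))]:
        assert in_set((a * v[0], a * v[1]))

# Not closed under addition: (1, 0) + (0, 1) = (1, 1), and 1*1 != 0
u, w = (F(1), F(0)), (F(0), F(1))
s = (u[0] + w[0], u[1] + w[1])
assert in_set(u) and in_set(w) and not in_set(s)
```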

5 Axler: Chapter 1, Problem 8

Certainly an intersection of any collection of subspaces of V will be a subset of V, so we only need the usual zero-addition-multiplication conditions to be satisfied. Because each of the sets in the intersection is a subspace of V, they all contain the zero vector, and therefore so does the intersection. Now let x be in the intersection of subspaces, which in turn means that x is in each of the individual subspaces; hence so is ax for any scalar a, which means that the intersection contains ax as well, and thus scalar multiplication is closed on the intersection. Also let y be a vector in the intersection. As with x above, y is also in each of the subspaces making up the intersection, and therefore so is the sum x + y. Thus the sum is also in the intersection, and the intersection is therefore closed under addition.

Replacing F with ℤ: This property will still hold for the integers.

6 Axler: Chapter 1, Problem 9

Let U and W be subspaces of V. We show that U ∪ W is a subspace of V if and only if one of the subspaces is contained in the other.

(→)

Let U ∪ W be a subspace, and assume toward a contradiction that neither U ⊆ W nor W ⊆ U. Then there exist u ∈ U with u ∉ W and w ∈ W with w ∉ U. Since U ∪ W is closed under addition, u + w ∈ U ∪ W. If u + w ∈ U, then w = (u + w) − u ∈ U, a contradiction; if u + w ∈ W, then u = (u + w) − w ∈ W, likewise a contradiction. Hence one of the subspaces must contain the other.

(←)

Letting U ⊆ W (without loss of generality), we have U ∪ W = W, which is a subspace of V by assumption.

Replacing F with ℤ: This property will still hold for the integers.

7 Axler: Chapter 1, Problem 10

The subspace U + U is simply U. The closure of addition on U demands that any element u_{1} + u_{2} that can be constructed by the definition of the sum of subspaces is already contained in U, so U + U ⊆ U; conversely, any u ∈ U can be written as u + 0 with 0 ∈ U, so U ⊆ U + U.

Replacing F with ℤ: This property will still hold for the integers.

8 Axler: Chapter 1, Problem 11

The operation of addition on subspaces is both commutative and associative, due to the fact that the addition operation on vectors has both properties as well. It’s pretty easy to see that if u + w ∈ U + W with u ∈ U and w ∈ W, then u + w = w + u ∈ W + U, and vice versa, so U + W = W + U. Similarly, both (U_{1} + U_{2}) + U_{3} and U_{1} + (U_{2} + U_{3}) consist exactly of the sums u_{1} + u_{2} + u_{3} with u_{i} ∈ U_{i}, by the associativity of vector addition.

Replacing F with ℤ: This property will still hold for the integers.

9 Axler: Chapter 1, Problem 12

As we saw earlier, under the operation of addition on the subspaces of V, every subspace acts as an identity for itself. Similarly, any subspace of U will be an identity for U, but note that U will not be an identity for that subspace, outside of the trivial case where the subspace is U itself. So we can see that an identity in this sense is not actually unique. But this points us towards the more general idea that the addition operation on subspaces always “expands” the subspace addends. This is due to the fact that each addend contains the zero vector, and thus the sum of two (or more) subspaces will always contain at least all the elements of the largest of the addends.

We know that a subspace U has an inverse, U^{−1}, if U + U^{−1} = 0, where 0 is the identity of course. However, as a result of what was previously discussed, there could potentially be many inverses of a subspace, since there are multiple identities for a given subspace. If we were to choose one of the trivial identities, U itself, then any subspace of U would be an inverse of U. If instead we were to choose a proper subspace of U as the identity, then no inverse U^{−1} would exist, since it is impossible to add a subspace to U and have, as a result, a proper subspace of U. This is again due to the closure of the subspace addition operation and the “expansion” mentioned earlier in the problem: subspace addition will expand or, at the very least, produce nothing new.

Replacing F with ℤ: This property will still hold for the integers.

10 Axler: Chapter 1, Problem 13

This seemingly looks so painfully true, but unfortunately it is not. One thing that stumped me and made me change thoughts is that for a given u ∈ U_{1} + W = U_{2} + W we could write

u = u_{1} + w = u_{2} + w′

with u_{1} ∈ U_{1}, u_{2} ∈ U_{2}, and w, w′ ∈ W, but w = w′ did not necessarily need to be true. So I began to think of previous problems (above), and specifically thought about how adding a vector space and one of its subspaces would yield the first vector space, and then constructed the following counterexample from knowing that: take W = ℝ^{2}, let U_{1} = {(x,0) : x ∈ ℝ} be the x-axis, and let U_{2} = {(0,y) : y ∈ ℝ} be the y-axis. Then U_{1} + W = ℝ^{2} = U_{2} + W, and yet U_{1} ≠ U_{2}.
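A counterexample of exactly this shape can be double-checked with a small rank computation over ℚ (the particular spaces are my own choice): with W = ℝ², U₁ the x-axis, and U₂ the y-axis, the sums U₁ + W and U₂ + W coincide while U₁ ≠ U₂.

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a matrix (list of rows) over Q, via Gaussian elimination."""
    m = [[F(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

W  = [[1, 0], [0, 1]]   # W = R^2
U1 = [[1, 0]]           # x-axis
U2 = [[0, 1]]           # y-axis

# A sum of subspaces is spanned by the concatenated spanning sets
# (list concatenation here), so both sums are all of R^2...
assert rank(U1 + W) == 2 == rank(U2 + W)
# ...yet U1 != U2: each is one-dimensional, but together they span R^2.
assert rank(U1) == rank(U2) == 1 and rank(U1 + U2) == 2
```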

Replacing F with ℤ: This property will still hold for the integers.

11 Axler: Chapter 1, Problem 14

Let U be the subspace of P(F) consisting of all polynomials of the following form:

p(z) = az^{2} + bz^{5}, with a, b ∈ F (11.1)

Also let the subspace W of P(F) consist of all polynomials of the following form, i.e. those whose z^{2} and z^{5} coefficients are zero:

p(z) = c_{0} + c_{1}z + c_{3}z^{3} + c_{4}z^{4} + c_{6}z^{6} + ⋯ + c_{m}z^{m} (11.2)

From here it is easy to see that P(F) = U + W, since W basically “fills in” all the holes (being the powers of z) left by U to complete the set of polynomials over F. Furthermore, since it is not possible for a polynomial of the form in 11.1 to equal a polynomial of the form in 11.2 unless both are zero, we have that the intersection of U and W must be {0}.
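The decomposition P(F) = U ⊕ W can be illustrated by splitting coefficient lists (the helper name and sample polynomial are my own; U is taken to be the span of z² and z⁵ as above):

```python
def split(coeffs):
    """Split a polynomial (coefficient list, index = power of z) into its
    U-part (only the z^2 and z^5 terms) and W-part (everything else)."""
    u = [c if i in (2, 5) else 0 for i, c in enumerate(coeffs)]
    w = [c if i not in (2, 5) else 0 for i, c in enumerate(coeffs)]
    return u, w

p = [7, 0, 3, -1, 0, 4, 2]   # 7 + 3z^2 - z^3 + 4z^5 + 2z^6
u, w = split(p)

assert [a + b for a, b in zip(u, w)] == p   # p = u + w, so P(F) = U + W
assert all(c == 0 for i, c in enumerate(u) if i not in (2, 5))  # u lies in U
assert w[2] == 0 and w[5] == 0                                  # w lies in W
# A polynomial in both U and W has every coefficient zero,
# so the sum is direct: U ∩ W = {0}.
```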

Sure, this proof is slightly hand-wavy, but that is due in part to its innate simplicity, and also because I don’t know of a theorem that states that “two polynomials with disjoint sets of powers of the input parameter can never be equal unless all coefficients are zero” (the underlying fact is just that the monomials 1, z, z^{2}, … are linearly independent). If I knew of such a theorem, that is what I would have used.

12 Prove That S_{A} And S_{A}^{0} Are Rings. Find The Bijection

In this proof, we let a, b, and c all be elements of S_{A}.

Prove S_{A} is a ring.
Both “Abelianism” and associativity of addition on this set are proven using the same old trick: combine the necessary elements using the definition of the sum on S_{A}, then use the definition of the sum on A, together with the fact that A is a ring, to shift around the terms of each element of S_{A} appropriately, and then separate them again by applying the definition of the sum on S_{A} in reverse. These slightly annoying but necessary manipulations follow, first for commutativity, then associativity:

(a + b)_{n} = a_{n} + b_{n} = b_{n} + a_{n} = (b + a)_{n}

((a + b) + c)_{n} = (a_{n} + b_{n}) + c_{n} = a_{n} + (b_{n} + c_{n}) = (a + (b + c))_{n}

Prove that S_{A}^{0} is a ring.
Here, like subspaces, we just need to check that S_{A}^{0} contains both the sum and product identities and is closed under both the sum and product operations. Since the sum identity is all zeros, and the multiplicative identity is a one followed by all zeros, they are both contained in S_{A}^{0}. The set is closed under addition since the addition of a and b is pairwise, and the result, say r, will be such that r_{n} = 0 for all n > max(n_{a},n_{b}), where n_{a} and n_{b} are such that a_{i} = 0 and b_{j} = 0 for all i > n_{a} and j > n_{b}. The set is likewise closed under multiplication: for n > n_{a} + n_{b}, every term a_{i}b_{n−i} of the convolution has either i > n_{a} or n − i > n_{b}, so (ab)_{n} = 0.
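These closure facts can be exercised with a toy model over A = ℤ, representing elements of S_{A}^{0} as finite coefficient lists with trailing zeros implicit (the helper names and sample sequences are my own):

```python
def seq_add(a, b):
    """Pointwise sum of finitely supported sequences."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def seq_mul(a, b):
    """Convolution product: (ab)_n = sum_i a_i * b_{n-i}."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a = [1, 2, 0, 5]   # supported on indices <= 3  (n_a = 3)
b = [0, 3, 4]      # supported on indices <= 2  (n_b = 2)

# Sum and product identities are themselves finitely supported.
assert seq_add(a, [0, 0, 0, 0]) == a
assert seq_mul(a, [1]) == a

# Closure bounds: the sum vanishes past max(n_a, n_b) = 3,
# and the product past n_a + n_b = 5.
assert len(seq_add(a, b)) == 4
assert len(seq_mul(a, b)) == 6
```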

Find an isomorphism from A to S_{A}^{0}.
Here we need to find an isomorphism. I like to believe I have a knack for finding them, and it seems like that is really the only method. However, I can say that there are two things that seem to pop up time after time for me whenever I am tasked with finding isomorphisms or, more generally, bijections. First, keep it simple: for some reason a lot of the isomorphisms I have seen are not that complicated, and they always seem to “make sense.” Second, there for some reason seem to be nice symmetries involved in a lot of isomorphisms/bijections, especially ones that can be shown pictorially.

Anyway, to get back to business, the mapping here is the following for a ∈ A:

φ(a) = (a, 0, 0, 0, …)

Instantly we can see that this mapping takes both the sum and product identities to the sum and product identities in S_{A}^{0}. Likewise, the fact that additive inverses are taken to additive inverses is just as simple to see. Also, by the following set of equations, we have that this mapping preserves the sum and product structure of A in S_{A}^{0}:

φ(a + b) = (a + b, 0, 0, …) = (a, 0, 0, …) + (b, 0, 0, …) = φ(a) + φ(b)

φ(ab) = (ab, 0, 0, …) = (a, 0, 0, …)(b, 0, 0, …) = φ(a)φ(b)

where the product equality holds because every term of the convolution defining the right-hand side other than a_{0}b_{0} = ab contains a zero factor. Since φ(a) = φ(b) forces a = b in the zeroth entry, φ is injective, and A is isomorphic to its image in S_{A}^{0}.
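The structure-preservation equations can be spot-checked numerically over A = ℤ (φ and the helpers are my own notation; sequences are truncated to four entries, which suffices since the embedded elements are supported at index 0 only):

```python
def phi(a, length=4):
    """Embed a in S_A^0 as (a, 0, 0, ...)."""
    return [a] + [0] * (length - 1)

def seq_add(x, y):
    return [p + q for p, q in zip(x, y)]

def seq_mul(x, y):
    # Convolution, truncated to the common length of x and y.
    out = [0] * len(x)
    for i in range(len(x)):
        for j in range(len(x) - i):
            out[i + j] += x[i] * y[j]
    return out

for a in range(-3, 4):
    for b in range(-3, 4):
        assert phi(a + b) == seq_add(phi(a), phi(b))   # preserves sums
        assert phi(a * b) == seq_mul(phi(a), phi(b))   # preserves products

assert phi(0) == [0, 0, 0, 0] and phi(1) == [1, 0, 0, 0]  # identities map to identities
```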

13 The Commutativity and “(Integral Domain)-ness” of S_{A} and S_{A}^{0}

Let the initial assumptions from the previous problems be the same for this problem.

(a) Prove S_{A} and S_{A}^{0} are commutative iff A is too.

Let S_{A} be a commutative ring, and let x, y ∈ A. Then, using the embedding a ↦ (a, 0, 0, …) from the previous problem, (xy, 0, 0, …) = (x, 0, 0, …)(y, 0, 0, …) = (y, 0, 0, …)(x, 0, 0, …) = (yx, 0, 0, …), and comparing zeroth entries gives xy = yx. Thus A is commutative.

Conversely, assume that A is a commutative ring. Then the following set of equations holds for every a, b ∈ S_{A} and every n:

(ab)_{n} = ∑_{i=0}^{n} a_{i}b_{n−i} = ∑_{i=0}^{n} b_{n−i}a_{i} = ∑_{j=0}^{n} b_{j}a_{n−j} = (ba)_{n}

and thus ab = ba. The same computation applies verbatim to S_{A}^{0}.

(b) Prove S_{A} and S_{A}^{0} are integral domains iff A is too.

Assume that S_{A} is an integral domain, and let x, y ∈ A both be nonzero. Then (x, 0, 0, …) and (y, 0, 0, …) are nonzero elements of S_{A}, so their product (xy, 0, 0, …) is nonzero, and hence xy is also nonzero. Thus A is an integral domain.

Conversely, assume that A is an integral domain. Assume by way of contradiction that S_{A} is not an integral domain. Therefore there exist a, b ∈ S_{A} with a ≠ 0 and b ≠ 0 such that ab = 0. Let index j be such that a_{j} ≠ 0 and a_{m} = 0 for all m < j. Allow the same for b and k as is for a and j, respectively. Without loss of generality, let j be less than or equal to k. Therefore, since the nth term of ab is

(ab)_{n} = ∑_{i=0}^{n} a_{i}b_{n−i} (13.11)

we have that a_{j}b_{k} will be a term in the summation in (ab)_{k+j}. Because a_{0} = a_{1} = ⋯ = a_{j−1} = 0 and b_{0} = b_{1} = ⋯ = b_{k−1} = 0, then, given the indices of convolution (equation 13.11), a_{j}b_{k} is the only term in the summation defining (ab)_{k+j} where neither of the operands is zero: every other term a_{i}b_{k+j−i} has either i < j or k + j − i < k. Thus we have that (ab)_{k+j} = a_{j}b_{k}, and since ab = 0, a_{j} and b_{k} are zero divisors of A, as neither is zero. But this contradicts the fact that A is an integral domain and thus has no zero divisors. Hence S_{A} must be an integral domain. Note that this proof applies without change to S_{A}^{0} as well.
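The collapse of (ab)_{k+j} to the single term a_{j}b_{k} can be watched numerically over A = ℤ (the particular sequences are my own illustration):

```python
def conv(a, b):
    """Convolution product of finitely supported sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# First nonzero index of a is j = 2; first nonzero index of b is k = 3.
a = [0, 0, 5, 7, 1]
b = [0, 0, 0, -2, 9]
j, k = 2, 3

ab = conv(a, b)

# Every term a_i * b_{j+k-i} with i != j has a zero factor,
# so (ab)_{j+k} collapses to a_j * b_k...
assert ab[j + k] == a[j] * b[k]
# ...and all earlier coefficients of ab vanish outright.
assert all(c == 0 for c in ab[: j + k])
```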

14 Prove that if A is a field, then neither S_{A} nor S_{A}^{0} is. Give a simple
description of the invertible elements of each.

This one seems like it shouldn’t be that hard. We have already shown that for any property of a field save for multiplicative inverses, if A has such a property, then S_{A} and S_{A}^{0} do as well. The failure is in the inverses: consider the element z = (0, 1, 0, 0, …). For any b, the zeroth term of the product is (zb)_{0} = z_{0}b_{0} = 0 ≠ 1, so z has no multiplicative inverse, and hence neither S_{A} nor S_{A}^{0} is a field. As for the invertible elements: in S_{A}^{0} they are exactly the nonzero “constants” (a, 0, 0, …) with a ≠ 0, while in S_{A} they are exactly the sequences with a_{0} ≠ 0, since the coefficients of an inverse b can then be solved for recursively from (ab)_{0} = a_{0}b_{0} = 1 and (ab)_{n} = 0 for n > 0.
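A sketch of the recursive inversion of a sequence with a_{0} ≠ 0, over A = ℚ (the function name, sample series, and truncation length are my own choices):

```python
from fractions import Fraction as F

def series_inverse(a, terms):
    """First `terms` coefficients of the inverse of a sequence with a_0 != 0,
    solved recursively from (a*b)_n = sum_i a_i * b_{n-i} = (1 if n == 0 else 0)."""
    b = [F(1) / a[0]]
    for n in range(1, terms):
        s = sum(a[i] * b[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

a = [F(2), F(1), F(3)]        # 2 + z + 3z^2 + 0 + ...  (a_0 = 2 != 0)
b = series_inverse(a, 6)

# The truncated convolution product is 1 + 0z + 0z^2 + ...
prod = [sum(a[i] * b[n - i] for i in range(min(n, len(a) - 1) + 1))
        for n in range(6)]
assert prod == [F(1), 0, 0, 0, 0, 0]
```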

15 “Two-sided” sequence consequence.

16 An electoral college computation example.