November 14, 2014

http://coursework.tylerlogic.com/courses/upenn/math508/homework09
Let c ∈ ℝ be positive, let f be an even function, and let g be an odd function.

We can separate

∫_{-c}^{c} f(x) dx = ∫_{-c}^{0} f(x) dx + ∫_{0}^{c} f(x) dx.

Through a change of variable, setting x = h(u) where h(u) = -u, we obtain

∫_{-c}^{0} f(x) dx = ∫_{0}^{c} f(-u) du = ∫_{0}^{c} f(u) du,

so that ∫_{-c}^{c} f(x) dx = 2∫_{0}^{c} f(x) dx.

We can separate

∫_{-c}^{c} g(x) dx = ∫_{-c}^{0} g(x) dx + ∫_{0}^{c} g(x) dx.

Through a change of variable, setting x = h(u) where h(u) = -u, we obtain

∫_{-c}^{0} g(x) dx = ∫_{0}^{c} g(-u) du = -∫_{0}^{c} g(u) du,

so that ∫_{-c}^{c} g(x) dx = 0.

Since f(-x)g(-x) = f(x)(-g(x)) = -f(x)g(x), the product f(x)g(x) is an odd function. By the previous part of this problem,

∫_{-c}^{c} f(x)g(x) dx = 0.
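These identities are easy to sanity-check numerically. The sketch below (not part of the original argument) uses the hypothetical choices f(x) = cos x (even), g(x) = x³ (odd), and c = 2 with a composite trapezoid rule:

```python
# Numerical sanity check of the even/odd integral identities.
# f(x) = cos(x) is even; g(x) = x**3 is odd; c = 2 (arbitrary choices).
import math

def trapezoid(fn, a, b, n=100000):
    """Composite trapezoid rule for the integral of fn over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n))
    return s * h

c = 2.0
f = math.cos          # even
g = lambda x: x**3    # odd

full_even = trapezoid(f, -c, c)               # integral of f over [-c, c]
half_even = trapezoid(f, 0.0, c)              # integral of f over [0, c]
odd_prod = trapezoid(lambda x: f(x) * g(x), -c, c)  # integral of f*g over [-c, c]

print(abs(full_even - 2 * half_even))  # ~0: even f doubles over half range
print(abs(odd_prod))                   # ~0: even times odd integrates to 0
```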

Let’s guess that

f(t) = e^t. (2.1)

Defining g(s) = √s, we have g(0) = 0 and g(x²) = x, so that by putting t = g(s) in the integral and applying the change-of-variable formula, we see that our definition of f works with c = 1.

Let f(x) be the given integrand, and define a uniform partition P of [0,2] by P = {0 = x_0 < x_1 < ⋯ < x_N = 2}, where x_i = 2i∕N. Comparing the upper and lower Riemann sums of f over P leaves us with an error in estimation of at most 4∕N. Hence, if we’d like the error in our computation of ∫_{0}^{2} f(x) dx to be less than 1∕100, then we need 4∕N < 1∕100, i.e.
N > 400.
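The integrand itself is garbled in this copy; as an illustration with the hypothetical monotone integrand f(x) = √(1 + x³), for which f(2) - f(0) = 2 and hence the gap between the upper and lower sums is exactly (2∕N)(f(2) - f(0)) = 4∕N, the bound can be checked directly:

```python
# Upper/lower Riemann sums on a uniform partition of [0, 2].
# f(x) = sqrt(1 + x**3) is a hypothetical monotone integrand with
# f(2) - f(0) = 2, so (upper - lower) = (2/N)*(f(2) - f(0)) = 4/N.
import math

def riemann_bounds(f, a, b, n):
    """Lower and upper Riemann sums of an increasing f on [a, b]."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    lower = sum(f(x) * h for x in xs[:-1])   # left endpoints
    upper = sum(f(x) * h for x in xs[1:])    # right endpoints
    return lower, upper

f = lambda x: math.sqrt(1 + x**3)
n = 401                                      # N > 400 guarantees error < 1/100
lo, up = riemann_bounds(f, 0.0, 2.0, n)
gap = up - lo
print(gap, 4 / n)   # the gap equals 4/N for this monotone integrand
print(gap < 1 / 100)
```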

Let f(s) be a smooth function and c ≠ 0 be a constant. Define

C(x) = ∫_{0}^{x} f(s) cos(cs) ds  and  S(x) = ∫_{0}^{x} f(s) sin(cs) ds.

Applying the fundamental theorem of calculus then yields the following four equations

C(0) = 0,  S(0) = 0,  C′(x) = f(x) cos(cx),  and  S′(x) = f(x) sin(cx),

and so, using the identity sin(c(x - s)) = sin(cx) cos(cs) - cos(cx) sin(cs), we see that

(1∕c)∫_{0}^{x} f(s) sin(c(x - s)) ds = (1∕c)[sin(cx) C(x) - cos(cx) S(x)].

If we set

u(x) = (1∕c)∫_{0}^{x} f(s) sin(c(x - s)) ds, (5.2)

then

u′(x) = cos(cx) C(x) + sin(cx) S(x)

and

u′′(x) = -c sin(cx) C(x) + c cos(cx) S(x) + f(x) = -c² u(x) + f(x),

so that u′′ + c²u = f with u(0) = u′(0) = 0.
Let 0 < c < 1, and let u(x) and w(x) be solutions to the differential equation w′′ + c²w = f with w(0) = w(π) = 0. Setting v = w - u, we have

v′′ + c²v = 0. (5.3)

Thus v must have the form

v(x) = a sin(cx) + b cos(cx)

for some a and b, as it is a solution of the homogeneous equation 5.3. Hence v(0) = a sin(0) + b cos(0) = b = w(0) - u(0) = 0, so that b is 0. Furthermore v(π) = a sin(cπ) = w(π) - u(π) = 0, so that a sin(cπ) = 0. Since 0 < c < 1, sin(cπ) ≠ 0, and thus a = 0. Thus v(x) = 0, which implies w and u are the same. Hence there is only one solution when 0 < c < 1.

Let c = 1, in which case we have u′′ + u = f. Thus with two applications of integration by parts (using u(0) = u(π) = 0) we get

∫_{0}^{π} u′′(s) sin s ds = -∫_{0}^{π} u(s) sin s ds,

so that ∫_{0}^{π} f(s) sin s ds = ∫_{0}^{π} (u′′(s) + u(s)) sin s ds = 0.

Applying our result from part (a) to the situation when c = 1, the previous part implies equation 5.2 becomes

u(x) = ∫_{0}^{x} f(s) sin(x - s) ds.
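Assuming equation 5.2 is the usual variation-of-parameters formula u(x) = (1∕c)∫_0^x f(s) sin(c(x - s)) ds, the claim that it solves u′′ + c²u = f with u(0) = u′(0) = 0 can be checked numerically; the test function f below is an arbitrary choice:

```python
# Numerical check of the variation-of-parameters formula
# u(x) = (1/c) * integral_0^x f(s) sin(c(x - s)) ds  (assumed form of (5.2)):
# it should satisfy u'' + c^2 u = f with u(0) = u'(0) = 0.
import math

c = 2.0
f = lambda s: math.exp(-s) * math.cos(3 * s)   # arbitrary smooth test f

def u(x, n=4000):
    """Midpoint rule for (1/c) * integral_0^x f(s) sin(c(x - s)) ds."""
    if x == 0:
        return 0.0
    h = x / n
    return sum(f((i + 0.5) * h) * math.sin(c * (x - (i + 0.5) * h))
               for i in range(n)) * h / c

x, d = 1.0, 1e-3
u2 = (u(x + d) - 2 * u(x) + u(x - d)) / d**2   # second difference for u''(x)
residual = u2 + c * c * u(x) - f(x)            # should be ~0 if u'' + c^2 u = f
print(abs(residual))
```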

Let L be the differential operator defined by Lw = -w′′ + c(x)w on the interval J = [a,b], where c(x) is some continuous function. Define the inner product by

⟨u,v⟩ = ∫_{a}^{b} u(x)v(x) dx.

We first lay out a helpful equation, obtained via two applications of integration by parts, valid for u and v vanishing at the endpoints of J:

⟨Lu,v⟩ = ∫_{a}^{b} (-u′′ + c(x)u)v dx = ∫_{a}^{b} u(-v′′ + c(x)v) dx = ⟨u,Lv⟩.

In considering (λ - μ)⟨u,v⟩ for eigenfunctions u and v with Lu = λu, Lv = μv, and λ ≠ μ, the above equation gives

(λ - μ)⟨u,v⟩ = ⟨Lu,v⟩ - ⟨u,Lv⟩ = 0,

so that ⟨u,v⟩ = 0, i.e. eigenfunctions with distinct eigenvalues are orthogonal.

If Lu = 0 and Lv = f, then we have the following

⟨f,u⟩ = ⟨Lv,u⟩ = ⟨v,Lu⟩ = ⟨v,0⟩ = 0,

so that f must be orthogonal to every solution u of the homogeneous equation Lu = 0.
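The symmetry ⟨Lu,v⟩ = ⟨u,Lv⟩ underlying the argument can be checked with a finite-difference sketch; the particular c(x), u, and v below are arbitrary choices vanishing at the endpoints:

```python
# Finite-difference check that L w = -w'' + c(x) w is symmetric:
# <Lu, v> = <u, Lv> for u, v vanishing at the endpoints of [a, b].
import math

a, b, n = 0.0, 1.0, 2000
hgrid = (b - a) / n
xs = [a + i * hgrid for i in range(n + 1)]

cfun = lambda x: 1 + x * x                 # an arbitrary continuous c(x)
u = [math.sin(math.pi * x) for x in xs]    # u(a) = u(b) = 0
v = [x * (1 - x) for x in xs]              # v(a) = v(b) = 0

def apply_L(w):
    """Finite-difference L w = -w'' + c(x) w at the interior grid points."""
    return [-(w[i + 1] - 2 * w[i] + w[i - 1]) / hgrid**2 + cfun(xs[i]) * w[i]
            for i in range(1, n)]

def inner(p, q):
    """Riemann-sum approximation of the inner product over interior points."""
    return sum(pi * qi for pi, qi in zip(p, q)) * hgrid

Lu, Lv = apply_L(u), apply_L(v)
lhs = inner(Lu, v[1:-1])
rhs = inner(u[1:-1], Lv)
print(abs(lhs - rhs))   # ~0: the operator is symmetric under Dirichlet conditions
```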

This is virtually problem 5 of the previous homework.

The value ∥f∥ = ∫_{0}^{1}|f(x)|dx satisfies the following three properties:

- ∫_{0}^{1}|f(x)|dx ≥ 0, with equality only when f = 0.
- ∫_{0}^{1}|cf(x)|dx = |c|∫_{0}^{1}|f(x)|dx = |c|∥f∥.
- ∫_{0}^{1}|f(x) + g(x)|dx ≤ ∫_{0}^{1}(|f(x)| + |g(x)|)dx = ∥f∥ + ∥g∥,

which make it a norm on C([0,1]).

Since the graph of |sin(λx)| for an arbitrary λ is just a sequence of concave “humps”, we can find the area under a single “hump” and then multiply that by the number of these “humps” that lie between 0 and 1. One such “hump” is the left-most one in [0,1]. Its area is the area under |sin(λx)| on the interval [0,π∕λ]. Furthermore there are approximately λ∕π of these “humps” over the interval [0,1]. Hence we have

∫_{0}^{1}|sin(λx)|dx ≈ (λ∕π)∫_{0}^{π∕λ}|sin(λx)|dx.

However, on the interval [0,π∕λ], sin(λx) is positive and so from the above equation we get

∫_{0}^{1}|sin(λx)|dx ≈ (λ∕π)∫_{0}^{π∕λ} sin(λx) dx.

Now if we set x = g(u) where g(u) = u∕λ, by a change of variable, we get

∫_{0}^{π∕λ} sin(λx) dx = ∫_{0}^{π} sin(u)(1∕λ) du = 2∕λ,

since g′(u) = 1∕λ, g(π) = π∕λ, and g(0) = 0. Hence we are left with

∫_{0}^{1}|sin(λx)|dx ≈ (λ∕π)(2∕λ) = 2∕π,

with the approximation becoming exact as λ →∞.
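A quick numerical check of the limiting value 2∕π ≈ 0.6366, using a midpoint rule:

```python
# Numerical check that the integral of |sin(lam*x)| over [0, 1]
# approaches 2/pi as lam grows.
import math

def integral_abs_sin(lam, n=200000):
    """Midpoint rule for the integral of |sin(lam*x)| over [0, 1]."""
    h = 1.0 / n
    return sum(abs(math.sin(lam * (i + 0.5) * h)) for i in range(n)) * h

target = 2 / math.pi
for lam in (10.0, 100.0, 1000.0):
    val = integral_abs_sin(lam)
    print(lam, val, abs(val - target))   # the gap shrinks as lam grows
```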

Define g(x) = f(x) - c. Then lim_{x→∞} g(x) = 0 and (1∕x)∫_{0}^{x} f(t) dt = c + (1∕x)∫_{0}^{x} g(t) dt, so it suffices to prove the claim when c = 0.

Without loss of generality we may assume that c = 0. Then, since f is continuous and lim_{x→∞} f(x) = 0, f is bounded.
Thus there is an M such that |f(x)| < M for all x ≥ 0. Let ε > 0 be given. Also, because f(x) → 0 as x →∞, there exists a t > 0
such that for x > t we have |f(x) - 0| < ε∕2. Hence, for x > t, we have the following sequence of inequalities

|(1∕x)∫_{0}^{x} f(s) ds| ≤ (1∕x)∫_{0}^{t}|f(s)| ds + (1∕x)∫_{t}^{x}|f(s)| ds ≤ Mt∕x + (ε∕2)(x - t)∕x < Mt∕x + ε∕2,

which implies

|(1∕x)∫_{0}^{x} f(s) ds| < ε for all x > max(t, 2Mt∕ε),

as desired.
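The statement (if f is continuous and f(x) → c as x →∞, then the averages (1∕x)∫_0^x f(t) dt → c) can be illustrated numerically; the test function below, with c = 5, is an arbitrary choice:

```python
# Numerical illustration: if f(x) -> c as x -> infinity, then the
# running average (1/x) * integral_0^x f(t) dt also tends to c.
# Example: f(t) = 5 + exp(-t) + sin(t)/(1 + t), which tends to c = 5.
import math

def average(f, x, n=100000):
    """Midpoint rule for (1/x) * integral_0^x f(t) dt."""
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h / x

f = lambda t: 5 + math.exp(-t) + math.sin(t) / (1 + t)
for x in (10.0, 100.0, 1000.0):
    print(x, average(f, x))   # approaches 5 as x grows
```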

Let f : [0,1] → ℝ be a continuous function such that

∫_{0}^{1} f(x)g(x) dx = 0

for all continuous functions g(x). In particular, if g(x) = f(x) we have

∫_{0}^{1} f(x)² dx = 0.

Since f(x)² ≥ 0 and f is continuous, we must then have f(x)² = 0, and so therefore f(x) = 0 for all x ∈ [0,1].

This is not true. By problem 6(b) of homework 6, the function h constructed there is C¹, is zero for x ≤ 0, is positive for x > 0, and
approaches L as x approaches ∞. Hence the function g_{a,b}(x) defined by h(x-a)h(b-x) is zero everywhere
except on (a,b), where it is positive, for 0 < a < b. Furthermore g_{a,b}(x) ∈ C¹.

So, assume toward a contradiction that f(x) ≠ 0 is such that ∫_{0}^{1} f(x)g(x) dx = 0 for all g(x) in C¹. Then, replacing f with
-f if necessary, there must be a point x_{0} where f is positive, and by continuity there exist a,b ∈ [0,1] with a < b such that
x_{0} ∈ (a,b) and f is positive on all of (a,b). However, since this is the case, the product fg_{a,b}, where g_{a,b} is as defined
above, will be positive on (a,b) and zero everywhere else, i.e.

∫_{0}^{1} f(x)g_{a,b}(x) dx > 0,

a contradiction.
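A concrete version of this construction, using the standard smooth function h(x) = e^{-1∕x} for x > 0 (an assumption, since the homework-6 function is not reproduced in this copy):

```python
# Sketch of the bump construction. The choice of h below is one standard
# smooth function that is zero for x <= 0 and positive for x > 0
# (an assumption; the homework-6 function is not reproduced here).
import math

def h(x):
    """Smooth, zero for x <= 0, positive and increasing for x > 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g_ab(x, a, b):
    """Bump supported on (a, b): zero outside, positive inside."""
    return h(x - a) * h(b - x)

# Any f positive on (a, b) makes the integral strictly positive.
a, b = 0.3, 0.7
f = lambda x: 1.0
n = 10000
step = 1.0 / n
integral = sum(f((i + 0.5) * step) * g_ab((i + 0.5) * step, a, b)
               for i in range(n)) * step
print(integral > 0)                                        # True
print(g_ab(0.1, a, b) == 0.0 and g_ab(0.9, a, b) == 0.0)   # True: zero outside (a,b)
```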