Some special functions


Chapter 8: Some special functions

Contents
 1.  Power series
 2.  The exponential and logarithmic functions
 3.  The trigonometric functions
 4.  The algebraic completeness of the complex field
 5.  Fourier series
 6.  The Gamma function

1. Power series

In this section we shall derive some properties of functions which are represented by power series, i.e., functions of the form f(x)=\sum _ {n=0}^{\infty}c _ nx^n, or more generally, f(x)=\sum _ {n=0}^{\infty}c _ n(x-a)^n.

These are called analytic functions.

We shall restrict ourselves to real values of x. Instead of circles of convergence we shall therefore encounter intervals of convergence.

If f(x)=\sum _ {n=0}^{\infty}c _ nx^n converges for all x in (-R, R), for some R > 0 (R may be +\infty), we say that f is expanded in a power series about the point x = 0. Similarly, if f(x)=\sum _ {n=0}^{\infty}c _ n(x-a)^n converges for |x-a|< R, f is said to be expanded in a power series about the point x=a. As a matter of convenience, we shall often take a=0 without any loss of generality.

Theorem 1. Suppose the series \sum _ {n=0}^{\infty}c _ nx^n converges for |x|<R, and define f(x)=\sum _ {n=0}^{\infty}c _ nx^n\, (|x|<R). Then the series converges uniformly on [-R+\varepsilon,R-\varepsilon], no matter which \varepsilon>0 is chosen. The function f is continuous and differentiable in (-R,R), and f'(x)=\sum _ {n=1}^{\infty}nc _ nx^{n-1}\, (|x|<R).
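As a numerical sketch of Theorem 1 (my example, not the text's), take the geometric series f(x)=\sum x^n=1/(1-x), with R=1; the differentiated series \sum nx^{n-1} should sum to 1/(1-x)^2 inside (-1,1):

```python
# Numerical check of term-by-term differentiation (Theorem 1) on the
# geometric series f(x) = sum x^n = 1/(1-x), valid for |x| < 1.

def f_partial(x, terms=2000):
    # partial sum of sum_{n>=0} x^n
    return sum(x**n for n in range(terms))

def fprime_partial(x, terms=2000):
    # partial sum of the differentiated series sum_{n>=1} n x^(n-1)
    return sum(n * x**(n - 1) for n in range(1, terms))

x = 0.5
assert abs(f_partial(x) - 1 / (1 - x)) < 1e-9
assert abs(fprime_partial(x) - 1 / (1 - x)**2) < 1e-9
```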

Corollary 2. Under the hypotheses of Theorem 1, f has derivatives of all orders in (-R, R), which are given by
\[f^{(k)}(x)=\sum _ {n=k}^{\infty}n(n-1)\dots(n-k+1)c _ nx^{n-k}.\]

In particular,
\begin{equation}\label{81}
f^{(k)}(0)=k!c _ k\quad (k=0,1,2,\dots).
\end{equation}

Formula (81) is very interesting. It shows, on the one hand, that the coefficients of the power series development of f are determined by the values of f and of its derivatives at a single point. On the other hand, if the coefficients are given, the values of the derivatives of f at the center of the interval of convergence can be read off immediately from the power series.

Note, however, that although a function f may have derivatives of all orders, the series \sum c _ nx^n, where c _ n is computed by (81), need not converge to f(x) for any x \neq 0. (A classic example is f(x)=e^{-1/x^2} for x\neq0, f(0)=0, all of whose derivatives vanish at 0.) In this case, f cannot be expanded in a power series about x = 0. For if we had f(x) = \sum a _ nx^n, we should have n!a _ n=f^{(n)}(0); hence a _ n=c _ n.

If the series \sum c _ nx^n converges at an endpoint, say at x = R, then f is continuous not only in (-R, R), but also at x = R. This follows from Abel's theorem (for simplicity of notation, we take R=1):

Theorem 3. Suppose \sum c _ n converges. Put f(x)=\sum c _ nx^n\, (-1<x<1). Then \lim _ {x\to1}f(x)=\sum c _ n.
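A numerical illustration of Abel's theorem (an example I am adding): for c _ n=(-1)^{n+1}/n\, (n\geqslant1) we have \sum c _ n=\ln 2, while f(x)=\sum c _ nx^n=\ln(1+x) for |x|<1, so f(x) should approach \ln 2 as x\to1^-:

```python
import math

def f(x, terms=100000):
    # partial sum of sum_{n>=1} (-1)^(n+1) x^n / n = ln(1+x) for |x| < 1
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms))

s = math.log(2)                       # the sum of the series at x = 1
for x in (0.9, 0.99, 0.999):
    assert abs(f(x) - math.log(1 + x)) < 1e-6
# f(x) approaches sum c_n = ln 2 as x -> 1-
assert abs(f(0.999) - s) < 1e-3
```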

We now require a theorem concerning an inversion in the order of summation.

Theorem 4. Given a double sequence \{a _ {ij}\}, suppose that \sum _ {j=1}^{\infty}|a _ {ij}|=b _ i and \sum b _ i converges. Then
\[\sum _ {i=1}^{\infty}\sum _ {j=1}^{\infty}a _ {ij}=\sum _ {j=1}^{\infty}\sum _ {i=1}^{\infty}a _ {ij}.\]
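As a quick numeric sanity check (my example), take a _ {ij}=2^{-i}3^{-j}, which satisfies the absolute-summability hypothesis; both iterated sums should equal \big(\sum _ i2^{-i}\big)\big(\sum _ j3^{-j}\big)=1\cdot\tfrac12:

```python
N = 60  # truncation; the terms decay geometrically, so this is plenty

a = lambda i, j: 1.0 / (2**i * 3**j)   # a_ij >= 0, absolutely summable

row_first = sum(sum(a(i, j) for j in range(1, N)) for i in range(1, N))
col_first = sum(sum(a(i, j) for i in range(1, N)) for j in range(1, N))

assert abs(row_first - col_first) < 1e-12
assert abs(row_first - 0.5) < 1e-12   # = (sum 2^-i)(sum 3^-j) = 1 * 1/2
```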

Theorem 5. Suppose f(x)=\sum c _ nx^n, the series converging in |x|<R. If -R<a<R, then f can be expanded in a power series about the point x = a which converges in |x-a|<R-|a|, and
\begin{equation}\label{82}
f(x)=\sum _ {n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n\quad (|x-a|<R-|a|).
\end{equation}

This is also known as \textbf{Taylor's theorem}.

It should be noted that (82) may actually converge in a larger interval than the one given by |x-a|<R-|a|.
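Both the re-expansion and the remark about a larger interval can be seen numerically on f(x)=1/(1-x)=\sum x^n (R=1), whose re-expansion coefficients about a are f^{(n)}(a)/n!=1/(1-a)^{n+1} (my illustrative example):

```python
# Re-expansion (Theorem 5) of f(x) = 1/(1-x) = sum x^n (R = 1) about a.
# The new coefficients are f^(n)(a)/n! = 1/(1-a)^(n+1).

def f_about_a(x, a, terms=400):
    return sum((x - a)**n / (1 - a)**(n + 1) for n in range(terms))

a, x = 0.5, 0.7          # |x - a| = 0.2 < R - |a| = 0.5
assert abs(f_about_a(x, a) - 1 / (1 - x)) < 1e-9

# The remark after the theorem: for a = -0.5 the guaranteed interval is
# |x + 0.5| < 0.5, but this series actually converges for |x + 0.5| < 1.5.
a, x = -0.5, 0.4         # outside the guaranteed interval, still converges
assert abs(f_about_a(x, a) - 1 / (1 - x)) < 1e-9
```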

If two power series converge to the same function in (-R, R), (81) shows that the two series must be identical, i.e., they must have the same coefficients. It is interesting that the same conclusion can be deduced from much weaker hypotheses:

Theorem 6. Suppose the series \sum a _ n x^n and \sum b _ n x^n converge in the segment S = (-R, R). Let E be the set of all x\in S at which \sum a _ nx^n=\sum b _ nx^n. If E has a limit point in S, then a _ n= b _ n for n = 0, 1, 2, \dots. Hence \sum a _ nx^n=\sum b _ nx^n holds for all x\in S.

Proof of Theorem 3. Let s _ n=c _ 0+\dots+c _ n, s _ {-1}=0. Then for |x|<1,
\[
\sum _ {n=0}^{m}c _ nx^n=\sum _ {n=0}^{m}(s _ n-s _ {n-1})x^n=(1-x)\sum _ {n=0}^{m-1}s _ nx^n+s _ mx^m\Longrightarrow f(x)=(1-x)\sum _ {n=0}^{\infty}s _ nx^n.
\]
Suppose s=\lim s _ n. Let \varepsilon>0 be given. Choose N so that n>N implies |s-s _ n|<\varepsilon/2. Then, since (1-x)\sum x^n=1\, (|x|<1), we obtain
\[|f(x)-s|=\Big|(1-x)\sum _ {n=0}^{\infty}(s _ n-s)x^n\Big|\leqslant(1-x)\sum _ {n=0}^{N}|s _ n-s||x|^n+\frac\varepsilon2 \leqslant\varepsilon\]
if x>1-\delta, for some suitably chosen \delta > 0. This implies \lim _ {x\to1}f(x)=\sum c _ n.

Proof of Theorem 4. We could establish the desired formula by a direct procedure similar to (although more involved than) the one used in the last theorem in Chapter 3. However, the following method seems more interesting.

Let E be a countable set, consisting of the points x _ 0 , x _ 1, x _ 2 , \dots, and suppose x _ n\to x _ 0 as n\to\infty. Define
\[f _ i(x _ 0)=\sum _ {j=1}^{\infty}a _ {ij},\quad f _ i(x _ n)=\sum _ {j=1}^{n}a _ {ij},\quad g(x)=\sum _ {i=1}^{\infty}f _ i(x)\, (x\in E).\]

Now each f _ i is continuous at x _ 0. Since |f _ i(x)|\leqslant b _ i for x\in E, the series \sum f _ i(x) converges uniformly on E (by the Weierstrass M-test), so that g is continuous at x _ 0. It follows that
\begin{align*}
\sum _ {i=1}^{\infty}\sum _ {j=1}^{\infty}a _ {ij} & =\sum _ {i=1}^{\infty}f _ i(x _ 0)=g(x _ 0)=\lim _ {n\to\infty}g(x _ n)= \lim _ {n\to\infty}\sum _ {i=1}^{\infty}f _ i(x _ n) \\
& =\lim _ {n\to\infty}\sum _ {i=1}^{\infty}\sum _ {j=1}^{n}a _ {ij}=\lim _ {n\to\infty}\sum _ {j=1}^{n}\sum _ {i=1}^{\infty}a _ {ij} =\sum _ {j=1}^{\infty}\sum _ {i=1}^{\infty}a _ {ij}.
\end{align*}

Proof of Theorem 5. We have
\[
f(x)=\sum _ {n=0}^{\infty}c _ n[(x-a)+a]^n=\sum _ {n=0}^{\infty}c _ n\sum _ {m=0}^{n}\binom{n}{m}a^{n-m}(x-a)^m =\sum _ {m=0}^{\infty}\left[\sum _ {n=m}^{\infty}\binom{n}{m}c _ na^{n-m}\right](x-a)^m.
\]
This is the desired expansion about the point x=a. To prove its validity, we have to justify the change which was made in the order of summation. Theorem 4 shows that this is permissible if
\[\sum _ {n=0}^{\infty}\sum _ {m=0}^{n}\left|c _ n\binom{n}{m}a^{n-m}(x-a)^m\right|=\sum _ {n=0}^{\infty}|c _ n|\cdot(|x-a|+|a|)^n\]
converges. It converges if |x-a|+|a|<R.

Finally, the form of the coefficients in (82) follows from (81).

Proof of Theorem 6. Put c _ n=a _ n-b _ n and f(x)=\sum c _ nx^n\, (x\in S). Then f(x)=0 on E. Let A be the set of all limit points of E in S, and let B consist of all other points of S. It is clear from the definition of ``limit point" that B is open. Suppose we can prove that A is open. Then A and B are disjoint open sets. Hence they are separated. Since S = A \cup B, and S is connected, one of A and B must be empty. By hypothesis, A is not empty. Hence B is empty, and A = S. Since f is continuous in S, A \subset E. Thus E = S, and (81) shows that c _ n = 0 for n = 0, 1, 2, \dots, which is the desired conclusion.

Thus we have to prove that A is open. If x _ 0\in A, Theorem 5 shows that f(x)=\sum d _ n(x-x _ 0)^n\, (|x-x _ 0|<R-|x _ 0|). We claim that d _ n=0 for all n. Otherwise, let k be the smallest nonnegative integer such that d _ k\neq0. Then f(x)=(x-x _ 0)^kg(x)\, (|x-x _ 0|<R-|x _ 0|), where g(x)=\sum _ {m=0}^{\infty}d _ {k+m}(x-x _ 0)^{m}. Since g is continuous at x _ 0 and g(x _ 0)=d _ k\neq0, there exists a \delta>0 such that g(x)\neq0 if |x-x _ 0|<\delta. It follows that f(x)\neq0 if 0<|x-x _ 0|<\delta. But this contradicts the fact that x _ 0 is a limit point of E. Thus d _ n=0 for all n, so that f(x)=0 for all x in a neighbourhood of x _ 0. This shows that A is open, and completes the proof.
2. The exponential and logarithmic functions

We define E(z)=\sum _ {n=0}^{\infty}\frac{z^n}{n!}. The ratio test shows that this series converges for every complex z. Applying the theorem on multiplication of absolutely convergent series, we obtain the important addition formula E(z+w)=E(z)E(w) (z,w complex). One consequence is that E(z)E(-z)=E(0)=1 (z complex). This shows that E(z)\neq0 for all z. We have E(x)>0 for all real x; E(x)\to+\infty as x\to+\infty; E(x)\to0 as x\to-\infty along the real axis; and x<y implies E(x)<E(y).
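The addition formula can be checked numerically from the series itself; the following sketch (mine, with an arbitrary choice of complex test points) sums the series by the recurrence term\cdot z/(n+1):

```python
# Truncated-series check of E(z) = sum z^n / n! and the addition formula
# E(z + w) = E(z) E(w) for complex arguments.

def E(z, terms=60):
    s, term = 0.0, 1.0 + 0j
    for n in range(terms):
        s += term
        term *= z / (n + 1)      # z^n/n!  ->  z^(n+1)/(n+1)!
    return s

z, w = 1.3 + 0.7j, -0.4 + 2.1j
assert abs(E(z + w) - E(z) * E(w)) < 1e-10
assert abs(E(z) * E(-z) - 1) < 1e-10   # E(z) is never 0
```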

The addition formula also shows that E'(z)=E(z).

It can be shown that E(p)=e^p for all rational p, where e=E(1).

We suggested the definition x^y=\sup x^p, where the sup is taken over all rational p such that p<y, for any real y and x > 1. If we thus define, for any real x, e^x=\sup e^p\, (p<x,\,p\text{ rational}), the continuity and monotonicity properties of E show that E(x)=e^x for all real x. This equation explains why E is called the exponential function.

The notation \exp(x) is often used in place of e^x, especially when x is a complicated expression.

Theorem 7. Let e^x be defined on \mathbb{R}^1. Then
  • e^x is continuous and differentiable for all x, and (e^x)'=e^x;
  • e^x is a strictly increasing function of x, and e^x>0;
  • e^{x+y}=e^xe^y;
  • \lim\limits _ {x\to+\infty}x^ne^{-x}=0, for every n.

Since E is strictly increasing and differentiable on \mathbb R^1, it has an inverse function L which is also strictly increasing and differentiable and whose domain is E(\mathbb R^1), that is, the set of all positive numbers. L is defined by E(L(y))=y\, (y>0), or, equivalently, L(E(x))=x, (x real). Differentiating, we get L'(y)=1/y\, (y>0). Since L(1)=0, we have \displaystyle L(y)=\int _ {1}^{y}\frac{dx}{x}. We also have L(uv)=L(u)+L(v),\, (u>0,\, v>0). This shows that L has the familiar property which makes logarithms useful tools for computation. The customary notation for L(x) is of course \ln x.

As to the limit behavior of \ln x, we have \ln x\to+\infty as x\to+\infty, \ln x\to-\infty as x\to0.

It is easily seen that x^\alpha=\exp(\alpha\ln x) for any rational \alpha and x>0. We now define x^\alpha, for any real \alpha and any x>0, by this formula. The continuity and monotonicity of E and L show that this definition leads to the same result as the previously suggested one.

If we differentiate it, we obtain (x^\alpha)'=\alpha x^{\alpha-1}. To prove this directly from the definition of the derivative, if x^\alpha is defined by the supremum and \alpha is irrational, is quite troublesome.

We wish to demonstrate one more property of \ln x, namely, \lim _ {x\to+\infty}x^{-\alpha}\ln x=0 for every \alpha>0. For if 0<\varepsilon<\alpha, and x>1, then
\[x^{-\alpha}\ln x=x^{-\alpha}\int _ {1}^{x}t^{-1}\, dt<x^{-\alpha}\int _ {1}^{x}t^{\varepsilon-1}\, dt =x^{-\alpha}\cdot\frac{x^\varepsilon-1}{\varepsilon}<\frac{x^{\varepsilon-\alpha}}{\varepsilon}.\]
3. The trigonometric functions

Let us define
\[C(x)=\frac12[E(ix)+E(-ix)],\quad S(x)=\frac{1}{2i}[E(ix)-E(-ix)].\]
We shall show that C(x) and S(x) coincide with the functions \cos x and \sin x, whose definition is usually based on geometric considerations. Since E(\bar z)=\overline{E(z)}, C(x) and S(x) are real for real x. Also,
\[E(ix)=C(x)+iS(x).\]
Thus C(x) and S(x) are the real and imaginary parts, respectively, of E(ix), if x is real. When x is real, |E(ix)|^2=E(ix)E(-ix)=1, so that |E(ix)|=1.

We have C(0)=1,\, S(0)=0,\, C'(x)=-S(x),\, S'(x)=C(x).

We assert that there exist positive numbers x such that C(x) = 0. For suppose this is not so. Since C(0) = 1, it then follows that C(x) > 0 for all x > 0, hence S'(x) > 0, hence S is strictly increasing; and since S(0) = 0, we have S(x) > 0 if x > 0. Hence if 0<x<y, we have S(x)(y-x)<\int _ {x}^{y}S(t)\, dt=C(x)-C(y)\leqslant2. Since S(x)>0, this cannot be true for large y, and we have a contradiction.

Let x _ 0 be the smallest positive number such that C(x _ 0) = 0. This exists, since the set of zeros of a continuous function is closed, and C(0) \neq 0. We define the number \pi by
\[\pi=2x _ 0.\]
Then C(\pi/2)=0, and S(\pi/2)=\pm1. Since C(x)>0 in (0,\pi/2), S is increasing in (0,\pi/2); hence S(\pi/2)=1. Thus E(\pi i/2)=i, and the addition formula gives E(\pi i)=-1,\, E(2\pi i)=1; hence E(z+2\pi i)=E(z) (z complex).
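This definition of \pi can be exercised numerically (a sketch of mine): sum the series for C, locate its smallest positive zero by bisection on (0,2) (where C changes sign), and compare 2x _ 0 with the familiar value of \pi:

```python
import math

def C(x, terms=40):
    # C(x) = sum (-1)^k x^(2k) / (2k)!  (the cosine series)
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -x * x / ((2 * k + 1) * (2 * k + 2))
    return s

# C(0) = 1 > 0 and C(2) < 0, so the smallest positive zero x0 lies in (0, 2);
# bisect for it, then check that 2 * x0 agrees with pi.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if C(mid) > 0:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
assert abs(2 * x0 - math.pi) < 1e-12
```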

Theorem 8. \phantom{null}
  1. The function E is periodic, with period 2\pi i.
  2. The functions C and S are periodic, with period 2\pi.
  3. If 0<t<2\pi, then E(it)\neq1.
  4. If z is a complex number with |z|=1, there is a unique t in [0,2\pi) such that E(it)=z.

Proof. Suppose 0<t<\pi/2 and E(it) = x + iy, with x, y real. Our preceding work shows that 0<x<1, 0<y<1. Note that E(4it)=x^4-6x^2y^2+y^4+4ixy(x^2-y^2). If E(4it) is real, it follows that x^2-y^2=0; since x^2+y^2=1, we have x^2=y^2=1/2, hence E(4it)=-1. This proves (3).

If 0\leqslant t _ 1<t _ 2<2\pi, then E(it _ 2)E(it _ 1)^{-1}=E(i(t _ 2-t _ 1))\neq 1, by (3). This establishes the uniqueness assertion in (4). To prove the existence assertion, fix z so that |z|=1. Write z=x+iy, with x and y real. Suppose first that x\geqslant0 and y\geqslant0. On [0,\pi/2], C decreases from 1 to 0. Hence C(t)=x for some t\in[0,\pi/2]. Since C^2+S^2=1 and S\geqslant0 on [0,\pi/2], it follows that z=E(it). If x<0 and y\geqslant0, the preceding conditions are satisfied by -iz. Hence -iz=E(it) for some t\in[0,\pi/2], and since i=E(\pi i/2), we obtain z=E(i(t+\pi/2)). Finally, if y<0, the preceding two cases show that -z=E(it) for some t\in(0,\pi). Hence z=-E(it)=E(i(t+\pi)).

It follows from (4) that the curve \gamma defined by \gamma(t)=E(it)\, (0\leqslant t\leqslant2\pi) is a simple closed curve whose range is the unit circle in the plane. Since \gamma'(t)=iE(it), the length of \gamma is \int _ {0}^{2\pi}|\gamma'(t)|\, dt=2\pi. This is of course the expected result for the circumference of a circle of radius 1. It shows that \pi, defined by our preceding formula, has the usual geometric significance.

In the same way we see that the point \gamma(t) describes a circular arc of length t _ 0 as t increases from 0 to t _ 0. Consideration of the triangle whose vertices are z _ 1=0, z _ 2=\gamma(t _ 0), z _ 3=C(t _ 0) shows that C(t) and S(t) are indeed identical with \cos t and \sin t, if the latter are defined in the usual way as ratios of the sides of a right triangle.

It should be stressed that we derived the basic properties of the trigonometric functions, without any appeal to the geometric notion of angle. There are other nongeometric approaches to these functions.
4. The algebraic completeness of the complex field

We are now in a position to give a simple proof of the fact that the complex field is algebraically complete, that is to say, that every nonconstant polynomial with complex coefficients has a complex root.

Theorem 9. Suppose a _ 0,\dots,a _ n are complex numbers, n\geqslant1, a _ n\neq0, P(z)=\sum _ {k=0}^{n}a _ kz^k. Then P(z)=0 for some complex number z.

Proof. Without loss of generality, assume a _ n= 1. Put \mu=\inf |P(z)| (z complex). If |z|=R, then |P(z)|\geqslant R^n[1-|a _ {n-1}|R^{-1}-\dots-|a _ 0|R^{-n}]. The right side tends to \infty as R\to\infty. Hence there exists R _ 0 such that |P(z)|>\mu if |z|>R _ 0. Since |P| is continuous on the closed disc with center at 0 and radius R _ 0, |P(z _ 0)| = \mu for some z _ 0. We claim that \mu=0. If not, put Q(z)=P(z+z _ 0)/P(z _ 0). Then Q is a nonconstant polynomial, Q(0)=1, and |Q(z)|\geqslant1 for all z. There is a smallest integer k, 1\leqslant k\leqslant n, such that Q(z)=1+b _ kz^k+\dots+b _ nz^n, b _ k\neq0. There is a real \theta such that e^{ik\theta}b _ k=-|b _ k|. If r>0 and r^k|b _ k|<1, then |1+b _ kr^ke^{ik\theta}|=1-r^k|b _ k|, so that |Q(re^{i\theta})|\leqslant 1-r^k(|b _ k|-r|b _ {k+1}|-\dots-r^{n-k}|b _ n|). For sufficiently small r, the expression in parentheses is positive; hence |Q(re^{i\theta})|<1, a contradiction. Thus \mu = 0, that is, P(z _ 0) = 0.
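The proof says, in effect, that |P| has no local minimum except at a root. The following crude descent (my sketch, with a hypothetical sample polynomial) exploits exactly that: repeatedly probe a ring of directions around the current point, move downhill when possible, shrink the ring otherwise; it drives |P| to 0:

```python
import cmath

def P(z):
    return z**2 + z + 1          # sample polynomial; its roots are complex

# Crude search for the minimum of |P|; by the theorem the minimum is 0.
z, r = 0 + 0j, 1.0
for _ in range(300):
    ring = [z + r * cmath.exp(2j * cmath.pi * k / 16) for k in range(16)]
    best = min(ring, key=lambda w: abs(P(w)))
    if abs(P(best)) < abs(P(z)):
        z = best                 # move downhill along the best direction
    else:
        r *= 0.5                 # no improvement on this ring: shrink it
assert abs(P(z)) < 1e-8          # a root of P has been located
```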
5. Fourier series

Definition 10. A \textbf{trigonometric polynomial} is a finite sum of the form
\begin{equation}\label{83}
f(x)=a _ 0+\sum _ {n=1}^{N}(a _ n\cos nx+b _ n\sin nx)\quad (x\text{ real}),
\end{equation}
where a _ 0,\dots,a _ N,b _ 1,\dots,b _ N are complex numbers. (83) can also be written in the form
\begin{equation}\label{84}
f(x)=\sum _ {-N}^{N}c _ ne^{inx}\quad (x\text{ real}),
\end{equation}
which is more convenient for most purposes. It is clear that every trigonometric polynomial is periodic, with period 2\pi.

If n is a nonzero integer, e^{inx} is the derivative of e^{inx}/in, which also has period 2\pi. Hence
\begin{equation}\label{85}
\frac{1}{2\pi}\int _ {-\pi}^{\pi}e^{inx}\, dx=
\begin{cases}
1, & n=0,\\
0, & n=\pm1,\pm2,\dots.
\end{cases}
\end{equation}

Let us multiply (84) by e^{-imx}, where m is an integer; if we integrate the product, (85) shows that
\begin{equation}\label{86}
c _ m=\frac{1}{2\pi}\int _ {-\pi}^{\pi}f(x)e^{-imx}\, dx
\end{equation}
for |m|\leqslant N. If |m|>N, the integral in (86) is 0.
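Formula (86) can be watched recovering the coefficients of a trigonometric polynomial (a sketch of mine; the particular coefficients below are an arbitrary choice). For trigonometric polynomials, an equally spaced Riemann sum over a full period computes the integral exactly, up to rounding, once the number of samples exceeds the degree:

```python
import cmath

coeffs = {-2: 1j, 0: 0.5, 1: 2.0, 3: -1.0 + 0.25j}   # a sample f as in (84)

def f(x):
    return sum(c * cmath.exp(1j * n * x) for n, c in coeffs.items())

def fourier_coeff(m, samples=4096):
    # (86): c_m = (1/2pi) int_{-pi}^{pi} f(x) e^{-imx} dx, via a Riemann sum.
    total = 0.0
    for k in range(samples):
        x = -cmath.pi + 2 * cmath.pi * k / samples
        total += f(x) * cmath.exp(-1j * m * x)
    return total / samples

for m in range(-5, 6):
    expected = coeffs.get(m, 0)      # coefficients beyond the degree are 0
    assert abs(fourier_coeff(m) - expected) < 1e-10
```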

The following observation can be read off from (84) and (86): The trigonometric polynomial f, given by (84), is real if and only if c _ {-n}=\bar c _ n for n=0,\dots,N.

In agreement with (84), we define a trigonometric series to be a series of the form \sum _ {-\infty}^{\infty}c _ ne^{inx} (x real); the Nth partial sum of it is defined to be the right side of (84).

If f is an integrable function on [-\pi, \pi], the numbers c _ m defined by (86) for all integers m are called the Fourier coefficients of f, and the series \sum _ {-\infty}^{\infty}c _ ne^{inx} formed with these coefficients is called the Fourier series of f.

The natural question which now arises is whether the Fourier series of f converges to f, or, more generally, whether f is determined by its Fourier series. That is to say, if we know the Fourier coefficients of a function, can we find the function, and if so, how?

The study of such series, and, in particular, the problem of representing a given function by a trigonometric series, originated in physical problems such as the theory of oscillations and the theory of heat conduction. The many difficult and delicate problems which arose during this study caused a thorough revision and reformulation of the whole theory of functions of a real variable. Among many prominent names, those of Riemann, Cantor, and Lebesgue are intimately connected with this field, which nowadays, with all its generalizations and ramifications, may well be said to occupy a central position in the whole of analysis.

We shall be content to derive some basic theorems which are easily accessible by the methods developed in the preceding chapters. For more thorough investigations, the Lebesgue integral is a natural and indispensable tool.

We shall first study more general systems of functions which share a
property analogous to (85).

Definition 11. Let \{\phi _ n\}(n=1,2,3\dots) be a sequence of complex functions on [a, b], such that
\[\int _ {a}^{b}\phi _ n(x)\overline{\phi _ m(x)}\, dx=0\quad (n\neq m).\]
Then \{\phi _ n\} is said to be an \textbf{orthogonal system of functions} on [a, b]. If, in addition,
\[\int _ {a}^{b}|\phi _ n(x)|^2\, dx=1\]
for all n, \{\phi _ n\} is said to be \textbf{orthonormal}.

For example, the functions (2\pi)^{-1/2}e^{inx}\, (n=0,\pm1,\pm2,\dots) form an orthonormal system on [-\pi,\pi]. So do the real functions \frac{1}{\sqrt{2\pi}},\frac{\cos x}{\sqrt{\pi}},\frac{\sin x}{\sqrt{\pi}},\frac{\cos 2x}{\sqrt{\pi}},\frac{\sin 2x}{\sqrt{\pi}},\dots.

If \{\phi _ n\} is orthonormal on [a,b] and if
\begin{equation}\label{87}
c _ n=\int _ {a}^{b}f(t)\overline{\phi _ n(t)}\, dt\quad (n=1,2,3,\dots),
\end{equation}
we call c _ n the nth Fourier coefficient of f relative to \{\phi _ n\}. We write f(x)\sim\sum _ {n=1}^{\infty}c _ n\phi _ n(x) and call this series the Fourier series of f (relative to \{\phi _ n\}).

Note that the symbol \sim implies nothing about the convergence of the series; it merely says that the coefficients are given by (87).

The following theorems show that the partial sums of the Fourier series of f have a certain minimum property. We shall assume here and in the rest of this chapter that f\in\mathcal{R}, although this hypothesis can be weakened.

Theorem 12. Let \{\phi _ n\} be orthonormal on [a,b]. Let
\begin{equation}\label{88}
s _ n(x)=\sum _ {m=1}^{n}c _ m\phi _ m(x)
\end{equation}
be the nth partial sum of the Fourier series of f, and suppose t _ n(x)=\sum _ {m=1}^{n}\gamma _ m\phi _ m(x). Then
\[\int _ {a}^{b}|f-s _ n|^2\, dx\leqslant\int _ {a}^{b}|f-t _ n|^2\, dx,\]
and equality holds if and only if \gamma _ m=c _ m\, (m=1,\dots,n).

That is to say, among all functions t _ n, the sum s _ n gives the best possible mean square approximation to f.


Theorem 13. If \{\phi _ n\} is orthonormal on [a,b], and if f(x)\sim\sum _ {n=1}^{\infty} c _ n\phi _ n(x), then
\begin{equation}\label{89}
\sum _ {n=1}^{\infty}|c _ n|^2\leqslant\int _ {a}^{b}|f(x)|^2\, dx.
\end{equation}
In particular, \lim c _ n=0.

Proof. Let \int denote the integral over [a,b], \sum the sum from 1 to n. Then
\[\int f\bar t _ n=\int f\sum \bar\gamma _ m\bar\phi _ m=\sum c _ m\bar\gamma _ m\]
by the definition of \{c _ m\},
\[\int|t _ n|^2=\int t _ n\bar t _ n=\int\sum\gamma _ m\phi _ m\sum\bar\gamma _ k\bar\phi _ k=\sum|\gamma _ m|^2\]
since \{\phi _ m\} is orthonormal, and so
\begin{align*}
\int|f-t _ n|^2 & =\int|f|^2-\int f\bar t _ n-\int\bar ft _ n+\int|t _ n|^2 \\
& =\int|f|^2-\sum c _ m\bar\gamma _ m-\sum \bar c _ m\gamma _ m+\sum\gamma _ m\bar\gamma _ m \\
& =\int|f|^2-\sum|c _ m|^2+\sum|\gamma _ m-c _ m|^2,
\end{align*}
which is evidently minimized if and only if \gamma _ m=c _ m.

Putting \gamma _ m = c _ m in this calculation, we obtain \int |s _ n(x)|^2\, dx=\sum|c _ m|^2\leqslant\int|f(x)|^2\, dx.

Letting n\to\infty, we obtain (89), the so-called ``Bessel inequality".

Trigonometric series. From now on we shall deal only with the trigonometric system. We shall consider functions f that have period 2\pi and that are Riemann-integrable on [-\pi, \pi] (and hence on every bounded interval). The Fourier series of f is then the series \sum _ {-\infty}^{\infty}c _ ne^{inx} whose coefficients c _ n are given by the integrals (86), and
\begin{equation}\label{810}
s _ N(x)=s _ N(f;x)=\sum _ {n=-N}^{N}c _ ne^{inx}
\end{equation}
is the Nth partial sum of the Fourier series of f. The inequality in the proof of Theorem 13 now takes the form
\begin{equation}\label{811}
\frac{1}{2\pi}\int _ {-\pi}^{\pi}|s _ N(x)|^2\, dx=\sum _ {n=-N}^{N}|c _ n|^2\leqslant\frac{1}{2\pi}\int _ {-\pi}^{\pi}|f(x)|^2\, dx.
\end{equation}

In order to obtain an expression for s _ N that is more manageable than (810), we introduce the Dirichlet kernel
\begin{equation}\label{812}
D _ N(x)=\sum _ {n=-N}^{N}e^{inx}=\frac{\sin(N+\frac12)x}{\sin (x/2)}.
\end{equation}
The first of these equalities is the definition of D _ N(x). The second follows if both sides of the identity (e^{ix}-1)D _ N(x)=e^{i(N+1)x}-e^{-iNx} are multiplied by e^{-ix/2}.
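The two expressions for D _ N in (812) are easy to compare numerically (my sketch; the test points are arbitrary non-multiples of 2\pi, where \sin(x/2)\neq0):

```python
import cmath, math

def D_explicit(x, N):
    # the defining sum in (812)
    return sum(cmath.exp(1j * n * x) for n in range(-N, N + 1))

def D_closed(x, N):
    # the closed form sin((N + 1/2) x) / sin(x / 2)
    return math.sin((N + 0.5) * x) / math.sin(x / 2)

for N in (1, 5, 20):
    for x in (0.3, 1.0, 2.7, -1.9):
        assert abs(D_explicit(x, N) - D_closed(x, N)) < 1e-9
```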

By (86) and (810), we have
\begin{align}
s _ N(f;x) & =\sum _ {n=-N}^{N}\frac{1}{2\pi}\int _ {-\pi}^{\pi}f(t)e^{-int}\, dt\, e^{inx}= \frac{1}{2\pi}\int _ {-\pi}^{\pi}f(t)\sum _ {n=-N}^{N}e^{in(x-t)}\, dt\notag \\
& =\frac{1}{2\pi}\int _ {-\pi}^{\pi}f(t)D _ N(x-t)\, dt=\frac{1}{2\pi}\int _ {-\pi}^{\pi}f(x-t)D _ N(t)\, dt.
\label{813}
\end{align}

We shall prove just one theorem about the pointwise convergence of Fourier series.

Theorem 14. If, for some x, there are some constants \delta>0 and M<\infty such that |f(x+t)-f(x)|<M|t| for all t\in(-\delta,\delta), then \lim _ {N\to\infty}s _ N(f;x)=f(x).

Corollary 15. If f(x)=0 for all x in some segment J, then \lim s _ N(f;x)=0 for every x\in J.

Here is another formulation of this corollary:

If f(t) = g(t) for all t in some neighborhood of x, then
\[s _ N(f;x)-s _ N(g;x)=s _ N(f-g;\, x)\to0\quad \text{as}\quad N\to\infty.\]

This is usually called the localization theorem. It shows that the behavior of the sequence \{s _ N(f; x)\}, as far as convergence is concerned, depends only on the values of f in some (arbitrarily small) neighborhood of x. Two Fourier series may thus have the same behavior in one interval, but may behave in entirely different ways in some other interval. We have here a very striking contrast between Fourier series and power series (Theorem theorem815).

Proof. Define g(t)=\frac{f(x-t)-f(x)}{\sin(t/2)} for 0<|t|\leqslant\pi, and put g(0)=0. By (812) and (85), \frac{1}{2\pi}\int _ {-\pi}^{\pi}D _ N(t)\, dt=1. Hence (813) shows that
\begin{align*}
s _ N(f;x)-f(x) & =\frac{1}{2\pi}\int _ {-\pi}^{\pi}g(t)\sin\Big(N+\frac12\Big)t\, dt \\
& =\frac{1}{2\pi}\int _ {-\pi}^{\pi}\Big[g(t)\cos\frac{t}{2}\Big]\sin Nt\, dt+ \frac{1}{2\pi}\int _ {-\pi}^{\pi}\Big[g(t)\sin\frac{t}{2}\Big]\cos Nt\, dt.
\end{align*}
By the hypothesis on f, g(t)\cos(t/2) and g(t)\sin(t/2) are bounded (note that |\sin(t/2)|\geqslant|t|/\pi for |t|\leqslant\pi). The last two integrals thus tend to 0 as N\to\infty, by Theorem 13.
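Theorem 14 can be watched at work on f(x)=|x| (my example), which satisfies the Lipschitz condition at every point. Its Fourier coefficients, easily computed from (86) by integration by parts, are c _ 0=\pi/2 and c _ n=((-1)^n-1)/(\pi n^2) for n\neq0, so s _ N(f;x)=\pi/2+\sum _ {n=1}^{N}2c _ n\cos nx:

```python
import math

# Partial sums of the Fourier series of f(x) = |x| on [-pi, pi]:
# c_0 = pi/2 and c_n = ((-1)^n - 1)/(pi n^2), so only odd n contribute.

def s_N(x, N):
    total = math.pi / 2
    for n in range(1, N + 1):
        c = ((-1)**n - 1) / (math.pi * n * n)
        total += 2 * c * math.cos(n * x)
    return total

for x in (0.0, 0.5, 1.0, 2.5):
    assert abs(s_N(x, 4000) - abs(x)) < 1e-3   # tail is O(1/N)
```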

We conclude with two other approximation theorems.

Theorem 16. If f is continuous (with period 2\pi) and if \varepsilon > 0, then there is a trigonometric polynomial P such that |P(x)-f(x)|<\varepsilon for all real x.

Proof. If we identify x and x + 2\pi, we may regard the 2\pi-periodic functions on \mathbb R^1 as functions on the unit circle T, by means of the mapping x\to e^{ix}. The trigonometric polynomials, i.e., the functions of the form (84), form a self-adjoint algebra \mathcal{A}, which separates points on T, and which vanishes at no point of T. Since T is compact, the Stone--Weierstrass theorem tells us that \mathcal{A} is dense in \mathcal{C}(T). This is exactly what the theorem asserts.

Theorem 17. Suppose f and g are Riemann-integrable functions with period 2\pi, and
\[f(x)\sim \sum _ {n=-\infty}^{\infty}c _ n e^{inx},\quad g(x)\sim \sum _ {n=-\infty}^{\infty}\gamma _ n e^{inx}.\]
Then
\begin{gather}
\lim _ {N\to\infty}\frac{1}{2\pi}\int _ {-\pi}^{\pi}|f(x)-s _ N(f;x)|^2\, dx=0,\label{814} \\
\begin{split}
\frac{1}{2\pi}\int _ {-\pi}^{\pi}f(x)\overline{g(x)}\, dx & =\sum _ {n=-\infty}^{\infty}c _ n\bar\gamma _ n, \\
\frac{1}{2\pi}\int _ {-\pi}^{\pi}|f(x)|^2\, dx & =\sum _ {n=-\infty}^{\infty}|c _ n|^2.
\end{split}\label{815}
\end{gather}

Proof. Let us use the notation
\[|h| _ 2=\Big\{\frac{1}{2\pi}\int _ {-\pi}^{\pi}|h(x)|^2\, dx\Big\}^{1/2}.\]

Lemma 18. If f,g,h\in\mathcal{R}, then |f-h| _ 2\leqslant|f-g| _ 2+|g-h| _ 2.

Lemma 19. If f\in\mathcal{R} and \varepsilon>0, then there exists a continuous function g such that |f-g| _ 2<\varepsilon.

Let \varepsilon>0 be given. Since f\in\mathcal{R} and f(\pi)=f(-\pi), there is a continuous 2\pi-periodic function h with |f-h| _ 2<\varepsilon.

By Theorem 16, there is a trigonometric polynomial P such that |h(x)-P(x)|<\varepsilon for all x. Hence |h-P| _ 2<\varepsilon. If P has degree N _ 0, Theorem 12 shows that |h-s _ N(h)| _ 2\leqslant |h-P| _ 2<\varepsilon for all N\geqslant N _ 0.

By (811), |s _ N(h)-s _ N(f)| _ 2=|s _ N(h-f)| _ 2\leqslant|h-f| _ 2<\varepsilon.

Now the triangle inequality, combined with these inequalities, shows that |f-s _ N(f)| _ 2<3\varepsilon\, (N\geqslant N _ 0). This proves (814). Next, by (810),
\[\frac{1}{2\pi}\int _ {-\pi}^{\pi}s _ N(f)\bar g\, dx=\sum _ {n=-N}^{N}c _ n\frac{1}{2\pi}\int _ {-\pi}^{\pi}e^{inx} \overline{g(x)}\, dx =\sum _ {n=-N}^{N}c _ n\bar\gamma _ n,\]
and the Schwarz inequality shows that
\[\left|\int f\bar g-\int s _ N(f)\bar g\right|\leqslant\int|f-s _ N(f)||g|\leqslant \left\{\int|f-s _ N(f)|^2\int|g|^2\right\}^{1/2},\]
which tends to 0, as N\to\infty, by (814). Then we can easily obtain (815).
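A celebrated application of Parseval's identity (815) is f(x)=x (my worked example): its Fourier coefficients are c _ n=i(-1)^n/n for n\neq0 and c _ 0=0 (a short computation from (86)), so (815) reads \pi^2/3=2\sum _ {n\geqslant1}1/n^2, i.e. \sum 1/n^2=\pi^2/6:

```python
import math

# Parseval (815) for f(x) = x: (1/2pi) int x^2 dx = sum |c_n|^2 with
# |c_n|^2 = 1/n^2 (n != 0), which gives sum_{n>=1} 1/n^2 = pi^2/6.

lhs = math.pi**2 / 3                     # (1/2pi) * integral of x^2 on [-pi, pi]
rhs = 2 * sum(1.0 / n**2 for n in range(1, 200000))
assert abs(lhs - rhs) < 1e-4             # partial-sum tail ~ 2/200000 = 1e-5
```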
6. The Gamma function

This function is closely related to factorials and crops up in many unexpected places in analysis.

Our presentation will be very condensed, with only a few comments after each theorem.

Definition 20. For 0<x<\infty, \displaystyle \Gamma(x)=\int _ {0}^{\infty}t^{x-1}e^{-t}\, dt. The integral converges for these x. (When 0<x<1, the behavior at both endpoints 0 and \infty has to be checked.)

Theorem 21. \phantom{null}
  1. The functional equation \Gamma(x+1)=x\Gamma(x) holds if 0<x<\infty.
  2. \Gamma(n+1)=n! for n=1,2,3,\dots.
  3. \ln \Gamma is convex on (0,\infty).

Theorem 22. If f is a positive function on (0, \infty) such that
\[f(x+1)=xf(x),\quad f(1)=1,\quad \ln f\text{ is convex,}\]
then f(x)=\Gamma(x).

Theorem 23. For all x>0,
\[\Gamma(x)=\lim _ {n\to\infty}\frac{n!\, n^x}{x(x+1)\dots(x+n)}.\]
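This limit formula converges slowly but visibly; the sketch below (mine) evaluates it in logarithmic form, since the raw factorial overflows floating point, and compares against Python's built-in math.gamma:

```python
import math

# Theorem 23: Gamma(x) = lim n! n^x / (x(x+1)...(x+n)), checked in log form.

def log_gamma_approx(x, n):
    log_num = sum(math.log(k) for k in range(1, n + 1)) + x * math.log(n)
    log_den = sum(math.log(x + k) for k in range(n + 1))
    return log_num - log_den

for x in (0.5, 1.5, 2.5):
    approx = math.exp(log_gamma_approx(x, 20000))
    assert abs(approx - math.gamma(x)) < 1e-3   # error is O(1/n)
```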

Theorem 24. If x>0 and y>0, then
\begin{equation}\label{816}
\int _ {0}^{1}t^{x-1}(1-t)^{y-1}\, dt=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.
\end{equation}

This integral is the so-called beta function B(x, y).

Some consequences. The substitution t = \sin^2\theta turns (816) into
\[2\int _ {0}^{{\pi}/{2}}(\sin \theta)^{2x-1}(\cos \theta)^{2y-1}\, d\theta=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \quad\Longrightarrow\quad\Gamma\Big(\frac12\Big)=\sqrt{\pi}.\]
The substitution t = s^2 turns the integral defining \Gamma(x) into
\[\Gamma(x)=2\int _ {0}^{\infty}s^{2x-1}e^{-s^2}\, ds\Longrightarrow\int _ {-\infty}^{+\infty}e^{-s^2}\, ds=\sqrt{\pi}.\]

The identity
\[\Gamma(x)=\frac{2^{x-1}}{\sqrt{\pi}}\Gamma\Big(\frac{x}{2}\Big)\Gamma\Big(\frac{x+1}{2}\Big)\]
(the Legendre duplication formula) follows from Theorem 22, applied to the function on the right side.

Theorem 25 (Stirling's formula). This provides a simple approximate expression for \Gamma(x + 1) when x is large (hence for n! when n is large). The formula is
\[\lim _ {x\to\infty}\frac{\Gamma(x+1)}{\left(\dfrac{x}{e}\right)^x\sqrt{2\pi x}}=1.\]

Proof. Omitted here.

