You can read the latest updated chapters of the LaTeX document online at: https://gaomj.cn/pdfjs/web/viewer.html?file=Manifold.pdf
Chapter 6: Differential Forms
Contents
1. Symmetric and alternating tensors
1.1. Multilinear algebra
1.2. Alternating tensors
1.3. Wedge products
2. Differential forms on manifolds
3. Exterior derivatives
Differential forms are generalizations of real-valued functions on a manifold. A differential k-form assigns to each point a k-covector on its tangent space, instead of a number. As mentioned earlier, differential 1-forms are just covector fields.
In this chapter we will study differential forms from the tensor point of view, so we first introduce some fundamental facts about tensors.
1. Symmetric and alternating tensors
We have seen in previous chapters how linear algebra is applied to manifolds. The differential of a smooth map is a linear map, from which many consequences follow. However, there are many situations in which multilinear maps are the objects of interest. This leads to the introduction of tensors.
Tensors on a vector space are multilinear generalizations of covectors. Abstractly speaking, they are elements of tensor products of the dual vector space with itself. Alternatively, they are real-valued multilinear functions of several vectors. We will consider two special classes of tensors, called symmetric tensors and alternating tensors.
1.1. Multilinear algebra
Suppose V _ 1,\dots,V _ k,W are vector spaces. A map F:V _ 1\times\dots\times V _ k\to W is a multilinear map if it is linear in each variable while the others are held fixed. If k=2, F is said to be bilinear. The set of all multilinear maps forms a vector space under the usual operations of pointwise addition and scalar multiplication, denoted by \mathcal L(V _ 1,\dots, V _ k;W).
Suppose F\in\mathcal L(V _ 1,\dots, V _ k;\mathbb R) and G\in\mathcal L(W _ 1,\dots,W _ l;\mathbb R). We define the tensor product of F and G, denoted by F\otimes G, by
\begin{align*}
F\otimes G: \prod V _ i\times\prod W _ j & \to\mathbb R \\
(v _ 1,\dots,v _ k,w _ 1,\dots,w _ l) & \mapsto F(v _ 1,\dots,v _ k)G(w _ 1,\dots,w _ l).
\end{align*}
Clearly, F\otimes G\in\mathcal L(V _ 1,\dots,V _ k,W _ 1,\dots,W _ l;\mathbb R). What concerns us here is the k-fold tensor product of V^\ast, denoted by T^k(V^\ast), that is,
\[
T^{k}(V^{*})=\underbrace{V^{\ast} \otimes \cdots \otimes V^{\ast}} _ {k}.
\]An element of T^k(V^\ast) is called a covariant k-tensor on V.
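For instance, if \varepsilon^1,\varepsilon^2 denote the standard dual basis of (\mathbb R^2)^\ast, then \varepsilon^1\otimes\varepsilon^2\in T^2((\mathbb R^2)^\ast) is the covariant 2-tensor
\[
(\varepsilon^1\otimes\varepsilon^2)(v,w)=\varepsilon^1(v)\, \varepsilon^2(w)=v^1w^2.
\]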
A covariant k-tensor \alpha on V is said to be symmetric if its value is unchanged by interchanging any pair of arguments:
\[
\alpha(\dots,v _ i,\dots,v _ j,\dots)=\alpha(\dots,v _ j,\dots,v _ i,\dots).
\]And it is said to be alternating, or antisymmetric, if it changes sign whenever two of its arguments are interchanged:
\[
\alpha(\dots,v _ i,\dots,v _ j,\dots)=-\alpha(\dots,v _ j,\dots,v _ i,\dots).
\]Alternating covariant k-tensors are also called exterior forms, multicovectors, or k-covectors. The subspace of all alternating k-tensors on V is denoted by \Lambda^k(V^\ast)\subseteq T^k(V^\ast).
An excellent example of an alternating covariant n-tensor is the determinant, regarded as a multilinear function of n vectors in \mathbb R^n.
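As a quick check in dimension 2: writing vectors in coordinates,
\[
\det(v,w)=v^1w^2-v^2w^1=(\varepsilon^1\otimes\varepsilon^2-\varepsilon^2\otimes\varepsilon^1)(v,w),
\]which visibly changes sign when v and w are interchanged.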
To analyze alternating tensors, let's recall some facts about permutations. A permutation is simply a bijective map from a set to itself, and the set of all permutations of a given set forms a group under composition. We denote the group of permutations of \{1,\dots,n\} by S _ n and call it the symmetric group on n elements. We can define the sign of a permutation \sigma\in S _ n based on its inversion count. Equivalently, \operatorname{sign}(\sigma)=+1 if \sigma is even, meaning it can be written as a composition of an even number of transpositions, and \operatorname{sign}(\sigma)=-1 if \sigma is odd.
1.2. Alternating tensors
The following proposition gives several equivalent characterizations of alternating tensors; the proof is not difficult. For a covariant k-tensor \alpha, the following are equivalent:
- \alpha is alternating.
- For any permutation \sigma\in S _ k,
\[
\alpha(v _ {\sigma(1)},\dots,v _ {\sigma(k)})=(\operatorname{sign}\sigma )\alpha(v _ 1,\dots,v _ k).
\] - \alpha(v _ 1,\dots,v _ k)=0 whenever v _ 1,\dots,v _ k are linearly dependent.
- Whenever two of the arguments are equal, \alpha(v _ 1,\dots,v _ k)=0.
We now define an operation (which is actually a projection), called alternation, or antisymmetrization \operatorname{Alt}:T^k(V^\ast)\to\Lambda^k(V^\ast), by
\[
(\operatorname{Alt}\alpha)(v _ 1,\dots,v _ k)=\frac{1}{k!}\sum _ {\sigma\in S _ k}(\operatorname{sign}\sigma)\alpha(v _ {\sigma(1)},\dots,v _ {\sigma(k)}).
\]
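For k=2 the formula reads
\[
(\operatorname{Alt}\alpha)(v _ 1,v _ 2)=\frac{1}{2}\big(\alpha(v _ 1,v _ 2)-\alpha(v _ 2,v _ 1)\big);
\]for example, \operatorname{Alt}(\varepsilon^1\otimes\varepsilon^2)=\frac{1}{2}(\varepsilon^1\otimes\varepsilon^2-\varepsilon^2\otimes\varepsilon^1).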
It can be proved that the alternation of any covariant tensor is alternating. Let \tau\in S _ k. Note that the sign of the composition of two permutations is the product of their signs, and that \operatorname{sign}^2(\tau)=1. We have
\begin{align*}
&\operatorname{Alt} \alpha(v _ {\tau(1)}, \ldots, v _ {\tau(k)})
=\frac{1}{k!} \sum _ {\sigma \in S _ {k}} \operatorname{sign}(\sigma) \alpha(v _ {\tau \circ \sigma(1)}, \ldots, v _ {\tau \circ \sigma(k)}) \\
={}&\frac{1}{k!} \sum _ {\eta \in S _ {k}} \frac{\operatorname{sign}(\eta)}{\operatorname{sign}(\tau)} \alpha(v _ {\eta(1)}, \ldots, v _ {\eta(k)})
=\operatorname{sign}(\tau) \operatorname{Alt}(\alpha)(v _ {1}, \ldots, v _ {k}).
\end{align*}Here we have made the substitution \eta=\tau\circ\sigma. Thus \operatorname{Alt}\alpha is indeed alternating.
Next we single out a family of basic alternating tensors. Let (E _ i) be a basis for V and (\varepsilon^i) the dual basis for V^\ast. For a multi-index I=(i _ 1,\dots,i _ k), the elementary k-covector \varepsilon^I (Definition 3) is defined by
\[
\varepsilon^I(v _ 1,\dots,v _ k)=\det
\begin{bmatrix}
\varepsilon^{i _ 1}(v _ 1) & \cdots & \varepsilon^{i _ 1}(v _ k) \\
\vdots & \ddots & \vdots \\
\varepsilon^{i _ k}(v _ 1) & \cdots & \varepsilon^{i _ k}(v _ k)
\end{bmatrix}=\det
\begin{bmatrix}
v _ 1^{i _ 1} & \cdots & v _ k^{i _ 1} \\
\vdots & \ddots & \vdots \\
v _ 1^{i _ k} & \cdots & v _ k^{i _ k}
\end{bmatrix}.
\] where we have written v _ j=v _ j^iE _ i in terms of the basis (E _ i) dual to (\varepsilon^i).
For example, in terms of standard dual basis for (\mathbb R^3)^\ast, we can simply write \varepsilon^{13}(v,w)=v^1w^3-w^1v^3.
The following property of elementary k-covectors is easy to check, and is cited below as Proposition 4 (3): for any multi-index J=(j _ 1,\dots,j _ k),
\[
\varepsilon^I(E _ {j _ 1},\dots,E _ {j _ k})=\delta _ J^I:=\det
\begin{bmatrix}
\delta _ {j _ 1}^{i _ 1} & \cdots & \delta _ {j _ k}^{i _ 1} \\
\vdots & \ddots & \vdots \\
\delta _ {j _ 1}^{i _ k} & \cdots & \delta _ {j _ k}^{i _ k}
\end{bmatrix}.
\]
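For example, in \mathbb R^3 with I=(1,3),
\[
\varepsilon^{13}(E _ 1,E _ 3)=1,\qquad \varepsilon^{13}(E _ 3,E _ 1)=-1,\qquad \varepsilon^{13}(E _ 1,E _ 2)=0,
\]in accordance with the determinant formula for \delta _ J^I.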
Using elementary k-covectors, we can obtain a basis for \Lambda^k(V^\ast) (Proposition 5). The trick is to pick out the increasing multi-indices I=(i _ 1,\dots,i _ k), those satisfying i _ 1<\dots<i _ k: the set
\[
\mathcal E=\{\varepsilon^I\mid i _ 1< \cdots < i _ k\}
\]is a basis for \Lambda^k(V^\ast). Therefore
\[
\dim\Lambda^k(V^\ast)=\binom{n}{k}=\frac{n!}{k!(n-k)!}.
\]If k>n, then the dimension is 0.
Proof. If k>n the statement is obvious since any k vectors in V are linearly dependent. If k\leqslant n, first we check linear independence. Suppose \sum'\lambda _ I\varepsilon^I=0, where \sum' denotes the sum over increasing multi-indices. We will prove that every coefficient \lambda _ I is zero. Let J be an increasing multi-index and evaluate \sum'\lambda _ I\varepsilon^I at (E _ {j _ 1},\dots,E _ {j _ k}). By the previous proposition we get 0=\sum'\lambda _ I\delta _ J^I=\lambda _ J.
Now suppose \alpha\in \Lambda^k(V^\ast). We need to find coefficients \lambda _ I, one for each increasing multi-index, such that \alpha=\sum'\lambda _ I\varepsilon^I. We evaluate the right-hand side at (E _ {j _ 1},\dots,E _ {j _ k}), where J=(j _ 1,\dots,j _ k) is an arbitrary multi-index, and then by the previous proposition we get \sum'\lambda _ I\varepsilon^I(E _ {j _ 1},\dots,E _ {j _ k})=\sum'\lambda _ I\delta^I _ J.
Given a multi-index J, we find the value of \delta^I _ J for every increasing multi-index I. If there is a repeated index in J, then \delta^I _ J=0. Suppose instead the indices in J are all distinct. Then there is a unique increasing multi-index I' such that J is a permutation of I', i.e., J=I' _ \sigma for some \sigma\in S _ k. For this I', it can be checked that
\[\delta^{I'} _ J=\operatorname{sign}\sigma.\]For any other increasing multi-index I, the value \delta^I _ J is zero. Therefore
\[
\sum\nolimits'\lambda _ I\varepsilon^I(E _ {j _ 1},\dots,E _ {j _ k})=\sum\nolimits'\lambda _ I\delta^I _ J=(\operatorname{sign}\sigma)\lambda _ {I'}.
\]
On the other hand, we evaluate \alpha on (E _ {j _ 1},\dots,E _ {j _ k}). If J has a repeated index then \alpha(E _ {j _ 1},\dots,E _ {j _ k})=0 since \alpha\in \Lambda^k(V^\ast). Otherwise, there is a unique increasing multi-index I' such that J=I' _ \sigma, so \alpha(E _ {j _ 1},\dots,E _ {j _ k})=(\operatorname{sign}\sigma)\alpha(E _ {i' _ 1},\dots,E _ {i' _ k}).
Combining the results, if we set \lambda _ I=\alpha(E _ {i _ 1},\dots,E _ {i _ k}), then we always have \sum'\lambda _ I\varepsilon^I(E _ {j _ 1},\dots,E _ {j _ k})=\alpha(E _ {j _ 1},\dots,E _ {j _ k}) for any multi-index J. This means \sum'\lambda _ I\varepsilon^I=\alpha, showing that \mathcal E spans \Lambda^k(V^\ast). The proof is complete.
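For example, when n=3 and k=2 the proposition gives \dim\Lambda^2(V^\ast)=\binom{3}{2}=3, with basis
\[
\{\varepsilon^{12},\varepsilon^{13},\varepsilon^{23}\}.
\]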
An important consequence concerns determinants of linear maps. Suppose \dim V=n, \omega\in\Lambda^n(V^\ast), and T:V\to V is a linear map. Then for any vectors v _ 1,\dots,v _ n,
\[
\omega(Tv _ 1,\dots,Tv _ n)=(\det T)\omega(v _ 1,\dots,v _ n).
\]
Proof. Let (E _ i) be any basis for V and (\varepsilon^i) the dual basis. Let T _ i=TE _ i=T _ i^jE _ j. Since \Lambda^n(V^\ast) is one-dimensional, we only need to consider \omega=\varepsilon^{1\dots n}. By multilinearity, it suffices to consider the case in which all v _ i are basis vectors, and by interchanging finitely many pairs of arguments, we may assume v _ i=E _ i. By Definition 3,
\[
\varepsilon^{1\dots n}(T _ 1,\dots,T _ n)=\det(T _ i^j) _ {n\times n}=\det T=(\det T)\varepsilon^{1\dots n}(E _ 1,\dots,E _ n).
\]
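As a concrete case, let V=\mathbb R^2 and let T be given by TE _ 1=aE _ 1+cE _ 2 and TE _ 2=bE _ 1+dE _ 2. Then
\[
\varepsilon^{12}(TE _ 1,TE _ 2)=ad-bc=(\det T)\, \varepsilon^{12}(E _ 1,E _ 2).
\]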
1.3. Wedge products
Now we define a product operation for alternating tensors, called the wedge product or exterior product. Let V be a finite dimensional vector space.
\begin{align*}
\wedge : \Lambda^k(V^\ast)\times\Lambda^l(V^\ast)&\to\Lambda^{k+l}(V^\ast)\\
(\omega,\eta)&\mapsto\omega\wedge\eta=\frac{(k+l)!}{k!l!}\operatorname{Alt}(\omega\otimes\eta).
\end{align*}
The coefficient \frac{(k+l)!}{k!l!} in front of the alternation is chosen so that the following simple relation holds (Proposition 8): for multi-indices I=(i _ 1,\dots,i _ k) and J=(j _ 1,\dots,j _ l), with IJ=(i _ 1,\dots,i _ k,j _ 1,\dots,j _ l),
\[
\varepsilon^I\wedge\varepsilon^J=\varepsilon^{IJ}.
\]
Proof. It suffices to check that both sides agree when all the arguments are basis vectors: \varepsilon^I\wedge\varepsilon^J(E _ {p _ 1},\dots,E _ {p _ {k+l}})=\varepsilon^{IJ}(E _ {p _ 1},\dots,E _ {p _ {k+l}}). If P=(p _ 1,\dots,p _ {k+l}) has a repeated index, then by the basic property of alternating tensors both \varepsilon^I\wedge\varepsilon^J and \varepsilon^{IJ} take the value zero. If some p _ m appears in neither I nor J, then by Proposition 4 (3) \varepsilon^{IJ} takes the value zero; also, in each term of the alternation one of the factors \varepsilon^I or \varepsilon^J receives E _ {p _ m} among its arguments and hence vanishes, so \varepsilon^I\wedge\varepsilon^J(E _ {p _ 1},\dots,E _ {p _ {k+l}})=0 as well.
Hence, we only need to consider the case in which P=(p _ 1,\dots,p _ {k+l}) has no repeated indices and is a permutation of IJ. Since interchanging any pair of the indices changes the sign of both \varepsilon^I\wedge\varepsilon^J and \varepsilon^{IJ} simultaneously, only one case needs to be considered: P=IJ with no repeated indices. By Proposition 4 (3), \varepsilon^{IJ} takes the value 1. We need to show \varepsilon^I\wedge\varepsilon^J(E _ {p _ 1},\dots,E _ {p _ {k+l}})=1. By definition,
\begin{align*}
&\varepsilon^I\wedge\varepsilon^J(E _ {p _ 1},\dots,E _ {p _ {k+l}})=\frac{(k+l)!}{k!l!}\operatorname{Alt}(\varepsilon^I\otimes\varepsilon^J)(E _ {p _ 1}, \dots,E _ {p _ {k+l}}) \\
={} & \frac{1}{k!\,l!}\sum _ {\sigma\in S _ {k+l}}(\operatorname{sign}\sigma)\varepsilon^I(E _ {p _ {\sigma(1)}},\dots,E _ {p _ {\sigma(k)}})\varepsilon^J (E _ {p _ {\sigma(k+1)}},\dots,E _ {p _ {\sigma(k+l)}}).
\end{align*}
The only terms in the sum above that take nonzero values are those in which \sigma permutes the first k indices and the last l indices of P separately. Therefore \sigma can be written as \sigma=\tau\eta, where \tau\in S _ k acts by permuting \{1,\dots,k\} and \eta\in S _ l acts by permuting \{k+1,\dots,k+l\}. We have \operatorname{sign}(\tau\eta)=\operatorname{sign}(\tau)\operatorname{sign}(\eta), and
\begin{align*}
& \varepsilon^I\wedge\varepsilon^J(E _ {p _ 1},\dots,E _ {p _ {k+l}}) \\
={} & \frac{1}{k!l!} \sum _ {\substack{\tau \in S _ {k} \\
\eta \in S _ {l}}}(\operatorname{sign} \tau)(\operatorname{sign} \eta) \varepsilon^{I}(E _ {p _ {\tau(1)}}, \dots, E _ {p _ {\tau(k)}}) \\
&\hphantom{\frac{1}{k!l!} \sum _ {\substack{\tau \in S _ {k} \\
\eta \in S _ {l}}}(\operatorname{sign} \tau)(\operatorname{sign} \eta)}\cdot\varepsilon^{J}(E _ {p _ {k+\eta(1)}}, \dots, E _ {p _ {k+\eta(l)}}) \\
={} & \frac{1}{k!} \sum _ {\tau \in S _ {k}}(\operatorname{sign} \tau) \varepsilon^{I}(E _ {p _ {\tau(1)}}, \dots, E _ {p _ {\tau(k)}}) \\
& \cdot\frac{1}{l!} \sum _ {\eta \in S _ {l}}(\operatorname{sign} \eta) \varepsilon^{J}(E _ {p _ {k+\eta(1)}}, \dots, E _ {p _ {k+\eta(l)}}) \\
={} & (\operatorname{Alt} \varepsilon^{I})(E _ {p _ {1}}, \dots, E _ {p _ {k}})(\operatorname{Alt} \varepsilon^{J})(E _ {p _ {k+1}}, \ldots, E _ {p _ {k+l}}) \\
={} & \varepsilon^{I}(E _ {p _ {1}}, \dots, E _ {p _ {k}}) \varepsilon^{J}(E _ {p _ {k+1}}, \dots, E _ {p _ {k+l}})=1 .
\end{align*}Here we have used the fact that the alternation of an alternating tensor is the tensor itself, which can be easily checked by definition.
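For instance, with k=l=1,
\[
\varepsilon^1\wedge\varepsilon^3=\frac{2!}{1!\,1!}\operatorname{Alt}(\varepsilon^1\otimes\varepsilon^3)=\varepsilon^1\otimes\varepsilon^3-\varepsilon^3\otimes\varepsilon^1=\varepsilon^{13},
\]matching the earlier computation \varepsilon^{13}(v,w)=v^1w^3-w^1v^3.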
The following propositions give some more properties of wedge products.
- (Associativity) For alternating tensors \omega,\eta,\xi,
\[\omega\wedge(\eta\wedge\xi)=(\omega\wedge\eta)\wedge\xi.\]
- (Anticommutativity) For \omega\in\Lambda^k(V^\ast) and \eta\in\Lambda^l(V^\ast),
\[
\omega\wedge\eta=(-1)^{kl}\eta\wedge\omega.
\]
- (Proposition 11) For covectors \omega^1,\dots,\omega^k and vectors v _ 1,\dots,v _ k,
\[
\omega^1\wedge\dots\wedge\omega^k(v _ 1,\dots,v _ k)=\det(\omega^j(v _ i)).
\]
We now prove these properties. Associativity holds because the wedge product is bilinear and because, for elementary alternating covectors,
\[
(\varepsilon^{I} \wedge \varepsilon^{J}) \wedge \varepsilon^{K}=\varepsilon^{I J} \wedge \varepsilon^{K}=\varepsilon^{I J K}=\varepsilon^{I} \wedge \varepsilon^{J K}=\varepsilon^{I} \wedge(\varepsilon^{J} \wedge \varepsilon^{K}).
\]Anticommutativity holds for the same reason: the wedge product is bilinear and, for elementary alternating covectors,
\[
\varepsilon^I\wedge\varepsilon^J=\varepsilon^{IJ}=(-1)^{kl}\varepsilon^{JI}=(-1)^{kl}\varepsilon^J\wedge\varepsilon^I.
\]Finally, suppose all the covectors \omega^1,\dots,\omega^k are basis covectors. Then by Proposition 8 and Definition 3, \omega^1\wedge\dots\wedge\omega^k(v _ 1,\dots,v _ k)=\det(\omega^j(v _ i)) holds. Since both sides are multilinear in (\omega^1,\dots,\omega^k), the general case follows.
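Two consequences of anticommutativity are worth noting. For a covector \omega (so k=l=1), \omega\wedge\omega=0. For even degrees, however, a form may have a nonzero square; for instance, on \mathbb R^4,
\[
(\varepsilon^{12}+\varepsilon^{34})\wedge(\varepsilon^{12}+\varepsilon^{34})=\varepsilon^{1234}+\varepsilon^{3412}=2\, \varepsilon^{1234}\neq0.
\]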
2. Differential forms on manifolds
We can now turn our attention back to a smooth manifold M. For every point p\in M, we have the space \Lambda^k(T _ p^\ast M) of alternating k-tensors on the tangent space T _ pM. Taking all such tensors over all points, we denote the set
\[
\Lambda^kT^\ast M=\coprod _ {p\in M}\Lambda^k(T _ p^\ast M).
\]
We are now ready for the definition of differential forms: a differential k-form on M is a section of \Lambda^kT^\ast M, that is, a map \omega assigning to each point p\in M a k-covector \omega _ p\in\Lambda^k(T _ p^\ast M).
Conventionally, a 0-tensor is just a real number, as it can be viewed as a function depending multilinearly on no vectors! Therefore a 0-form is simply a real-valued function on M.
The wedge product of two differential forms \omega and \eta is defined pointwise: (\omega\wedge\eta) _ p=\omega _ p\wedge\eta _ p. It follows that the wedge product of a k-form and an l-form is a (k+l)-form. If f is a 0-form and \omega is a k-form, the wedge product f\wedge\omega is just f\omega.
In any smooth chart, let (\mathrm dx^i) be the dual basis for (\frac{\partial}{\partial x^i}). By Proposition 5, a k-form locally has the form
\[
\omega=\sum\nolimits'\omega _ I\, \mathrm dx^{i _ 1}\wedge\cdots\wedge\mathrm dx^{i _ k}:=\sum\nolimits'\omega _ I\, \mathrm dx^I,
\]in which \omega _ I are the coefficient functions and \sum' still denotes the sum over increasing multi-indices I. By Proposition 4 (3), we have
\[
\omega _ I=\omega\Big(\frac{\partial}{\partial x^{i _ 1}},\dots,\frac{\partial}{\partial x^{i _ k}}\Big).
\]
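For example, a 2-form on \mathbb R^3 is locally written
\[
\omega=\omega _ {12}\, \mathrm dx^1\wedge\mathrm dx^2+\omega _ {13}\, \mathrm dx^1\wedge\mathrm dx^3+\omega _ {23}\, \mathrm dx^2\wedge\mathrm dx^3,
\]with \omega _ {ij}=\omega\big(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\big).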
Pullbacks of differential forms
We have defined the pullback of covector fields. Similarly, we can define the pullback of a differential form: for a smooth map F:M\to N and a k-form \omega on N, the pullback F^\ast\omega is the k-form on M defined by
\[
(F^\ast\omega) _ p(v _ 1,\dots,v _ k)=\omega _ {F(p)}(\mathrm dF _ p(v _ 1),\dots,\mathrm dF _ p(v _ k)).
\]
This F^\ast is a linear map, just like the pullback of a covector field. Moreover,
\[
F^\ast(\omega\wedge\eta)=(F^\ast\omega)\wedge(F^\ast\eta),
\]and in any smooth chart (y^i) for the codomain,
\begin{align*}
& F^\ast\Big(\sum\nolimits'\omega _ I\, \mathrm dy^{i _ 1}\wedge\cdots\wedge\mathrm dy^{i _ k}\Big) \\
={} & \sum\nolimits'(\omega _ I\circ F)\mathrm d(y^{i _ 1}\circ F)\wedge\cdots\wedge\mathrm d(y^{i _ k}\circ F).
\end{align*}
The first identity is proved directly from the definition. The second is a generalization of Proposition 6 in Chapter 5, and its proof is quite similar.
As in the case of covector fields, we can calculate the pullback of a differential form in a quite simple way. For example, define a map F(u,v)=(u,v,u^2-v^2) from \mathbb R^2 to \mathbb R^3 and a 2-form \omega=y\, \mathrm dx\wedge\mathrm dz on \mathbb R^3. The pullback is F^\ast\omega=v\, \mathrm du\wedge\mathrm d(u^2-v^2)=\dots
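Carrying the computation through, using \mathrm d(u^2-v^2)=2u\, \mathrm du-2v\, \mathrm dv and \mathrm du\wedge\mathrm du=0:
\[
F^\ast\omega=v\, \mathrm du\wedge(2u\, \mathrm du-2v\, \mathrm dv)=-2v^2\, \mathrm du\wedge\mathrm dv.
\]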
Consider the special case in which the map is the identity map. For a differential form \omega, it is easy to see (\mathbf 1 _ M)^\ast\omega=\omega, indicating (\mathbf 1 _ M)^\ast=\mathbf 1. If we use different coordinate maps in the domain and codomain, we obtain the transformation of differential forms between different charts. To be more specific, let (\mathbb R^2,(x,y)) and (\mathbb R^2,(r,\theta)) be two charts of \mathbb R^2. In these coordinates the identity map is given by x=r\cos \theta and y=r\sin \theta. For the differential form \omega=\mathrm dx\wedge\mathrm dy in the codomain,
\[
\mathrm dx\wedge\mathrm dy=\omega=(\mathbf 1 _ M)^\ast\omega=\mathrm d(r\cos\theta)\wedge\mathrm d(r\sin\theta)=r\, \mathrm dr\wedge\mathrm d\theta.
\]
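The last equality is the expansion
\[
(\cos\theta\, \mathrm dr-r\sin\theta\, \mathrm d\theta)\wedge(\sin\theta\, \mathrm dr+r\cos\theta\, \mathrm d\theta)=r(\cos^2\theta+\sin^2\theta)\, \mathrm dr\wedge\mathrm d\theta,
\]in which the \mathrm dr\wedge\mathrm dr and \mathrm d\theta\wedge\mathrm d\theta terms vanish and \mathrm d\theta\wedge\mathrm dr=-\mathrm dr\wedge\mathrm d\theta.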
In this example we calculated the transformation of n-forms on an n-dimensional manifold. For such forms we have the following pullback formula: if F is a smooth map with coordinates (x^i) in the domain and (y^i) in the codomain, then
\[
F^\ast(u\, \mathrm dy^{1}\wedge\cdots\wedge\mathrm dy^{n})=(u\circ F)(\det JF)\, \mathrm dx^{1}\wedge\cdots\wedge\mathrm dx^{n},
\]where JF represents the Jacobian matrix of F in these coordinates.
As a corollary, on the overlap of two charts (U,(x^i)) and (\widetilde U,(\tilde x^i)),
\[
\mathrm d\tilde x^{1}\wedge\cdots\wedge\mathrm d\tilde x^{n}=\det\Big(\frac{\partial \tilde x^j}{\partial x^i}\Big)\mathrm dx^{1}\wedge\cdots\wedge\mathrm dx^{n}.
\]
Proof. It suffices to show the result holds when evaluated on (\frac{\partial}{\partial x^i}). By the previous proposition and Proposition 11,
\begin{align*}
& F^\ast(u\, \mathrm dy^{1}\wedge\cdots\wedge\mathrm dy^{n})\Big(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\Big) \\
={} & (u\circ F)\, \mathrm dF^{1}\wedge\cdots\wedge\mathrm dF^{n}\Big(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\Big) \\
={} & (u\circ F)\det\Big[\mathrm dF^j\Big(\frac{\partial}{\partial x^i}\Big)\Big]=(u\circ F)\det\Big(\frac{\partial F^j}{\partial x^i}\Big)\\
={}& (u\circ F)\det\Big(\frac{\partial F^j}{\partial x^i}\Big)\, \mathrm dx^{1}\wedge\cdots\wedge\mathrm dx^{n}\Big(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\Big).
\end{align*}
The corollary follows by setting F to be the identity map of U\cap\widetilde U, but using coordinates (x^i) in the domain and (\tilde x^i) in the codomain.
3. Exterior derivatives
The differential of a function is a covector field. In other words, we have defined an operator that takes a 0-form to a 1-form. Now we generalize this notion to differential forms of all degrees, obtaining the exterior derivative, the most important operation on differential forms. It is also a generalization of the gradient, divergence and curl of vector calculus in \mathbb R^3.
We will consider smooth differential forms. Smoothness can be defined in the usual way; however, for simplicity we only consider an equivalent definition: a differential form \omega is smooth if in any chart \omega=\sum'\omega _ I \, \mathrm dx^I has smooth coefficient functions \omega _ I. The set of all smooth k-forms is denoted by \Omega^k(M).
It is helpful to consider exterior derivatives in Euclidean space first. For a smooth k-form \omega=\sum'\omega _ I\, \mathrm dx^I on an open subset of \mathbb R^n, we define
\[
\mathrm d\Big(\sum\nolimits'\omega _ I\, \mathrm dx^I\Big)=\sum\nolimits'\mathrm d\omega _ I\wedge\mathrm dx^I,
\]where \mathrm d\omega _ I is the differential of \omega _ I that we have defined.
Explicitly, the definition is
\[
\mathrm d\Big(\sum\nolimits'\omega _ I\, \mathrm dx^I\Big)=\sum _ {i _ 1<\dots<i _ k}\sum _ {j=1}^n\frac{\partial\omega _ I}{\partial x^j}\, \mathrm dx^j\wedge\mathrm dx^{i _ 1}\wedge\cdots\wedge\mathrm dx^{i _ k}.
\]
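For instance, for a 1-form \omega=P\, \mathrm dx+Q\, \mathrm dy+R\, \mathrm dz on \mathbb R^3, collecting terms gives
\begin{align*}
\mathrm d\omega={}&\Big(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\Big)\mathrm dx\wedge\mathrm dy+\Big(\frac{\partial R}{\partial x}-\frac{\partial P}{\partial z}\Big)\mathrm dx\wedge\mathrm dz\\
&+\Big(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\Big)\mathrm dy\wedge\mathrm dz,
\end{align*}whose coefficients are, up to sign, the components of the curl of the vector field (P,Q,R).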
The exterior derivative is defined in such a way that it has some useful properties (Proposition 19):
- The map \mathrm d is linear.
- If \omega is a k-form and \eta is an l-form, then
\[
\mathrm d(\omega\wedge\eta)=\mathrm d\omega \wedge\eta+(-1)^k\omega\wedge\mathrm d\eta.
\] - The composition \mathrm d\circ\mathrm d=0.
Moreover (Proposition 20), for a smooth map F between Euclidean spaces and a smooth form \omega,
\[
F^\ast(\mathrm d\omega)=\mathrm d(F^\ast\omega).
\]
Proof. For Proposition 19 (2), by linearity it suffices to consider terms of the form \omega=u\, \mathrm dx^I and \eta=v\, \mathrm dx^J. It is straightforward to verify that \mathrm d(u\, \mathrm dx^I)=\mathrm du\wedge\mathrm dx^I for any multi-index I, not merely increasing ones: interchanging a pair of the x^i multiplies both sides by (-1), which reduces the claim to the cases in which the multi-index is increasing or contains a repeated index. Hence,
\begin{align*}
\mathrm d(\omega \wedge \eta) & =\mathrm d ( (u\,\mathrm d x^{I} ) \wedge (v\,\mathrm d x^{J} ) )
=\mathrm d (u v\,\mathrm d x^{I} \wedge\mathrm d x^{J} ) \\
& =(v \, \mathrm d u+u\, \mathrm d v) \wedge\mathrm d x^{I} \wedge\mathrm d x^{J} \\
& = (\mathrm d u \wedge\mathrm d x^{I} ) \wedge (v\, \mathrm d x^{J} )+(-1)^{k} (u\,\mathrm d x^{I} ) \wedge (\mathrm d v \wedge\mathrm d x^{J} ) \\
& =\mathrm d \omega \wedge \eta+(-1)^{k} \omega \wedge\mathrm d \eta.
\end{align*}For Proposition 19 (3), we first verify it for a 0-form. Let u be a function; then
\begin{align*}
\mathrm d(\mathrm d u) & =\mathrm d\Big(\frac{\partial u}{\partial x^{j}}\, \mathrm d x^{j}\Big)=\frac{\partial^{2} u}{\partial x^{i} \partial x^{j}}\, \mathrm d x^{i} \wedge\mathrm d x^{j} \\
& =\sum _ {i<j}\Big(\frac{\partial^{2} u}{\partial x^{i} \partial x^{j}}-\frac{\partial^{2} u}{\partial x^{j} \partial x^{i}}\Big)\,\mathrm d x^{i} \wedge\mathrm d x^{j}=0 .
\end{align*}The general case follows immediately from the definition, the symmetry of second derivatives, and the anticommutativity of the wedge product.
For Proposition 20, it suffices to consider \omega=u\, \mathrm dx^{i _ 1}\wedge\cdots\wedge\mathrm dx^{i _ k}. We calculate both sides and compare:
\begin{align*}
F^{\ast}(\mathrm d(u\, \mathrm dx^{i _ 1}\wedge\cdots\wedge\mathrm dx^{i _ k}))&=F^\ast(\mathrm du\wedge \mathrm dx^{i _ 1}\wedge\cdots\wedge \mathrm dx^{i _ k})\\
&=\mathrm d(u\circ F)\wedge \mathrm d(x^{i _ {1}}\circ F)\wedge\cdots\wedge \mathrm d(x^{i _ {k}}\circ F),\\
\mathrm d(F^{\ast}(u\, \mathrm dx^{i _ 1}\wedge\cdots\wedge\mathrm dx^{i _ k}))&=\mathrm d((u\circ F)\, \mathrm d(x^{i _ {1}}\circ F)\wedge\cdots\wedge \mathrm d(x^{i _ {k}}\circ F))\\
&=\mathrm d(u\circ F)\wedge \mathrm d(x^{i _ {1}}\circ F)\wedge\cdots\wedge \mathrm d(x^{i _ {k}}\circ F),
\end{align*}so they are equal.
Exterior derivatives on manifolds
We can now consider the case of manifolds. Let M be a manifold and (U,\varphi) be a chart. Then for any \omega\in\Omega^k(M), the pullback \varphi^{-1\ast}\omega is a k-form on an open subset of \mathbb R^n, for which the exterior derivative has just been defined. This suggests a definition using pullbacks. In order to define exterior derivatives on a manifold, we need one more lemma on pullbacks: for smooth maps F:M\to N and G:N\to P,
\[
(G\circ F)^\ast=F^\ast\circ G^\ast.
\]
Proof. Let p\in M and v _ 1,\dots,v _ k\in T _ pM. We need to show ((G\circ F)^\ast\omega) _ p(v _ 1,\dots,v _ k)=((F^\ast\circ G^\ast)\omega) _ p(v _ 1,\dots,v _ k). For simplicity of notation, we omit the subscripts p, F(p) and G(F(p)), and abbreviate k-tuples such as (v _ 1,\dots,v _ k) by (v _ i). Then by the chain rule for differentials,
\[
((G\circ F)^\ast\omega)(v _ i)=\omega(\mathrm d(G\circ F)(v _ i))=\omega((\mathrm dG\circ\mathrm dF)(v _ i)).
\]On the other hand,
\[
((F^\ast\circ G^\ast)\omega)(v _ i)=(F^\ast(G^\ast\omega))(v _ i)=(G^\ast \omega)(\mathrm dF(v _ i))=\omega(\mathrm dG(\mathrm dF(v _ i))).
\]The two sides agree, so the lemma is proved.
We want to define \mathrm d\omega in terms of a coordinate map. By Proposition 20 and the lemma, a natural definition is
\[
\mathrm d\omega=\varphi^\ast\mathrm d(\varphi^{-1\ast}\omega).
\]To make this well-defined, we have to ensure it is independent of the choice of chart. Let (V,\psi) be another chart. Then \varphi\circ\psi^{-1} is a smooth map between open subsets of \mathbb R^n, so by Proposition 20
\begin{align*}
(\varphi\circ\psi^{-1})^\ast\, \mathrm d(\varphi^{-1\ast}\omega) & =\mathrm d((\varphi\circ\psi^{-1})^\ast(\varphi^{-1\ast}\omega)) \\
(\psi^{-1\ast}\circ\varphi^\ast)\, \mathrm d(\varphi^{-1\ast}\omega) & =\mathrm d((\psi^{-1\ast}\circ\varphi^\ast)(\varphi^{-1\ast}\omega)) \\
\varphi^\ast\, \mathrm d(\varphi^{-1\ast}\omega) & =\psi^\ast\mathrm d(\psi^{-1\ast}\omega).
\end{align*}where the last line is obtained by simplifying the right-hand side to \mathrm d(\psi^{-1\ast}\omega) and applying \psi^\ast to both sides. Thus \mathrm d\omega is well-defined.
Such an operator is unique in the following sense, and we take this characterization to be our definition of the exterior derivative: there exists a unique operator \mathrm d:\Omega^k(M)\to\Omega^{k+1}(M) (for each k) satisfying the following properties.
- The map \mathrm d is linear.
- If \omega\in \Omega^k(M) and \eta\in\Omega^l(M), then
\[
\mathrm d(\omega\wedge\eta)=\mathrm d\omega \wedge\eta+(-1)^k\omega\wedge\mathrm d\eta.
\] - The composition \mathrm d\circ\mathrm d=0.
- For f\in\Omega^0(M)=C^\infty(M), \mathrm df is the differential of f, that is, for a smooth vector field X, \mathrm df(X)=Xf.
Moreover, in any smooth chart, \mathrm d is given by the coordinate formula
\[
\mathrm d\Big(\sum\nolimits'\omega _ I\, \mathrm dx^I\Big)=\sum\nolimits'\mathrm d\omega _ I\wedge\mathrm dx^I.
\]
Proof. It is straightforward to verify the operator we defined by \mathrm d\omega=\varphi^\ast\mathrm d(\varphi^{-1\ast}\omega) is an exterior derivative. The existence is proved.
We now prove uniqueness. Suppose D is any exterior derivative, i.e., any operator satisfying these properties; we show that D coincides with the \mathrm d we have defined. Clearly, for a vector field X and any f\in\Omega^0(M), Df(X)=Xf=\mathrm df(X) by the last property.
It can be seen that D is a "local operator", that is, D\omega is determined locally: if \omega _ 1=\omega _ 2 on a neighborhood V of p, then D\omega _ 1| _ p=D\omega _ 2| _ p. Specifically, let \eta=\omega _ 1-\omega _ 2 and let \psi be a bump function that is equal to 1 on some neighborhood of p and supported in V. Then \psi\eta=0, implying 0=D(\psi\eta)=D\psi\wedge\eta+\psi D\eta. Evaluating this at p and using \psi(p)=1 together with D\psi| _ p=\mathrm d\psi| _ p=0 (since \psi is constant near p), we obtain D\omega _ 1| _ p=D\omega _ 2| _ p; applying the argument at each point of V gives D\omega _ 1=D\omega _ 2 on V.
Let (U,(x^i)) be a chart containing p and write \omega=\sum'\omega _ I\, \mathrm d x^I on U. For any function f we have D\, \mathrm df=D(Df)=(D\circ D)f=0.
By means of a bump function we can construct global smooth functions \tilde\omega _ I and \tilde{x}^i on M that agree with \omega _ I and x^i in a neighborhood of p. Consider the k-form \tilde\omega=\sum'\tilde\omega _ I\, \mathrm d\tilde x^I. Using the facts that D\, \mathrm d\tilde x^I=0 (by the product rule and D\, \mathrm d\tilde x^i=0) and that D and \mathrm d are local operators, by evaluating at p we have
\begin{align*}
(D\omega) _ {p}&=(D\tilde{\omega}) _ p=\Big(D\sum\nolimits'\tilde{\omega} _ I\, \mathrm d\tilde{x}^I\Big) _ p\\
&=\Big(\sum\nolimits' D\tilde{\omega} _ I\wedge \, \mathrm d\tilde{x}^I+\sum\nolimits'\tilde{\omega} _ I\wedge D\, \mathrm d\tilde{x}^I\Big) _ p\\
&=\Big(\sum\nolimits' D{\omega} _ I\wedge \, \mathrm d{x}^I\Big) _ p=\Big(\sum\nolimits' \mathrm d{\omega} _ I\wedge \, \mathrm d{x}^I\Big) _ p \\
&=(\mathrm d\omega) _ p.
\end{align*}Thus the uniqueness is proved, and the exterior derivative is clearly determined by
\[
\mathrm d\Big(\sum\nolimits'\omega _ I\, \mathrm dx^I\Big)=\sum\nolimits'\mathrm d\omega _ I\wedge\mathrm dx^I.
\]
The naturality of exterior derivatives under pullbacks also holds on manifolds: if F:M\to N is a smooth map and \omega\in\Omega^k(N), then
\[
F^\ast(\mathrm d\omega)=\mathrm d(F^\ast\omega).
\]
In order to prove this proposition, we make use of Proposition 20. Let (U,\varphi) and (V,\psi) be charts for M and N respectively. Then
\begin{align*}
F^{\ast }(\mathrm d\omega)&=F^\ast \psi^\ast \, \mathrm d (\psi^{-1\ast }\omega )=\varphi^\ast \circ (\psi\circ F\circ\varphi^{-1} )^\ast \, \mathrm d (\psi^{-1\ast }\omega )
\\&=\varphi^\ast \, \mathrm d ( (\psi\circ F\circ\varphi^{-1} )^\ast \psi^{-1\ast }\omega )
=\varphi^\ast \, \mathrm d (\varphi^{-1\ast }F^\ast \omega )\\
&=\mathrm d(F^\ast \omega).
\end{align*}
There are some terminologies used customarily: \omega\in\Omega^k(M) is said to be closed if \mathrm d\omega=0, and exact if there is a (k-1)-form \eta such that \omega=\mathrm d\eta. Every exact form is closed, since \mathrm d^2=0.
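A standard example shows the converse fails: on \mathbb R^2\setminus\{0\}, the 1-form
\[
\omega=\frac{x\, \mathrm dy-y\, \mathrm dx}{x^2+y^2}
\]is closed but not exact; locally it is \mathrm d\theta for the polar angle \theta, which is not globally defined.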
Finally, it is worth noting that there is an alternative approach: an invariant formula for \mathrm d that exhibits its existence, uniqueness and properties without coordinates. However, it is inconvenient for computation and to work with, so we will not cover it in detail here. The formula for 1-forms is simple and can be stated here: for smooth vector fields X and Y,
\[
\mathrm d\omega(X,Y)=X(\omega(Y))-Y(\omega(X))-\omega([X,Y]).
\]
It suffices to verify the case \omega=u\, \mathrm dv for smooth functions u and v. On the one hand,
\begin{align*}
\mathrm d\omega(X,Y) & =\mathrm du\wedge\mathrm dv(X,Y)=\mathrm du(X)\mathrm dv(Y)-\mathrm dv(X)\mathrm du(Y) \\
& =XuYv-XvYu.
\end{align*}On the other hand,
\begin{align*}
& X(\omega(Y))-Y(\omega(X))-\omega([X,Y])\\={}&X(u\, \mathrm dv(Y))-Y(u\, \mathrm dv(X))-u\, \mathrm dv([X,Y]) \\
={} & X(uYv)-Y(uXv)-u[X,Y]v \\
={} & (XuYv+uXYv)-(YuXv+uYXv)-u(XYv-YXv).
\end{align*}Therefore they are equal.