104

Actually I have several related questions, not worth opening different threads:

  1. What is the exterior derivative intuitively? What is its geometric meaning? One possible answer I know is that it is dual to the boundary operator of singular homology. However, I would prefer a more direct interpretation.

  2. What is a conceptually nice definition of the exterior derivative?

Jan Weidner
  • 12,846
  • 5
    Related: http://mathoverflow.net/questions/10574/how-do-i-make-the-conceptual-transition-from-multivariable-calculus-to-differenti – Qiaochu Yuan Apr 11 '10 at 19:11

11 Answers

85

Many years back I wrote something about an intuitive way to look at differential forms here. In particular, figure 4 illustrates Stokes' theorem in a way that generalises to higher dimensions. Note that these are just sketches for intuition, and I've found them useful for illustrating various fields arising in physics, but they're not anything rigorous. They're also, in some sense, dual to the diagrams in Misner, Thorne and Wheeler. (There are some errors in my document, but I lost the source code many years ago...)

Illustration of Stokes's Theorem from linked document.

Dan Piponi
  • 8,086
  • Wow, this is just awesome :) I thought I already had good pictures in my mind of how a differential form looks, but I was wrong! – Jan Weidner Apr 11 '10 at 19:49
  • 7
    Very nice indeed. I strongly recommend reading the linked file. – André Henriques Feb 19 '11 at 22:48
  • 5
    Using these sorts of pictures, how do you see that the derivative of $dx + y\,dz$ is $dy \wedge dz$? Can you even draw a picture for $dx + y\,dz$? These pictures seem to characterize differential forms as "foliations of varying density," but I'm pretty sure that's only true for forms that are locally a function times an exact form, and there are lots of forms—for example, $dx + y\,dz$—that don't look like this. – Vectornaut Nov 06 '14 at 01:52
  • 3
    By the way, in case you haven't seen it already, Gabriel Weinreich's Geometrical Vectors introduces differential forms and the exterior derivative from the perspective you describe. – Vectornaut Nov 06 '14 at 01:57
  • 1
    @Vectornaut We could think of $dx + y\,dz$ as a formal sum of your "foliations", where we have a density of planes parallel to the $x$-axis and simultaneously a density of (half-)planes parallel to the $z$-axis. It would be somewhat analogous to how we can talk about the boundary of a linear combination of overlapping simplices in homology. – epimorphic Jan 07 '16 at 20:58
  • 5
    The visualizations you refer to in Misner, Thorne, and Wheeler were originated by J.A. Schouten, and first presented in Ricci-Calculus: An Introduction to Tensor Analysis and its Geometrical Applications http://www.amazon.com/Ricci-Calculus-Introduction-Applications-mathematischen-Wissenschaften/dp/3540018050 . William L. Burke has given more concise and modern presentations, e.g., in Applied Differential Geometry. –  Jan 18 '16 at 16:11
  • 1
    The link seems to have rotted away. Does anyone still have a link to the file? – Asvin Sep 05 '17 at 06:37
  • 2
    @Asvin I've made a new link – Dan Piponi Sep 05 '17 at 13:52
67

I think that the best explanation is in Arnold's book "Mathematical methods of classical mechanics". Here it is: after fixing a chart on a manifold, one can say that the value of $d\omega$ (where $\omega$ is an $n$-form) on tangent vectors $(\xi_1, \dots,\xi_{n+1})$ at a point $x_0$ equals the coefficient of the $(n+1)$-st order part of the function $F(\varepsilon)=\int_{\partial V(\varepsilon)} \omega$, where $V(\varepsilon)$ is a "curvilinear parallelepiped" with vertices $x_0, x_0+\varepsilon \xi_1, \dots, x_0+\varepsilon \xi_{n+1}$: $F(\varepsilon)=(d\omega)(x_0)(\xi_1, \dots,\xi_{n+1})\,\varepsilon^{n+1}+o(\varepsilon^{n+1})$.
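A numerical sanity check of this limit (my own sketch, not from Arnold's book): take $\omega = x\,dy$ on $\mathbb{R}^2$, so $d\omega = dx\wedge dy$, and let $V(\varepsilon)$ be the flat parallelogram spanned by $\varepsilon\xi_1$ and $\varepsilon\xi_2$; then $F(\varepsilon)/\varepsilon^2$ should converge to $d\omega(x_0)(\xi_1,\xi_2)=\det(\xi_1\ \xi_2)$.

```python
import numpy as np

def line_integral_x_dy(a, b):
    # Exact integral of the 1-form x dy along the straight segment a -> b.
    return (b[1] - a[1]) * (a[0] + b[0]) / 2.0

def boundary_integral(P, u, v):
    # Integral of x dy over the (positively oriented) boundary of the
    # parallelogram with vertices P, P+u, P+u+v, P+v.
    verts = [P, P + u, P + u + v, P + v, P]
    return sum(line_integral_x_dy(verts[i], verts[i + 1]) for i in range(4))

P = np.array([0.3, -1.2])                       # base point x_0
xi1, xi2 = np.array([1.0, 0.5]), np.array([-0.2, 2.0])

for eps in (1.0, 0.1, 0.01):
    F = boundary_integral(P, eps * xi1, eps * xi2)
    print(eps, F / eps**2)   # -> det(xi1 xi2) = 2.1 (up to rounding) each time
```

In fact, by Green's theorem $\oint x\,dy$ is exactly the signed area of the parallelogram, so for this particular $\omega$ the ratio is constant in $\varepsilon$ rather than merely convergent.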

Petya
  • 4,686
  • This is a good explanation too! – j.c. Apr 11 '10 at 20:29
  • 24
    In my opinion this is really "the" interpretation. Note that it treats functions and forms on an equal footing: the definition of the derivative of a function is exactly given by the above. Proving that $d\omega$ is in fact a form from this definition is also quite enlightening. Pedagogically, it is also great. It basically says "$d\omega$ is what makes stokes theorem true on 'infinitesimal' parallelepipeds". – Steven Gubkin Dec 20 '13 at 22:13
  • 3
    It is an extension of the definitions in vector calculus, such as $\mathrm{div}F(p)=\lim\frac1{\pi r^2}\int_{\partial D_r(p)}F\cdot n$. – timur Nov 14 '19 at 22:04
45

For 1-forms, you can get some intuition for exterior differentiation from how it shows up in Frobenius's theorem, which states that a distribution $D$ is integrable if and only if the ideal of differential forms annihilated by it is closed under exterior differentiation:

Let $\alpha$ be a 1-form on $M$. If $\alpha$ does not vanish, then ker $\alpha_x$ is a hyperplane in the tangent space to $M$ at $x$. Thus ker $\alpha$ is a hyperplane field in $TM$ (and is an example of a distribution). At every point in M, you should visualize a hyperplane passing through that point.

Frobenius's theorem gives conditions on whether this hyperplane field is integrable, that is, if one can fit the planes together to form a foliation by hypersurfaces in $M$. For a hyperplane field defined by a single 1-form one can fit the planes together if and only if $d\alpha$ mod $\alpha$ is zero. This is usually expressed by the vanishing of $\alpha\wedge d\alpha$.

(In the general case, where instead of $\alpha$ we have a set of linearly independent 1-forms $\{\alpha_j\}_{j=1}^r$, the ideal in the algebra of differential forms on $M$ generated by $\{\alpha_j\}_{j=1}^r$ must be closed under exterior differentiation; equivalently $d\alpha_j\wedge\alpha_1\wedge\cdots\wedge\alpha_r=0$ for all $j$).

Two simple examples:

(1) if $\alpha=df$ then the field of hyperplanes ker $\alpha$ is actually tangent to the hypersurfaces $f=$const (and of course $d\alpha=0$).

(2) If $\alpha = g df$ for some non-vanishing function $g$, e.g. $\alpha=ydx$ in the upper half plane of $\mathbb{R}^2$, then this is just as good, since ker $\alpha$ is still tangent to $f=$const. Note that $d\alpha=dg\wedge df=(dg/g)\wedge\alpha$, which vanishes mod $\alpha$ and thus $\alpha\wedge d\alpha=0$.

Hence $\alpha\wedge d\alpha$, or $d\alpha$ mod $\alpha$ roughly measures how far this hyperplane field defined by ker $\alpha$ is from being tangent to hypersurfaces.

(I got the ideas from Appendix B of Ivey and Landsberg's book Cartan for Beginners. Thanks to Marcos Cossarini and Ben McKay for pointing out in the comments that the original version of this was wrong!)

Here's an example of a hyperplane field which is not tangent to any hypersurfaces. $\alpha = dz-y dx$ on $\mathbb R^3$ and $\alpha\wedge d\alpha = dz\wedge dx \wedge dy$:
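To make such computations easy to experiment with, here is a small sympy sketch of mine (not part of this answer): a form on $\mathbb{R}^3$ is stored as a dictionary from increasing index tuples to coefficients, and $d$ and $\wedge$ are implemented straight from the coordinate formulas. It confirms $\alpha\wedge d\alpha = dz\wedge dx\wedge dy = dx\wedge dy\wedge dz \neq 0$ for $\alpha = dz - y\,dx$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A k-form on R^3 is a dict {increasing index tuple: coefficient};
# e.g. the contact form dz - y dx is {(2,): 1, (0,): -y}.

def perm_sign(seq):
    """Sign of the permutation that sorts the (distinct) entries of seq."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def d(form):
    """Exterior derivative, term by term: d(f dx_I) = sum_i (df/dx_i) dx_i ^ dx_I."""
    out = {}
    for idx, f in form.items():
        for i, xi in enumerate(coords):
            if i in idx:
                continue                      # dx_i ^ dx_I = 0 if i is in I
            key = tuple(sorted((i,) + idx))
            out[key] = sp.simplify(out.get(key, 0)
                                   + perm_sign((i,) + idx) * sp.diff(f, xi))
    return {k: v for k, v in out.items() if v != 0}

def wedge(a, b):
    """Wedge product in the same dict encoding."""
    out = {}
    for ia, fa in a.items():
        for ib, fb in b.items():
            if set(ia) & set(ib):
                continue                      # a repeated dx_i gives zero
            key = tuple(sorted(ia + ib))
            out[key] = sp.simplify(out.get(key, 0) + perm_sign(ia + ib) * fa * fb)
    return {k: v for k, v in out.items() if v != 0}

alpha = {(2,): sp.Integer(1), (0,): -y}       # alpha = dz - y dx
print(d(alpha))                # {(0, 1): 1}:     d(alpha) = dx ^ dy
print(wedge(alpha, d(alpha)))  # {(0, 1, 2): 1}:  alpha ^ d(alpha) = dx ^ dy ^ dz
print(d(d(alpha)))             # {}:              d(d(alpha)) = 0
```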

standard contact structure on R^3

j.c.
  • 13,490
  • Great answer! I wish I could accept both, your answer and sigfpe's. These two answers also fit nicely together. – Jan Weidner Apr 11 '10 at 20:28
  • 1
    I wonder if there's a way to "see" the exterior derivative for k-forms with k>1 along these lines, see e.g. t3suji's comment here http://mathoverflow.net/questions/12266/frobenius-theorem-for-subbundle-of-low-regularity/12269#12269 – j.c. Apr 11 '10 at 20:34
  • 2
    If M=R^2, the 1-forms w=dx and q=ydx determine the same (integrable) distribution of hyperplanes, but dq is not zero. Perhaps the proposition should be weaker: a 1-form w determines an integrable distribution iff a non-vanishing real-valued function f exists such that d(fw)=0. (Is this true?) – Marcos Cossarini Jul 07 '10 at 03:19
  • 3
    Not quite right. Integrability of the kernel of $\alpha$ is measured by $\alpha \wedge d\alpha$, not by $d\alpha$. – Ben McKay Feb 21 '13 at 21:28
  • 4
    After 5 and a half years, I've finally fixed this. Thanks! – j.c. Sep 23 '15 at 13:19
  • Cool answer, this hyperplane story is exactly what's going on with the Pfaffian system of equations. This leads to Frobenius' theorem in terms of forms. Note that we also can state Frobenius theorem in a dual form in terms of vector fields and the Poisson bracket. – Rachid Atmai Nov 18 '20 at 18:18
36

The exterior derivative is the unique (sequence of) linear maps $d: \mathcal{A}^p (M) \to \mathcal{A}^{p+1}(M)$ such that the following axioms hold:

  1. for a function $f$, $df$ is the total differential.
  2. For any function $f$ and any differential form $a$, the Leibniz rule $d(fa)= df \wedge a + f da$ holds.
  3. For any diffeomorphism $\phi: M \to N$, you have $\phi^{\ast} \circ d = d \circ \phi^{\ast}$.

I think that axiom 3 is more natural, or at least easier to motivate, than the usual $dd=0$. But in the presence of the other axioms the two properties are equivalent.

Proof (of uniqueness): Axiom 2 implies locality, i.e. the value of $d a$ at a point $x \in M$ depends only on the values of $a$ in a neighborhood of $x$. This, together with axiom 3, shows that it is enough to consider $M =\mathbb{R}^n$.

The group $\mathbb{R}^n$ acts by translations on $\mathbb{R}^n$. By axiom 3, for any translation-invariant form $a$ on $\mathbb{R}^n$, the form $da$ is again translation-invariant.

On the other hand, each nonzero $\lambda \in \mathbb{R}$ gives rise to the diffeomorphism $h_{\lambda}:x \mapsto \lambda x$ of $\mathbb{R}^n$. It is easy to check that it acts on translation-invariant $p$-forms by multiplication with $\lambda^p$. Thus for any translation-invariant $p$-form $a$, you get

$$\lambda^p d a = d (\lambda^p a) = d (h_{\lambda}^{\ast} a ) = h_{\lambda}^{\ast} d a = \lambda^{p+1} da,$$

which implies that any translation-invariant form is closed. Finally, note that any $p$-form on $\mathbb{R}^n$ can be written as a linear combination of translation-invariant forms, with coefficients in $C^{\infty}(\mathbb{R}^n)$ (a basis for the translation-invariant $p$-forms is formed by the usual elements $dx_{i_1} \wedge \ldots \wedge dx_{i_p}$).

From axioms 1 and 2, you now conclude that $d$ must be the exterior derivative that you knew before. This, of course, implies all the other properties of $d$.
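For what it's worth, axiom 3 is easy to test symbolically on examples. Below is a small sympy sketch of mine (not from this answer): it treats only 1-forms on $\mathbb{R}^2$, and the map $\phi$ and the form $\omega$ are arbitrary choices.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# A 1-form f dx + g dy on R^2 is stored as the pair (f, g); a 2-form
# h dx ^ dy is stored as the single coefficient h.

def d_of_1form(f, g):
    """Coefficient of dx ^ dy in d(f dx + g dy)."""
    return sp.diff(g, x) - sp.diff(f, y)

def pullback_1form(f, g, X, Y):
    """Pullback of f dx + g dy along phi(u, v) = (X, Y), in the (u, v) chart."""
    fs, gs = (e.subs({x: X, y: Y}) for e in (f, g))
    return (fs * sp.diff(X, u) + gs * sp.diff(Y, u),
            fs * sp.diff(X, v) + gs * sp.diff(Y, v))

def pullback_2form(h, X, Y):
    """Pullback of h dx ^ dy: substitute and multiply by the Jacobian."""
    J = sp.diff(X, u) * sp.diff(Y, v) - sp.diff(X, v) * sp.diff(Y, u)
    return h.subs({x: X, y: Y}) * J

def d_in_uv(fu, gv):
    """Same as d_of_1form, but in the (u, v) chart."""
    return sp.diff(gv, u) - sp.diff(fu, v)

# Arbitrary sample map (a diffeomorphism onto its image) and sample 1-form:
X, Y = u, u + sp.exp(v)               # phi(u, v) = (u, u + e^v)
f, g = x * y, sp.sin(x)               # omega = xy dx + sin(x) dy

lhs = d_in_uv(*pullback_1form(f, g, X, Y))    # d(phi* omega)
rhs = pullback_2form(d_of_1form(f, g), X, Y)  # phi*(d omega)
assert sp.simplify(lhs - rhs) == 0            # axiom 3 holds for this example
```

The equality holds for any smooth map, not just diffeomorphisms, by the chain rule; the uniqueness argument above only needs it for diffeomorphisms.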

  • 3
    This is great! It's the first definition of exterior differentiation that ever really made sense to me. I think I'll be using this one from now on. – Vectornaut Sep 14 '12 at 21:05
  • I teach this (among other answers) and I also like to assign as a challenge that, if we just ask for a map $d : \Omega^1 \to (\Omega^1)^{\otimes 2}$ obeying (1)-(3), then it is forced to land in the alternating tensors. (Presumably, a similar argument applies for any $p$, but $p=1$ seems hard enough for the students.) – David E Speyer Sep 05 '17 at 14:19
  • 1
    In fact, property 3 already characterizes the exterior derivative uniquely (up to a scalar multiple), see "Natural operations on differential forms" by Palais. – Kostya_I Jun 09 '20 at 18:29
28

There is the following approach to differential forms (it seems to me that it is not well known, but it is interesting). I'll try to reproduce it here. In this approach the exterior derivative is a very simple operation.

What is a differential $k$-form on a manifold $M$? Consider the $(k+1)$-fold product $V_{k+1}(M)=M\times...\times M$. Denote by $S_k(M)$ the space of all smooth skew-symmetric (with respect to the product structure) real-valued functions on $V_{k+1}(M)$. Obviously any function from $S_k(M)$ vanishes on the diagonal $\Delta=$ {$(x,x,...,x)| x\in M$}.

We define a subspace $L_k(M) \subset S_k(M)$ as follows: $L_k(M)$ consists of all elements of $S_k(M)$ that vanish to order higher than $k$ along $\Delta$. In other words, $f\in L_k(M)$ if and only if $f(I(t))=o(t^k)$ for every smooth path $I(t)$ starting on the diagonal (i.e. with $I(0)\in \Delta$).

Then one can identify the space of all k-forms $\Omega_k(M)$ with a quotient $S_k(M)/L_k(M)$.

What is the exterior derivative? Consider the following operator $\delta: S_k(M)\to S_{k+1}(M)$, $\delta f(x_1,...,x_{k+2}) =\sum (-1)^{i+1} f(x_1,..,\hat{x_i},...,x_{k+2})$. One can check that $\delta (L_k(M))\subset L_{k+1}(M)$ and that the induced operator $\Omega_k(M)=S_k(M)/L_k(M)\to S_{k+1}(M)/L_{k+1}(M)=\Omega_{k+1}(M)$ coincides with the exterior derivative $d$.

I know that approach from B.L. Feigin's lectures on multidimensional calculus (in Russian here: http://ium.mccme.ru/f98/calcman.html).
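The operator $\delta$ above is just the simplicial coboundary, so $\delta\circ\delta = 0$ for the usual telescoping reason, mirroring $d\circ d = 0$ after passing to the quotient. A minimal sketch of mine (with $M=\mathbb{R}$ and an arbitrary test function):

```python
import random

def delta(f, k):
    """Simplicial coboundary of a function of k+1 points:
    (delta f)(x_1, ..., x_{k+2}) = sum_i (-1)^(i+1) f(..., x_i omitted, ...)."""
    def g(*xs):
        assert len(xs) == k + 2
        # 0-based index i carries the 1-based sign (-1)^((i+1)+1) = (-1)^i
        return sum((-1) ** i * f(*(xs[:i] + xs[i + 1:])) for i in range(k + 2))
    return g

# delta(delta(f)) vanishes identically: every term appears twice with
# opposite signs.
f = lambda a, b: a * b - b ** 2        # an arbitrary function of two points
ddf = delta(delta(f, 1), 2)

random.seed(0)
pts = [random.randint(-5, 5) for _ in range(4)]
print(ddf(*pts))   # -> 0
```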

Petya
  • 4,686
  • 3
    This is cool! Am I right in thinking that you can identify the set of k-vectors on M with the set of derivations on S_k, just as you can identify the set of 1-vectors on M with the set of derivations on S_1? – Vectornaut Apr 12 '10 at 02:46
  • 6
    This seems to me analogous to the way algebraic geometers define $\Omega^{1}$ as $\mathcal{I}\Delta/\mathcal{I}\Delta^{2}$, where $\mathcal{I}_\Delta$ is the ideal sheaf of the diagonal. – Qfwfq Apr 12 '10 at 16:39
  • 1
    At the end of your third paragraph, should $I(t) = \mathrm o(t^k)$ be $f(I(t)) = \mathrm o(t^k)$? – LSpice Jan 19 '16 at 15:19
  • I'd really like to read more about this approach, but unfortunately I can't read Russian. Do you happen to know of a reference in English, Spanish, or (as a last resort) some other Romance language? The closest thing I've found is Anders Kock's writing on synthetic differential geometry (for example, Section I.18 of Synthetic Differential Geometry), but it carries a lot of very general baggage; I just want to learn about plain old smooth manifolds. (As a very last resort, could you point me to where in Feigin's lectures this stuff can be found?) – Vectornaut Nov 17 '17 at 21:34
  • 1
    @Vectornaut: It is in Lecture 9. The $\delta$ is defined in Lecture 8. – timur Nov 18 '19 at 01:21
  • @timur: Thanks so much! Reading the original seems much less daunting now that I can see how the relevant lectures are organized. One of these days I'll pull out my dictionary and have a go at it. – Vectornaut Nov 23 '19 at 01:58
17

For 2: it is the unique extension of the total differential $d:C^\infty(M)\to\Omega^1(M)$ to a graded derivation of the algebra $\Omega^\bullet(M)$ of differential forms.

The map $d:C^\infty(M)\to\Omega^1(M)$ itself has a nice characterization as a universal derivation of the algebra $C^\infty(M)$ of functions satisfying certain rather reasonable conditions---this follows from Jaak Peetre's theorem.

  • 6
    The definition of a graded derivation was originally just a natural generalisation of $d$, so this approach is almost circular, and I can't visualise it geometrically. – Ben McKay Feb 21 '13 at 21:33
  • 2
    Well, my point is that you need only visualize the component in degree zero, as the rest is simply formalities. – Mariano Suárez-Álvarez Feb 21 '13 at 23:27
  • @Mariano Suárez-Alvarez Do you mean that, for some subalgebra $A$ of $C^{\infty}(M)$ and with $\Omega^1(M)$ regarded as an $A$-module, the pair $(A,\Omega^1(M))$ gives the Kähler differentials? Could you give some references for your statement? – Fallen Apart May 26 '15 at 14:24
  • @FallenApart, I don't understand exactly what statement you mean. – Mariano Suárez-Álvarez May 26 '15 at 15:27
  • 2
    @Mariano Suárez-Alvarez The sentence: "The map d:C∞(M)→Ω1(M) itself has a nice characterization as a universal derivation of the algebra C∞(M) of functions satisfying certain rather reasonable conditions" – Fallen Apart May 26 '15 at 15:40
  • 3
    @FallenApart, ah. No , I do not mean that $\Omega^1(M)$ is the module of Kähler differentials of $C^\infty(M)$ (mostly, because it isn't! :) ) The operator $d:C^\infty(M)\to\Omega^1(M)$ can be characterized in terms of its functorial properties. This is surely done in detail in the book Natural Operations in Differential Geometry by Kolar, Michor and Slovak. – Mariano Suárez-Álvarez May 26 '15 at 16:14
  • @FallenApart: Actually, differential 1-forms are precisely Kähler differentials of C^∞(M) if you work with C^∞-rings instead of ordinary commutative rings. See, for example, https://ncatlab.org/nlab/show/Kähler+differential. – Dmitri Pavlov Jun 08 '20 at 22:25
9

To start with 0-forms, $df$ codes how $f$ varies. In fact, it does this in a way that is, IMO, more natural than partial derivatives.

For example, if I want to know how $z = x^2 y$ varies with $x$ — actually that's a lie, I want to know how $z$ varies with $x$ as $y$ is held constant — then I compute an exterior derivative, setting $dy=0$:

$$ dz = 2xy \, dx + x^2 \, dy \equiv 2xy \, dx \pmod{dy} $$

Similarly, as (tangent) vectors are dual to one-forms, we can see that the exterior derivative is the thing you combine with a vector to get a directional derivative.

This is further supported by path integrals; if $\gamma$ is a path from $P$ to $Q$, then $\int_\gamma \, df = f(Q) - f(P)$; so again we see that $df$ is an encoding of how $f$ varies, and the path integral is how we accumulate the variation into a finite difference.
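These computations are easy to replicate in a computer algebra system. The following sympy sketch (mine, not the answer's) checks the example $z = x^2y$, the pairing of $dz$ with an arbitrarily chosen tangent vector, and the claim $\int_\gamma dz = z(Q)-z(P)$ along one sample path.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y                      # the z of the example above

# The total differential df = (df/dx) dx + (df/dy) dy, stored by component.
df = {'dx': sp.diff(f, x), 'dy': sp.diff(f, y)}
print(df)                         # {'dx': 2*x*y, 'dy': x**2}
print(df['dx'])                   # setting dy = 0: how f varies with x alone

# Pairing df with a tangent vector gives the directional derivative:
vec = (3, -1)                     # an arbitrary tangent vector
print(sp.expand(df['dx'] * vec[0] + df['dy'] * vec[1]))   # 6*x*y - x**2

# Integrating df along a path gamma from (0,0) to (1,1) recovers
# f(1,1) - f(0,0) = 1:
gx, gy = t, t**2                  # one sample path gamma(t) = (t, t^2)
integrand = (df['dx'].subs({x: gx, y: gy}) * sp.diff(gx, t)
             + df['dy'].subs({x: gx, y: gy}) * sp.diff(gy, t))
print(sp.integrate(integrand, (t, 0, 1)))   # -> 1
```

Any other smooth path with the same endpoints would give the same value, which is the path-independence encoded in $\int_\gamma df = f(Q) - f(P)$.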

We can argue that $d(df)$ should be zero, as the variation in the variation of $f$ is second-derivative information, and differential forms are only intended to capture first-derivative information. Similarly for $df \wedge df$.

Stokes' theorem expresses the analog of the fundamental theorem of calculus in higher dimensions, giving a way to see the exterior derivative of a differential form as encoding the higher degree variation.

Alternatively, we can appeal to Fubini's theorem to reduce to the one-dimensional case: here's a sketch.

Let $x = (x_1, \ldots, x_n)$ and $dx = dx_1 dx_2 \ldots dx_n$.

Suppose you wish to integrate $$ \int_X df \, dx $$ where $X$ is an $(n+1)$-dimensional region. If we let $X_x$ be the one-dimensional region defined by a constant value of $x$, then generalizing Fubini's theorem, we can write this as an iterated integral $$ \int_Y \left( \int_{X_x} df \right) dx $$ where $Y$ is some suitable $n$-dimensional space.

The integral $\int_{X_x} df$ is just the alternating sum of the values $f(P)$ where $P$ iterates over the endpoints of the curves comprising $X_x$, where the upper endpoints are added and the lower endpoints are subtracted. It's convenient to write this as an integral over a zero-dimensional surface: $\int_{\partial X_x} f$.

Consequently, the original integral can be written as $$ \int_Y \left( \int_{\partial X_x} f \right) dx $$ and again essentially by Fubini's theorem, we can identify this with $$ \int_{\partial X} f \, dx $$

Consequently, defining $d(f \, dx)$ as $df \, dx$ is exactly the right thing to do to generalize the fundamental theorem of calculus to get Stokes' theorem.

7

First define the exterior derivative for forms defined on an open set $U \subseteq \mathbb{R^n}$. This uses the notion of integration of a $p$-form over a singular $p$-chain, which needs only the integration of $\mathcal{C}^{\infty}$-functions over compact subsets of $\mathbb{R}^p$, and runs as follows. A singular $p$-cube in $U$ is a $\mathcal{C}^{\infty}$-map $\sigma : I^p \rightarrow U$, where $I := [0,1]$ is the closed unit interval. Let $\Omega^p(U) := H^0(U;\wedge^pT^*U)$ be the space of alternating $p$-forms on $U$; then each $\omega \in\Omega^p(U)$ pulls back to a top form $\sigma^*\omega$ $=$ $f dx \in \Omega^p(I^p)$ with $f \in \mathcal{C}^{\infty}(I^p)$ and $dx = dx_1 \wedge \cdots \wedge dx_p$ the canonical volume element of $\mathbb{R}^p$. It thus has an integral $$ \int_{\sigma} \omega := \int_{I^p} f dx, $$ and, in fact, this is what differential forms are made for: born to be integrated.

Next define the vector space of $p$-chains to be the free $\mathbb{R}$-vector space on the singular $p$-cubes, so that a $p$-chain $c_p$ is a formal linear combination of singular $p$-cubes: $$ c_p = \sum_{i=1}^k \gamma^i \sigma_i \quad,\quad k\in\mathbb{N},\ \gamma^i \in \mathbb{R}.\tag{1} $$ The integral then extends to $p$-chains by linearity: $$ \int_{c_p}\omega := \sum_{i=1}^k \gamma^i \int_{\sigma_i} \omega. $$

As a next ingredient we need that any $p$-chain $c_p$ has a boundary $\partial c_p$ which is a $(p-1)$-chain. We first define it on singular $p$-cubes $\sigma$ by $$ \partial \sigma := \sum_{j=1}^p (-1)^j (\sigma \circ d^j_- - \sigma \circ d^j_+), $$ where the singular $(p-1)$-cubes $d^j_{\mp}$ in $I^p$ define the $j$-th front and back boundary faces: $$ d^j_-(x^1, \dots, x^{p-1}) := (x^1, \dots, x^j, 0, x^{j+1}, \dots, x^{p-1}), $$ $$ d^j_+(x^1, \dots, x^{p-1}) := (x^1, \dots, x^j, 1, x^{j+1}, \dots, x^{p-1}). $$ We extend this boundary operator to $p$-chains by linearity: $$ \partial c_p := \sum_{i=1}^k \gamma^i \partial \sigma_i $$ with $c_p$ given by (1).

As a last ingredient we need that a point $P \in U$ and a $p$-tuple of vectors $X:= (X_1, \dots, X_p)$ (viewed as tangent vectors at $P$ to $U$) define a singular $p$-cube $$ [X]_P : I^p \rightarrow U $$ via $$ [X]_P(x^1, \dots, x^p) := P+\sum_{k=1}^p x^k X_k, $$ as soon as the $X_k$ are so small that $[X]_P(x) \in U$ for all $x \in I^p$. In fact, these simple linear singular cubes are all that is needed of this formalism to define the exterior derivative, to which we proceed next.

After these preliminaries, we now want, given $\omega \in \Omega^p(U)$, to define its exterior derivative $d\omega \in \Omega^{p+1}(U)$. We do this pointwise at any point $P \in U$ by exhibiting the value that $d\omega_P$, as an alternating $(p+1)$-form, takes on any $(p+1)$-tuple of (tangent) vectors $(X_1, \dots, X_{p+1})$ $\in$ $(\mathbb{R^n})^{p+1}$. We define $$ \fbox{$d\omega_P(X_1, \dots, X_{p+1}) := \lim_{t \rightarrow 0} \dfrac{1}{t^{p+1}} \int_{\partial([tX]_P)} \omega.$} $$

Finally, for the general case of the exterior derivative of a $p$-form $\omega$ on an $n$-dimensional manifold $M$, just take charts $\phi: V \rightarrow U$ with $V$ open in $M$, $U$ open in $\mathbb{R}^n$, with the $V$ covering $M$, and put $$ (d\omega)|V := \phi^*d\eta \quad \text{with}\quad \eta := (\phi^{-1})^*(\omega|V) \in \Omega^p(U). $$ The transformation formula for multivariate integrals then shows that the $(d\omega)|V$ glue well on the overlaps, thus yielding a global well-defined $d\omega$.

Loosely speaking, this defines the exterior derivative as a "volume derivative", a flux density through the boundary of an infinitesimal $(p+1)$-dimensional parallelepiped, and so it has built in an infinitesimal version of Stokes' theorem.

MathCrawler
  • 1,000
  • 7
  • 11
  • 5
    This is similar in parts to the answer by Petya, although you have included more details. – KConrad Aug 05 '20 at 02:09
  • This reasoning on the exterior derivative seems the most intuitive of all to me. That's also how it's interpreted in e.g. R.W.R. Darling's book Differential Forms and Connections, which in turn took it from Hubbard and Hubbard's famous vector calculus book; the exterior derivative is literally introduced and defined there like this. Finally, the Hubbards' approach is indebted to Arnol'd's book Mathematical Methods of Classical Mechanics, as in Petya's higher-ranked answer above and as already remarked by KConrad here. I do think, though, that the Hubbards' exposition is clearer. – Pedro Lauridsen Ribeiro Jul 25 '23 at 19:03
6

The exterior derivative is an intrinsic way of talking about the gradient of a function. If you want to understand the intuitive meaning of the exterior derivative of $f$ you should make sure you understand $\nabla f$ properly. I am a little hesitant to post such an answer 5 years into the discussion but as I did not find any occurrence of the string "gradient" on the page I thought this might be useful. In the presence of a metric the relation between them is $\langle \nabla f, V\rangle=df(V)$ for tangent vectors $V$.

Mikhail Katz
  • 15,081
  • 1
  • 50
  • 119
  • 1
    To my mind the gradient only makes sense if you have a metric and may only be compared to $d$ on the level of $0$-forms. So it should not be necessary to understand $\nabla$ properly, before understanding $d$. – Michael Bächtold Jan 18 '16 at 13:59
  • 3
    @MichaelBächtold, if one doesn't understand it at the level of $0$-forms one will certainly have trouble understanding it in general. Of course the gradient depends on the metric; that's why it needs to be replaced by the exterior derivative if one wants to work intrinsically. – Mikhail Katz Jan 18 '16 at 14:04
  • 3
    I mostly agree with your comment. But I'm not convinced that one should understand $\nabla f$ first. I have the impression that it may even hinder understanding, since $\nabla f$ is usually taught without explaining how it depends on more than $f$. Hence people have to forget hidden assumptions when they try to understand $df$. – Michael Bächtold Jan 18 '16 at 14:47
  • 2
    There are some basic elementary geometric facts that form the foundation of understanding here, one of which being that a one-variable function is constant if its derivative is zero, that gradient has to do with direction of steepest descent, etc. Doing research in mathematics is not a formal game but rather involves understanding, without which it is difficult to do research. – Mikhail Katz Jan 18 '16 at 14:50
  • 5
    I'm not sure how to interpret your last comment. In case you are suggesting that without a metric the exterior differential necessarily carries no geometric meaning and reduces to its formal rules (Leibniz, $d^2=0$, naturality etc.) then I would disagree. Dan Piponis answer is an attempt to give a geometric understanding without a metric. Another quite elementary and intuitive approach can be found in Anders Kocks book Synthetic Geometry of Manifolds. His approach makes the statement "dual to the boundary operator of singular homology" directly accessible at the infinitesimal level. – Michael Bächtold Jan 18 '16 at 15:18
  • There are certainly situations where the metric is absent; for example a differentiable manifold doesn't carry one without making choices :-) However when an OP asks for an explanation it seems a pity to skip the most crucial first step :-) – Mikhail Katz Jan 18 '16 at 15:19
  • 1
    katz, that (comment Jan 18 at 14:50) seems like an uncharitable interpretation of what @MichaelBächtold is saying. It certainly doesn't sound like he is saying that your suggestion is logically or pedagogically ridiculous or unsound, or advocating pointless formality. To support his point in a more basic context, think of the difficulty students, taught to think that "the derivative is a number", have in understanding the fact that the derivative of a function $\mathbb R^n \to \mathbb R^m$ is a linear map. (At least, I struggled with it as an undergraduate.) – LSpice Jan 22 '16 at 01:09
  • 1
    @LSpice, you are talking about a reformulation of the derivative at a higher level of abstraction. My reaction to this is the same as my reaction to Michael's comment: skipping the essential first step (and going on directly to the reformulation) is likely to do more harm than good to a majority of students. – Mikhail Katz Sep 05 '17 at 15:12
  • Yes— the exterior derivative (and differential forms in general) can be justified as the result of adding “correction factors” to familiar constructions from calculus in order to make them coordinate invariant. – Vik78 Jul 26 '23 at 11:44
6

Another conceptually nice definition of the exterior derivative is given in Bourbaki (Varietes differentielles et analytiques, Fascicule de resultats), (8.3.4) and (8.3.5). The idea is the following: if $\omega$ is an exterior $p$-form on $X$, consider it as a section $\omega: X \to \Omega^p(X)$ of the bundle $\Omega^p(X)$ of $p$-forms. It makes sense to take its derivative $d\omega$ at each point $x \in X$. Then one sees that $d\omega$ corresponds to an exterior $(p+1)$-form.

By the way, a natural and simple definition of tangent vector on a smooth manifold is given in the same book in (5.5.1).

ಠ_ಠ
  • 5,933
fcukier
  • 169
  • 10
    So the derivative of $\omega \colon X \to \Omega^p(X)$ at $x \in X$ is I guess the tangent map $T_x(\omega) \colon T_x X \to T_{\omega(x)} \Omega^p(X)$. How do you get the $p+1$-form? – Michael Murray Feb 21 '13 at 09:55
  • 1
    That doesn't quite work. Consider applying it to the differential form $x , dx$ on the real number line, where the differential as a map is not everywhere the same as the differential of the map associated to the zero 1-form. But the exterior derivative is the same. – Ben McKay Jun 09 '20 at 04:52
  • This seems to be saying that $\omega$ is a smooth section of the exterior $p$-form bundle $\Omega^p(X)$, $X$ some manifold. Indeed the tangent map is given by $\omega_*\colon T_x X\to T_{\omega(x)} \Omega^p(X)$, but $d$ doesn't apply to it; $d$ applies to the induced linear map $\omega^*\colon A( \Omega^p(X)) \to A(X)$ between the spaces of exterior differential forms, i.e. the spaces of smooth sections. I think this is how we should get the $(p+1)$-form. – Rachid Atmai Nov 18 '20 at 19:07
2

The exterior derivative of a differential $p$-form $\omega$ can be defined as "the $(p+1)$-linear part of the value of $\omega$ integrated over the boundary of an infinitesimal $(p+1)$-parallelotope".

More specifically, $$d\omega(v_1,v_2,...,v_{p+1})\\ =\lim_{t\to0}\frac1{t^{p+1}}\int_{\partial[tv_1,tv_2,...,tv_{p+1}]}\omega$$

where $[tv_1,tv_2,...,tv_{p+1}]$ is the $(p+1)$-parallelotope spanned by $tv_1,tv_2,...,tv_{p+1}$.

This aspect of the exterior derivative is already mentioned by Petya and MathCrawler, but no proof is given there that it agrees with the standard definition of the exterior derivative, so I'll give one.


It suffices to prove that

$$d\big(f(x_1,x_2,...,x_n)\,dx_1\wedge dx_2\wedge ...\wedge dx_p\big)(v_1,v_2,...,v_{p+1})\\ =\sum_{i=1}^{n}\frac{\partial f}{\partial x_i}\, dx_i\wedge dx_1\wedge dx_2\wedge ...\wedge dx_p\,(v_1,v_2,...,v_{p+1})$$

is equal to $$\lim_{t\to0}\frac1{t^{p+1}}\int_{\partial[tv_1,tv_2,...,tv_{p+1}]}f(x_1,x_2,...,x_n)\,dx_1\wedge dx_2\wedge ...\wedge dx_p.$$

Suppose $U\subset \mathbb{R}^n$ is open, $\sigma^t:[0,t]^{p+1}\to U$ is $C^{\infty}$, written $\mathbf{t} \mapsto \big(\sigma^t_1(\mathbf{t}),\sigma^t_2(\mathbf{t}),\sigma^t_3(\mathbf{t}),...,\sigma^t_n(\mathbf{t})\big)\in U$, and $f(x_1,x_2,...,x_n)\,dx_1\wedge dx_2\wedge ...\wedge dx_p$ is a differential $p$-form on $U$.

In order to define $\partial\sigma^t$ with the induced orientation, let $$d^j_-(t_1, \dots, \hat{t_j}, \dots, t_{p+1})=(t_1, \dots, t_{j-1}, 0, t_{j+1}, \dots, t_{p+1}),\\ d^j_+(t_1, \dots, \hat{t_j}, \dots, t_{p+1}) = (t_1, \dots, t_{j-1}, t, t_{j+1}, \dots, t_{p+1}),$$ where $\hat{t_j}$ means that the variable $t_j$ is omitted.

Then $\partial \sigma^t = \sum_{j=1}^{p+1} (-1)^j(\sigma^t \circ d^j_- - \sigma^t \circ d^j_+)$, and the limit can be computed as follows:

$$ \begin{align} &\lim_{t\to0}\frac1{t^{p+1}}\int_{\partial\sigma^t}f(x_1,...,x_n)\,dx_1\wedge dx_2\wedge ...\wedge dx_p\\ =&\lim_{t\to0}\frac1{t^{p+1}}\sum_{j=1}^{p+1}(-1)^j\Big(\int_{\sigma^t\circ d^j_-} f\,dx_1\wedge ...\wedge dx_p-\int_{\sigma^t\circ d^j_+} f\,dx_1\wedge ...\wedge dx_p\Big)\\ =&\lim_{t\to0}\frac1{t^{p+1}}\sum_{j=1}^{p+1}(-1)^j\int_{[0,t]^p}\bigg(f(\sigma^t\circ d^j_-)\,\det\Big(\frac{\partial(\sigma^t_a\circ d^j_-)}{\partial t_b}\Big)_{1\le a\le p,\ b\neq j}-f(\sigma^t\circ d^j_+)\,\det\Big(\frac{\partial(\sigma^t_a\circ d^j_+)}{\partial t_b}\Big)_{1\le a\le p,\ b\neq j}\bigg)\,dt_1\dots dt_{j-1}\,dt_{j+1}\dots dt_{p+1}\\ \Big(=&\ \bigstar\Big) \end{align} $$

To continue the computation, I will use the following fact.

$$\begin{align} &\lim_{t\to 0}\frac1{t^{p+1}}\int_{[0,t]^p}\big(g\circ d^j_+ - g\circ d^j_-\big)\, dt_1\dots dt_{j-1}\,dt_{j+1}\dots dt_{p+1}\\ =&\lim_{t\to 0}\frac1{t^{p+1}}\int_{[0,t]^p}\big(g(t_1,\dots ,t_{j-1},t,t_{j+1},\dots,t_{p+1})-g(t_1,\dots ,t_{j-1},0,t_{j+1},\dots,t_{p+1})\big)\, dt_1\dots dt_{j-1}\,dt_{j+1}\dots dt_{p+1}\\ =&\frac{\partial g}{\partial t_j}(t_1,\dots ,t_j,\dots,t_{p+1})\Big\rvert_{t_i=0}\quad \Big(=\frac{\partial g}{\partial t_j}(0,\dots,0)\Big) \end{align}$$

This fact can be proved by iterated integration and Taylor expansion around $(0,\dots ,0)$.
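As a quick sanity check of this fact, here is a numerical experiment in the smallest case $p+1=2$, $j=1$, where it reads $\lim_{t\to0}\frac1{t^2}\int_0^t\big(g(t,s)-g(0,s)\big)\,ds=\frac{\partial g}{\partial t_1}(0,0)$. The particular function $g$ below is an arbitrary smooth choice of mine, not from the argument above.

```python
import numpy as np

# Check: (1/t^2) * ∫_0^t [g(t, s) - g(0, s)] ds  ->  ∂g/∂t_1 (0, 0) as t -> 0.
# Here ∂g/∂t_1 (0, 0) = cos(0) * exp(0) = 1.

def g(t1, t2):
    return np.sin(t1) * np.exp(t2)

def scaled_boundary_difference(t, n=2000):
    # midpoint Riemann sum for ∫_0^t [g(t, s) - g(0, s)] ds, divided by t^(p+1) = t^2
    s = (np.arange(n) + 0.5) * (t / n)
    integral = np.sum(g(t, s) - g(0.0, s)) * (t / n)
    return integral / t**2

for t in (1e-1, 1e-2, 1e-3):
    print(t, scaled_boundary_difference(t))  # approaches 1 as t shrinks
```

For $g(t_1,t_2)=\sin t_1\,e^{t_2}$ the quantity is exactly $\sin(t)(e^t-1)/t^2 = 1 + t/2 + O(t^2)$, so the printed values converge to $1$ at first order in $t$, as the fact predicts.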

Then, $$\begin{align} \big(\bigstar\big) =&\sum_{j=1}^{p+1}(-1)^{j+1}\frac{\partial}{\partial t_j}\bigg[f(\sigma^t)\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg]\bigg\rvert_{t_i=0}\\ =&\sum_{j=1}^{p+1}(-1)^{j+1}\Big( \sum_{k=1}^n\frac{\partial f}{\partial x_k}\frac{\partial\sigma^t_k}{\partial t_j}\Big)\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0} +\sum_{j=1}^{p+1}(-1)^{j+1}f(\sigma^t)\frac{\partial}{\partial t_j}\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0}\\ =&\sum_{k=1}^n\frac{\partial f}{\partial x_k}\sum_{j=1}^{p+1}(-1)^{j-1}\frac{\partial\sigma^t_k}{\partial t_j}\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0} +f(\sigma^t)\sum_{j=1}^{p+1}(-1)^{j-1}\frac{\partial}{\partial t_j}\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0}\\ =&\ \big(\bigstar\bigstar\big) \end{align}$$

where $\det\big(\partial\sigma^t_l/\partial t_m\big)_{1\le l\le p,\ m\ne j}$ again denotes the $p\times p$ Jacobian determinant with rows $l=1,\dots,p$ and the column $m=j$ omitted, and the second equality is the product rule together with the chain rule for $\partial f(\sigma^t)/\partial t_j$.

To continue the calculation further, I will use two formulas.

  1. Cofactor expansion along the first row: $$\begin{align}&\sum_{j=1}^{p+1}(-1)^{j-1}\frac{\partial\sigma^t_k}{\partial t_j}\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0}\\ =&\det \begin{pmatrix} \frac{\partial\sigma^t_k}{\partial t_1} & \frac{\partial\sigma^t_k}{\partial t_2} & \cdots &\frac{\partial\sigma^t_k}{\partial t_{p+1}}\\ \frac{\partial\sigma^t_1}{\partial t_1} & \frac{\partial\sigma^t_1}{\partial t_2} & \cdots &\frac{\partial\sigma^t_1}{\partial t_{p+1}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial\sigma^t_{p}}{\partial t_1} & \frac{\partial\sigma^t_{p}}{\partial t_2} & \cdots&\frac{\partial\sigma^t_{p}}{\partial t_{p+1}} \end{pmatrix}\Bigg\rvert_{t_i=0}\\ =&\ dx_k\wedge dx_1\wedge dx_2\wedge\dots\wedge dx_p \Big(\frac{\partial\sigma^t}{\partial t_{1}},\frac{\partial\sigma^t}{\partial t_{2}},\dots,\frac{\partial\sigma^t}{\partial t_{p+1}}\Big)\bigg\rvert_{t_i=0} \end{align}$$

  2. $$\begin{align} &\sum_{j=1}^{p+1}(-1)^{j-1}\frac{\partial}{\partial t_j}\det\Big(\frac{\partial\sigma^t_l}{\partial t_m}\Big)_{1\le l\le p,\ m\ne j}\bigg\rvert_{t_i=0}\\ =&\sum_{\tau\in S_{p+1}} \operatorname{sgn}(\tau)\, \frac{\partial^2\sigma^t_{1}}{\partial t_{\tau(1)}\partial t_{\tau(2)}} \frac{\partial\sigma^t_{2}}{\partial t_{\tau(3)}} \frac{\partial\sigma^t_{3}}{\partial t_{\tau(4)}}\dots \frac{\partial\sigma^t_{p}}{\partial t_{\tau(p+1)}} \bigg\rvert_{t_i=0}\\ =&\ 0 \\ &\Big( \text{since}\quad \operatorname{sgn}(\tau)\frac{\partial^2\sigma^t_{1}}{\partial t_{\tau(1)}\partial t_{\tau(2)}} =-\operatorname{sgn}(\tau')\frac{\partial^2\sigma^t_{1}}{\partial t_{\tau'(1)}\partial t_{\tau'(2)}} \quad \text{where}\quad \tau(1)=\tau'(2),\ \tau(2)=\tau'(1),\ \tau(l)=\tau'(l)\ \text{for } l\geq3\Big) \end{align}$$
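Formula 1. is just the Laplace (cofactor) expansion of a $(p+1)\times(p+1)$ determinant along its first row, so it is easy to confirm numerically. In the sketch below, random numbers stand in for the partial derivatives $\partial\sigma^t_k/\partial t_j$ and $\partial\sigma^t_l/\partial t_m$; the names are mine, not from the post.

```python
import numpy as np

# Check Laplace expansion along the first row:
#   det([top; block]) = Σ_j (-1)^(j-1) * top_j * det(block with column j deleted),
# which is exactly formula 1. with top_j playing ∂σ_k/∂t_j and block the p x (p+1)
# Jacobian rows (∂σ_l/∂t_m).

rng = np.random.default_rng(0)
p = 3
top = rng.standard_normal(p + 1)          # stand-in for (∂σ_k/∂t_1, ..., ∂σ_k/∂t_{p+1})
block = rng.standard_normal((p, p + 1))   # stand-in for the rows l = 1, ..., p

# zero-based j, so (-1)^(j-1) becomes (-1)^j
lhs = sum((-1) ** j * top[j] * np.linalg.det(np.delete(block, j, axis=1))
          for j in range(p + 1))
rhs = np.linalg.det(np.vstack([top, block]))
print(abs(lhs - rhs) < 1e-10)  # True
```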

Applying 1. and 2. to $\big(\bigstar\bigstar\big)$, I finally get $$ \begin{align} &\lim_{t\to0}\frac1{t^{p+1}}\int_{\partial\sigma^t}f(x_1,x_2,\dots,x_n)\,dx_1\wedge dx_2\wedge \dots\wedge dx_p\\ =&\sum_{k=1}^n\frac{\partial f}{\partial x_k}\, dx_k\wedge dx_1\wedge dx_2\wedge\dots\wedge dx_p \Big(\frac{\partial\sigma^t}{\partial t_{1}},\frac{\partial\sigma^t}{\partial t_{2}},\dots,\frac{\partial\sigma^t}{\partial t_{p+1}}\Big)\bigg\rvert_{t_i=0} \end{align} $$

Then, when $\sigma^t(t_1,t_2,\dots ,t_{p+1})=\sum_i t_iv_i\in U$ and $\sigma^t(0)=(x_1,x_2,\dots,x_n)$, this is exactly the formula I wanted at the beginning.
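The whole limit formula can also be checked numerically in the smallest interesting case, $p=1$ in $\mathbb R^2$ with an affine simplex $\sigma^t(t_1,t_2)=x_0+t_1v_1+t_2v_2$ and the $1$-form $f\,dx_1$. The specific $f$, $x_0$, $v_1$, $v_2$ below are arbitrary choices of mine for the experiment.

```python
import numpy as np

# Check: lim_{t->0} (1/t^2) ∫_{∂σ^t} f dx_1 = Σ_k ∂f/∂x_k · (dx_k ∧ dx_1)(v1, v2) at x0,
# with ∂σ^t = Σ_{j=1,2} (-1)^j (σ^t∘d^j_-  -  σ^t∘d^j_+) as in the text.

def f(x):
    return np.sin(x[0]) * np.cos(2.0 * x[1]) + x[1] ** 2

def grad_f(x):
    return np.array([np.cos(x[0]) * np.cos(2.0 * x[1]),
                     -2.0 * np.sin(x[0]) * np.sin(2.0 * x[1]) + 2.0 * x[1]])

x0 = np.array([0.3, -0.4])
v1 = np.array([1.0, 0.5])
v2 = np.array([-0.2, 1.0])

def line_integral_f_dx1(a, b, n=4000):
    # ∫ f dx_1 along the straight segment from a to b (midpoint rule)
    s = (np.arange(n) + 0.5) / n
    pts = a[None, :] + s[:, None] * (b - a)[None, :]
    return np.sum(f(pts.T)) * (b - a)[0] / n

def boundary_integral(t):
    c = lambda t1, t2: x0 + t1 * v1 + t2 * v2
    # j = 1: -(σ(0,·) - σ(t,·)),  j = 2: +(σ(·,0) - σ(·,t))
    total = -(line_integral_f_dx1(c(0, 0), c(0, t))
              - line_integral_f_dx1(c(t, 0), c(t, t)))
    total += (line_integral_f_dx1(c(0, 0), c(t, 0))
              - line_integral_f_dx1(c(0, t), c(t, t)))
    return total / t**2

# (dx_k ∧ dx_1)(v1, v2) = v1[k]*v2[0] - v2[k]*v1[0]; the k = 1 term vanishes
rhs = sum(grad_f(x0)[k] * (v1[k] * v2[0] - v2[k] * v1[0]) for k in range(2))
print(boundary_integral(1e-3), rhs)  # the two numbers agree as t -> 0
```

For $p=1$ this is just Green's theorem for $f\,dx$ on a small parallelogram, so the agreement (to first order in $t$) is expected.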

khkh
  • 21