
What is an example of a gradient vector field $X$ on a Riemannian manifold $(M,g)$ which cannot be converted to a divergence free vector field via the following processes:

  1. First we remove the singularities $S$ of $X$ from $M$ and set $M:=M\setminus S$.

  2. We are allowed to rescale $X$ to $X:=fX$ for some positive function $f$.

  3. We are allowed to change the initial Riemannian metric $g$ to a new metric $g'$, computing the divergence of $X$ with respect to $g'$, in order to obtain a vector field $X$ on $M$ with $\operatorname{div}_{g'} X=0$.

In other words, with some abuse of terminology, we ask: "Is every gradient vector field a divergence free vector field?"

An obvious example: $X=x\partial_x+y\partial_y$ becomes a divergence free vector field on the punctured plane after the rescaling $X:=\frac{1}{x^2+y^2}X$.
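This example is easy to check symbolically. The following sketch (using sympy; not part of the original post) confirms that the rescaled field has zero divergence away from the origin:

```python
# Sketch, not from the original question: verify with sympy that
# the rescaled radial field X := (1/(x^2+y^2)) (x d/dx + y d/dy)
# is divergence free on the punctured plane (Euclidean metric).
import sympy as sp

x, y = sp.symbols("x y", real=True)
r2 = x**2 + y**2

# Components of the rescaled field
X1, X2 = x / r2, y / r2

div_X = sp.simplify(sp.diff(X1, x) + sp.diff(X2, y))
print(div_X)  # 0, valid away from the origin
```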

Is the answer yes, at least in low dimensions?

YCor
  • Yes --- just choose a Riemannian metric that is invariant with respect to the flow generated by $X$. It exists since the singularities are removed. – Anton Petrunin Jan 29 '21 at 19:57
  • @AntonPetrunin Yes, I see that you use the gradient property here to ensure the flow is geodesible. – Ali Taghavi Jan 29 '21 at 20:05
  • @AntonPetrunin Thanks for your attention. I realize that the flow of $X$ keeps the level sets of $f$ invariant, so our frame is $X, \nabla f$. But the motivation for my question is something else; I will give it in my next post. – Ali Taghavi Jan 29 '21 at 21:07
  • @AntonPetrunin My motivation for this question was the following: – Ali Taghavi Jan 29 '21 at 21:52
  • https://mathoverflow.net/questions/382577/p-xp-yq-xq-y-0-imply-no-limit-cycle – Ali Taghavi Jan 29 '21 at 21:53
  • @AntonPetrunin Sorry, I am still wrong, since $X$ and $\nabla f$ are parallel. But what is the orthonormal frame you choose of the form $X, Y$, where $Y$ is parallel to the kernel of $df$ and $X$ is an appropriate rescaling of $\nabla f$ such that the flow of $X$ keeps $Y$ invariant? – Ali Taghavi Jan 30 '21 at 09:29

1 Answer


I think the answer is no as soon as your gradient vector field admits a saddle point where the divergence is non-zero.

Let $\omega$ denote the volume form associated to the Riemannian metric. We have $$\mathrm{div}(X) \omega = X\cdot \omega$$ where $X\cdot \omega$ denotes the Lie derivative of $\omega$ along $X$. The goal is to find positive functions $f$ and $g$ such that $$(fX)\cdot (g\omega) = X\cdot(fg)~\omega + (fg) \mathrm{div}(X)~\omega = 0~.$$ In other words, we want the function $h=\log(fg)$ to satisfy $$X\cdot h = -\mathrm{div}(X)~.$$
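As a sanity check (a sketch of my own, not part of the original answer), this cohomological equation can be solved explicitly for the radial field from the question: for $X = x\partial_x + y\partial_y$ on the punctured plane, the candidate $h = -\log(x^2+y^2)$ satisfies $X\cdot h = -\mathrm{div}(X) = -2$.

```python
# Sketch, not from the original answer: verify with sympy that
# h = -log(x^2 + y^2) solves X.h = -div(X) for the radial field.
import sympy as sp

x, y = sp.symbols("x y", real=True)
X1, X2 = x, y                            # X = x d/dx + y d/dy
div_X = sp.diff(X1, x) + sp.diff(X2, y)  # equals 2

h = -sp.log(x**2 + y**2)                 # candidate solution
Xh = sp.simplify(X1 * sp.diff(h, x) + X2 * sp.diff(h, y))
print(Xh, div_X)  # -2 2
```

Exponentiating gives $fg = e^h = \frac{1}{x^2+y^2}$, which recovers the rescaling used in the question (with $g\equiv 1$).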

This is a dynamical question: we ask whether the function $\mathrm{div}(X)$ is a coboundary along the flow of $X$. Of course a gradient flow does not have very rich dynamics, but a saddle point is already too much for the following reason:

Assume $X$ has a saddle point $s$. Then one can find sequences $(x_i)$ and $(y_i)$ that remain in a compact subset of $M\backslash S$, such that $y_i$ lies on the trajectory of $x_i$ under the flow of $X$, and such that the trajectory from $x_i$ to $y_i$ is very long and spends most of its time very close to the saddle point $s$.

Assume now that we have $h:M\backslash S \to \mathbb R$ such that $\mathrm{div}(X)= X\cdot h$. Then $$\int_{x_i}^{y_i} \mathrm{div}(X) = h(y_i)-h(x_i)$$ is bounded independently of $i$. (Here $\int_{x_i}^{y_i} \mathrm{div}(X)$ denotes the integral of the function $\mathrm{div}(X)$ along the trajectory of $X$ from $x_i$ to $y_i$.)

But, on the other hand, since this trajectory spends a very long time close to $s$, we have that $$\int_{x_i}^{y_i} \mathrm{div}(X)\underset{i\to +\infty}{\longrightarrow} \pm \infty$$ as soon as $\mathrm{div}(X)(s) \neq 0$. This is a contradiction.
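To make the last step concrete, here is a numerical sketch (my own illustration, not from the answer) for the hypothetical model saddle $X = \nabla(2x^2-y^2) = 4x\,\partial_x - 2y\,\partial_y$, whose divergence is the constant $2$. The flow is explicit, so the time a trajectory starting at $(\varepsilon, 1)$ takes to reach $x=1$, and hence the integral of the divergence along it, can be computed directly:

```python
import math

# Model saddle (a hypothetical concrete instance, not from the answer):
# X = grad(2x^2 - y^2) = (4x, -2y), with div(X) = 2 everywhere.

def time_to_escape(eps):
    # Trajectory through (eps, 1): x(t) = eps * e^{4t}. It reaches
    # x = 1 (leaving a neighbourhood of the saddle) at T = ln(1/eps)/4.
    return math.log(1.0 / eps) / 4.0

def integral_of_div(eps):
    # div(X) is the constant 2, so its integral along the trajectory
    # from (eps, 1) until x = 1 is simply 2 * T.
    return 2.0 * time_to_escape(eps)

# As the starting point approaches the stable manifold, the trajectory
# spends longer near the saddle and the integral grows without bound.
for eps in (1e-2, 1e-4, 1e-8):
    print(eps, integral_of_div(eps))
```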

Nicolast
  • Thank you for your answer. I think I am missing something: put $f(x,y)=x^2-y^2$; then it has a saddle point at the origin and the divergence of its gradient $\nabla f=2x\partial_x-2y\partial_y$ is zero. Right? – Ali Taghavi Feb 12 '21 at 12:49
  • It is divergence free indeed, including at the saddle point $(0,0)$! My argument applies, for instance, if you take $g(x,y) = 2x^2 - y^2$, whose gradient has divergence $2$. – Nicolast Feb 12 '21 at 16:10
  • Yes. But in your argument there is no restriction on the saddle ratio $\lambda_1/\lambda_2$. So in principle it should work for every arbitrary saddle, including saddle ratio $=-1$. So I think something is missing in your argument. Right? – Ali Taghavi Feb 12 '21 at 17:56
  • To show that the integral of the divergence along a trajectory which spends a lot of time close to the saddle point diverges, I use that the divergence at the saddle point is non-zero (i.e. saddle ratio $\neq -1$, if you want). Your example shows that this condition is necessary. – Nicolast Feb 12 '21 at 21:52
  • My sincere apology for not reading your answer carefully. – Ali Taghavi Feb 12 '21 at 21:57
  • Thank you again for your answer and your very interesting approach. The situation is similar to the method of proof that the stability or instability of a hyperbolic homoclinic loop is determined by the sign of the divergence at the vertex of the loop. Your very interesting answer reminds me of the early proof of the "Finiteness theorem for limit cycles" when all saddle points are hyperbolic. – Ali Taghavi Feb 12 '21 at 22:07
  • But I still have another question: according to 1) in my question we may remove singularities, so there is no guarantee that $h$ is bounded. Right? – Ali Taghavi Feb 12 '21 at 22:11
  • By "$(x_i)$ and $(y_i)$ bounded in $M\backslash S$" I meant that they remain in a compact subset of $M\backslash S$. I claim that you can always find them like that (take $x_i$ converging to a point of a trajectory that converges to $s$). – Nicolast Feb 12 '21 at 22:21
  • Very beautiful argument thank you! – Ali Taghavi Feb 12 '21 at 23:25
  • In fact, if I understand correctly, you fix two transversal sections on the stable and unstable manifolds with intersection points $p, q$, and then we let $x_i$ and $y_i$ tend to $p$ and $q$ respectively. – Ali Taghavi Feb 12 '21 at 23:47