
I would like to prove that only the functions of the form $f(R)\propto 1/R^2$ satisfy the following integral equation (assuming $r>r_0>0$):

$$ \int\limits_{r-r_0}^{r+r_0}\left({r^2\over{2r_0^2}}-{1\over 2}-{R^2\over2r_0^2}\right)f(R)\,dR=0 $$

I have tried the following:

DSolveValue[ Integrate[(r^2/(2*r0^2) - 1/2 - R^2/(2*r0^2))*f[R], 
                       {R, r - r0, r + r0}, Assumptions -> {r > r0 > 0}] == 0, 
             f[R], R]

but Mathematica 11.2 returned it unevaluated.

Any better ideas?

If I substitute the function $1/R^2$, then Mathematica calculates the integral correctly as 0, as expected:

int3[f_] := Integrate[(r^2/(2*r0^2) - 1/2 - R^2/(2*r0^2))*f[R],
                      {R, r - r0, r + r0}, Assumptions -> {r > r0 > 0}];
int3[1/#^2 &]

0
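
As a quick sanity check (my own addition, reusing int3 above), neighbouring power laws fail the same test — only the inverse square annihilates the integral:

```mathematica
(* compare a few power-law candidates; only 1/R^2 gives zero,
   while 1/R and 1/R^3 leave nonzero remainders in r and r0 *)
int3 /@ {1/#^2 &, 1/# &, 1/#^3 &} // Simplify
(* first entry: 0; the other two are nonzero expressions *)
```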
  • Perhaps you can use the new AsymptoticDSolveValue or AsymptoticIntegrate in Mma 11.3 to show that any perturbation from the 1/R^2 solution makes the result worse, demonstrating the stationarity of your solution. – Thies Heidecke Apr 04 '18 at 21:04
  • Mathematica can't solve this kind of integral equation directly. If you execute DSolve[Integrate[(r^2/(2*r0^2) - 1/2 - R^2/(2*r0^2))*f[R], {R, r - r0, r + r0}, Assumptions -> {r > r0 > 0}] == a, f[x], x] /. a -> 0 it gives a warning message: "Supplied equations are not differential or integral equations of the given functions". – Mariusz Iwaniuk Apr 04 '18 at 21:57
  • In general Mathematica cannot solve functional equations directly, however with a bit of insight it can be quite helpful also in solving integral equations. – Artes Apr 05 '18 at 03:17
  • @Artes I clicked on "unaccept" by mistake. I did accept it, as I said in the comment below your answer. But I am still trying to understand it fully. – Tigran Aivazian Apr 05 '18 at 16:05

1 Answer


Solutions to integral equations are equivalence classes of functions: two functions belong to the same class if they differ only on a (Lebesgue) measure zero subset of their domains. Having said that, it is reasonable to look for analytic solutions, i.e. functions which are analytic almost everywhere.
Let's rewrite the integral equation: $$ I(f;r,r_0)=\frac{1}{2r^{2}_{0}}\int\limits_{r-r_0}^{r+r_0}\left( r^2-r_{0}^{2}-R^2\right)f(R)\,dR=0 $$ This is a functional equation, and if $f$ is analytic in the range $(r-r_0,\;r+r_0)$, then $I(f;r,r_0)$ is also analytic with respect to its first and second arguments. The integration range is symmetric about the point $R=r$. Let's assume that $f$ is an analytic function, i.e. that there is a neighbourhood of $R=r$ where $f(R)=\sum_{n=0}^{\infty} a_n (R-r)^n$; without loss of generality we may assume that the Taylor series converges to $f(R)$ in the whole range $(r-r_0,\;r+r_0)$. Now we expand the integral $I(f;r,r_0)$ with respect to $r_0$, obtaining the first three nonvanishing terms:

Collect[1/(2 r0^2) Integrate[ Series[((r - r0) (r + r0) - R^2) f[R], {R, r, 6}], 
                              {R, r - r0, r + r0}, Assumptions -> r > r0 > 0] // Normal,
          r0, Simplify] /. r0 -> Subscript[r, 0] // Most // TraditionalForm

(output: the expansion of $I(f;r,r_0)$ in odd powers of $r_0$; the leading term is proportional to $\left(2 f(r)+r f'(r)\right) r_0$)

and since $I(f; r,r_0)=0$ (under the stated assumptions), every coefficient of its expansion w.r.t. $r_0$ has to vanish. The general solution follows readily from the first coefficient, DSolve[ 2 f[r] + r f'[r] == 0, f[r], r]; nonetheless, in general such a system might have no solution at all, so one should find a function $f(R)$ which makes every coefficient vanish. Since the higher-order coefficients are not independent, we can work with the first three nonvanishing (symbolic) coefficients.
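
For the record, the leading-coefficient ODE indeed yields the inverse-square family (C[1] is Mathematica's default integration constant):

```mathematica
(* ODE coming from the coefficient of r0 in the expansion of I *)
DSolve[2 f[r] + r f'[r] == 0, f[r], r]
(* {{f[r] -> C[1]/r^2}} *)
```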

coef = DeleteCases[
  CoefficientList[
    1/(2 r0^2) Integrate[ Series[((r - r0) (r + r0) - R^2) f[R], {R, r, 6}],
                          {R, r - r0, r + r0}, Assumptions -> r > r0 > 0] // Normal,
    r0] // Factor // Most,
  0];

sols =
  Table[ f[r] /. Flatten @ DSolve[ Thread[coef == 0][[k]], f[r], r,
                                   GeneratedParameters -> (Subscript[c, #, k] &)],
         {k, 3}];
Union @@@ (sols /. SolveAlways[Equal @@@ Subsets[sols, {2}], r]) /.
  Subscript[c, 1, 3] -> 120 c

{c/r^2}

We have found that the general solution of the above integral equation is $f(R)= {c\over R^2}$; recalling the introductory remarks, a solution may differ from ${c\over R^2}$ only on a measure zero set.
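
As a complementary check (my addition), substituting the candidate back into coef defined above makes all three necessary conditions vanish simultaneously:

```mathematica
(* every symbolic coefficient vanishes for f = c/#^2 *)
coef /. f -> (c/#^2 &) // Simplify
(* {0, 0, 0} *)
```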

QED.

Artes
  • Thank you very much, very elegant, but I am struggling to understand it. The solution for the first coefficient is indeed c/r^2, but the one for the second coefficient is c1/r^2+c2+c3*r, so I don't understand how you force the c2 and c3 to become zeros. Presumably it is what the code "sols=..." does, which is quite complicated. Anyway, I accept your solution and will do my best to understand it now, but if you have any other clarifying/simplifying remarks, please make them :) – Tigran Aivazian Apr 05 '18 at 16:02
  • I understand what "coef=" code does, but what does "sols=" code do? Could you please explain it in a little bit more detail? Thank you. – Tigran Aivazian Apr 05 '18 at 16:11
  • Every coefficient of the expansion w.r.t. $r_0$ yields a different symbolic solution $f$; however, all those solutions must form the same function $f$. This is why I defined coef and sols, namely to get rid of incompatible constants $c_{i,j}$. Perhaps I'll edit my answer to provide a simpler or more direct approach; however, at this point it isn't conceptually difficult, so I guess it might remain. – Artes Apr 05 '18 at 16:11
  • Ok, I understand this. But why is restricting yourself to just the first three coefficients sufficient? Why not four or ten or a million? – Tigran Aivazian Apr 05 '18 at 16:13
  • @TigranAivazian I mentioned in my post that all those equations for vanishing coefficients are not independent; however, the code provides necessary conditions for $f$ to be a solution. Perhaps it might be reasonable to make it clearer in the post by providing sufficient conditions, but since you demonstrated in your question that $c/r^2$ is a solution, it doesn't matter. I made it clear why the constants $c_{i,j}$ have to vanish. – Artes Apr 05 '18 at 16:22
  • Yes, yes, I understand your concept --- I am only struggling with the code, not the concepts. Now I understand sols= code (it is just a table of three solutions with constants marked with subscripts). All I need to understand now is the magic performed by this line: Union @@@ (sols /. SolveAlways[Equal @@@ Subsets[sols, {2}], r]) – Tigran Aivazian Apr 05 '18 at 16:24
  • Ah.... I understand now! You have constructed a system of algebraic equations consisting of pairs from the set of three solutions and then asked (by SolveAlways[]) for the values of parameters that satisfy all those equations for all values of r. And then did the purely cosmetic Union on the resulting list of three identical elements just to show that we really have just one element c1,3/(120*r^2) and then another cosmetic transformation to rename c1,3/120 as c. Ok, I understand it completely now and learned a lot of Wolfram stuff, as well as your original and interesting approach. Thank you! – Tigran Aivazian Apr 05 '18 at 16:34
  • You are welcome, I'm glad my answer was helpful. In fact, I could explain the code more carefully, but you've just understood it well. – Artes Apr 05 '18 at 16:38
  • @TigranAivazian Could you explain where did you encounter this integral equation? – Artes Apr 05 '18 at 16:44
  • Yes, sure, it is connected with the well-known fact from Newtonian potential theory that the gravitational field inside a hollow sphere is zero. I then generalized the Newtonian law of gravitation to an arbitrary (but still radial) f(r) instead of the inverse square $1/r^2$ and calculated the force acting on a particle inside such a spherical shell. So, what you have proved above is that the absence of force inside the spherical shell automatically implies the inverse square law. And likewise for the electrostatic Coulomb case of a charge inside a hollow charged sphere. – Tigran Aivazian Apr 05 '18 at 16:53
  • @TigranAivazian Thanks for explaining the idea, it is very interesting. I should mention that we have a similar property in General Relativity, i.e. in a hollow sphere the space-time metric (~gravitational potentials) is a part of Minkowski spacetime rather than that of Schwarzschild. This is also connected to the so-called Birkhoff theorem (a vacuum spherically symmetric solution is a part of Schwarzschild space-time) and the fact that there is no gravitational radiation in spherically symmetric vacuum spacetimes. – Artes Apr 05 '18 at 17:13
  • Actually, it would be interesting to verify whether the external part of the inverse shell theorem is also valid: namely, given the radial form of the force and demanding that the field outside a spherically symmetric body (not just a shell) has the same value as the field of a point mass at the centre of the sphere (with the same total mass), prove that the form of the force is $1/r^2 + \Lambda r$. According to Wikipedia this has already been proved by Vahe Gurzadyan in 1985, but it would be nice to use your method and prove it within Wolfram Mathematica. I will try to do that, as an exercise :) – Tigran Aivazian Apr 05 '18 at 20:44
  • I have used this method to prove the second part of the Inverse Shell Theorem as well, and put all this together in a PDF file here: http://www.bibles.org.uk/articles/T.Aivazian-Inverse-Shell-Theorem-2018.pdf – Tigran Aivazian Apr 06 '18 at 13:44
  • @TigranAivazian Thanks for sharing it. – Artes Apr 06 '18 at 15:16