
I need to find the eigenvalues of the stationary solution ($\dot{u}=0$) of the following PDE

$ \dot{u}=-d u + (d-2)x u'-v(d)L_{d}^{1}(u'+2x u'') $ with $ L_{d}^{n}(f)=\int_0^\infty dy~y^{-1+d/2}\,\frac{-2 a(1+y)e^{-y}}{\left(f + y + a e^{-y}\right)^{n}} $

where $\partial_t f =\dot{f}$, $\partial_x f = f'$, $d=3$, and $a$ is a parameter, here set to $a=1$. Finally, $v(d)$ is a dimension-dependent parameter, defined in the code below. We need two initial conditions to solve the resulting equation, but the two are not independent: if we set $u'(0) = m^2$, then $u(0)= - \frac{v(d)}{d} L_{d}^{1}(m^2)$. We are not done yet, because we need to vary $m^2$ to find the true solution. For a generic value of $m^2$, the solution $u^*(x)$ diverges at a finite value of $x$; as $m^2$ gets closer to its 'true' value, $u^*(x)$ diverges at larger and larger $x$. I find the 'true' value of $m^2$ by dichotomy (plot the solutions and adjust $m^2$ accordingly), and once I have found the correct $m^2$ to, say, 4 digits of precision, I can look for the eigenvalues. I have two questions:

  • Is there an elegant way to find the correct $m^2$?
  • Is there a more efficient way to do this than my implementation? Or maybe one that can easily be generalized to coupled PDEs?
Clear[intL, v];
(* dimension-dependent prefactor v(d) *)
v[d_] := (2^(d + 1) \[Pi]^(d/2) Gamma[d/2])^-1
(* memoized numerical evaluation of L_d^n(func) *)
intL[d_, n_, \[Alpha]_?NumericQ][func_?NumericQ] := 
 intL[d, n, \[Alpha]][
   func] = -2 \[Alpha] NIntegrate[
    y^(d/2 - 1) ( E^-y (1 + y)  (y + E^-y \[Alpha] + func)^-n), {y, 
     0, \[Infinity]}, 
    Method -> {Automatic, "SymbolicProcessing" -> False}]
(* right-hand side of the flow equation; statEQ == 0 is the stationary equation *)
statEQ[d_, \[Alpha]_][r_] := -d u[r] + (d - 2) r u'[r] - 
  v[d] intL[d, 1, \[Alpha]][u'[r] + 2 r  u''[r]]
Clear[rStart, rEnd, bc, psol, m2, alpha];
rStart = 10^-6; rEnd = 2; alpha = 1; dim = 3;
(* dependent initial conditions at rStart; 1/(24 Pi^2) is v[3]/3 *)
bc = {u[rStart] == -(1/(24 \[Pi]^2)) intL[dim, 1, alpha][m2], 
   u'[rStart] == m2};
(* parametric solver with m2 as the shooting parameter *)
psol = ParametricNDSolveValue[{statEQ[dim, alpha][\[Rho]] == 0, bc}, 
   u, {\[Rho], rStart, rEnd}, {m2}, 
   Method -> {"EquationSimplification" -> "Residual", 
     "ParametricCaching" -> None, "ParametricSensitivity" -> None}];

(I can't impose the initial conditions directly at $x=0$, so I chose a small rStart.) Finding a good enough $m^2$ by dichotomy:

mIni = -0.2648;
fsol = psol[mIni]; // AbsoluteTiming
(*Plot[Evaluate[fsol[rho]],{rho,rStart,rEnd},PlotRange\[Rule]All]*)
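
To make the comparison concrete, here is a minimal sketch of what I look at: solve for a few trial values of $m^2$ around the current guess and see which one survives to larger $x$ (the $\pm 0.001$ offsets around mIni are only illustrative):

(* compare nearby trial values of m2; the offsets are only for illustration *)
Quiet@Table[
  With[{f = psol[m]},
   Plot[f[rho], {rho, rStart, rEnd}, PlotRange -> All, PlotLabel -> m]],
  {m, mIni + {-0.001, 0, 0.001}}]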

Finally, obtaining the eigenvalues:
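
(The linearization relies on differentiating $L_{d}^{n}$ with respect to its argument under the integral sign, which gives $ \partial_f L_{d}^{n}(f) = -\,n\, L_{d}^{n+1}(f) $; this identity is what the Derivative[1]@intL rule below encodes.)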

(Derivative[1]@intL[d_, n_, \[Alpha]_])[
  func_] := (Derivative[1]@intL[d, n, \[Alpha]])[
   func] = -n intL[d, n + 1, \[Alpha]][func]
linearizedDE = 
  Coefficient[
    Series[statEQ[dim, alpha][\[Rho]] /. {u[\[Rho]_] :> 
        u[\[Rho]] + \[Epsilon] \[Delta]u[\[Rho]], (Derivative[n_]@
           u)[\[Rho]_] :> (Derivative[n]@
            u)[\[Rho]] + \[Epsilon] (Derivative[
              n]@\[Delta]u)[\[Rho]]}, {\[Epsilon], 0, 
      1}], \[Epsilon]] /. u -> fsol;
linEnd = 1;
{om, nu} = 
   NDEigenvalues[
    linearizedDE, \[Delta]u[\[Rho]], {\[Rho], rStart, linEnd}, 2, 
    Method -> {"SpatialDiscretization" -> {"FiniteElement", 
        "MeshOptions" -> {"MaxCellMeasure" -> (linEnd - rStart)/
           10^3 }}}]; // AbsoluteTiming 
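
For reference, displaying the two eigenvalues:

{om, nu}
(* per the comments below, with mIni tuned well enough these approach
   roughly -1.524 and 0.648, up to ordering *)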
dzsoga
  • This is not a PDE, it is an integro-differential equation. Your problem is not stated clearly. What is the domain for the solution of your equation? Is it {t,0,Infinity}, {x,0,2}? What are you trying to minimize with respect to m? – Alex Trounev Mar 05 '21 at 13:22
  • In principle the domain is {t,0,Infinity}, {x,0,Infinity}. By fine-tuning the parameter m^2 we can increase the range {x,0,something(m^2)} on which a solution exists where the function u does not diverge. If I set a high rEnd, NDSolve signals that the system has become stiff. In that sense you do not minimize anything with respect to m^2; you need to fine-tune it. – dzsoga Mar 05 '21 at 13:40
  • What do you expect at x->Infinity? Is it zero or some finite value? – Alex Trounev Mar 05 '21 at 13:45
  • That's the problem: I don't know how u should behave in the limit x -> Infinity. I do know, however, that u=0 at some value x_0, and this x_0 increases as m^2 is tuned. In fact I have tried FindRoot[psol[m2][x_0], {m2, -0.1}] for a large value of x_0 to find a good enough m^2, but it did not work. – dzsoga Mar 05 '21 at 13:58
  • But intL[3, 1, 1][0] exists, therefore there is a finite limiting value at infinity, u=-v[3]/3 intL[3, 1, 1][0]=0.0102017. Then we can normalize the solution so that u->0 at infinity and use the two conditions u[0]=-0.0102017 and u'[rEnd]=0 with rEnd=10, for example, in the numerical computation. – Alex Trounev Mar 05 '21 at 14:15
  • Yes, it can definitely be done. – dzsoga Mar 05 '21 at 14:25
  • Sorry, I don't understand your solution. First, about rEnd=2: where does this range come from? Second, about linEnd = 1: why did you take this range smaller than rEnd? And a final question about mIni = -0.2648: was this value chosen only because you are trying to reproduce the eigenvalues -1.524 and 0.648? It looks like some kind of game with unknown rules. Then how can we answer your question? :) – Alex Trounev Mar 06 '21 at 21:34
  • Sorry, I was not clear enough. So: with a given value of mIni, the solution fsol becomes stiff or singular at some x. Then mIni is tuned to push this x as high as possible. So I choose rEnd sufficiently high to be able to see this x value, and simply set linEnd slightly smaller than or equal to it. As this x becomes sufficiently large, the eigenvalues nu and om approach an asymptotic value. For example, if I tune x as large as possible with 4 digits of precision in mIni, then I expect om and nu to be accurate to 4 digits. In this sense the only unknown is mIni. – dzsoga Mar 06 '21 at 22:38
  • Did you notice that intL[3,1,1][f] also has a singular point at about f=-1 and then oscillates strongly for f<-1? – Alex Trounev Mar 06 '21 at 22:48
  • Yes, this alone causes the system to become stiff or singular. Thanks for pointing it out; this gave me the idea to stop NDSolve when the denominator of the integrand in intL becomes small. – dzsoga Mar 07 '21 at 09:42
  • Not the denominator, but the parameter func becomes close to -1. – Alex Trounev Mar 07 '21 at 10:13

2 Answers


There is an exact constant solution of the stationary problem, u = -v[3]/3 intL[3, 1, 1][0] = 0.0102017, so we can consider the eigensystem around this solution as follows.
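
(This follows from setting $u'=u''=0$ in the stationary equation, which leaves $ 0 = -d\,u - v(d) L_{d}^{1}(0) $, i.e. $ u = -\frac{v(d)}{d} L_{d}^{1}(0) $; for $d=3$, $a=1$ this evaluates to the number above.)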

Clear[intL, v];
v[d_] := (2^(d + 1) \[Pi]^(d/2) Gamma[d/2])^-1
intL[d_, n_, \[Alpha]_?NumericQ][func_?NumericQ] := 
 intL[d, n, \[Alpha]][
   func] = -2 \[Alpha] NIntegrate[
    y^(d/2 - 1) (E^-y (1 + y) (y + E^-y \[Alpha] + func)^-n), {y, 
     0, \[Infinity]}, 
    Method -> {Automatic, "SymbolicProcessing" -> False}]
statEQ[d_, \[Alpha]_][r_] := -d u[r] + (d - 2) r u'[r] - 
  v[d] intL[d, 1, \[Alpha]][u'[r] + 2 r u''[r]]
Clear[rStart, rEnd, bc, psol, m2, alpha];
rStart = 10^-6; rEnd = 10; alpha = 1; dim = 3;
bc = {u[rStart] == 0.010201726677643571, u'[rStart] == 0};
sol = NDSolveValue[{statEQ[dim, alpha][\[Rho]] == 0, bc}, 
   u, {\[Rho], rStart, rEnd}];
 (Derivative[1]@intL[d_, n_, \[Alpha]_])[
  func_] := (Derivative[1]@intL[d, n, \[Alpha]])[
   func] = -n intL[d, n + 1, \[Alpha]][func]
linearizedDE = 
  Coefficient[
    Series[statEQ[dim, alpha][\[Rho]] /. {u[\[Rho]_] :> 
        u[\[Rho]] + \[Epsilon] \[Delta]u[\[Rho]], (Derivative[n_]@
           u)[\[Rho]_] :> (Derivative[n]@
            u)[\[Rho]] + \[Epsilon] (Derivative[
              n]@\[Delta]u)[\[Rho]]}, {\[Epsilon], 0, 
      1}], \[Epsilon]] /. u -> sol;
linEnd = 10;
{om, nu} = 
   NDEigensystem[{linearizedDE, 
     DirichletCondition[\[Delta]u[\[Rho]] == 0, 
      True]}, \[Delta]u[\[Rho]], {\[Rho], rStart, linEnd}, 
    10]; // AbsoluteTiming

Note that we can take rEnd and linEnd arbitrarily large in this case. Visualization:

Table[Plot[Evaluate[ReIm[nu[[i]]]], {\[Rho], rStart, linEnd}, 
  PlotLabel -> om[[i]], PlotRange -> All], {i, 10}]

Figure 1

We can also compute a solution of bouncing type. Since intL[3,1,1][f] has a singular point at f=-1, we can suppose there is a critical point, which we detect with WhenEvent[] as follows:

v[d_] := (2^(d + 1) \[Pi]^(d/2) Gamma[d/2])^-1
intL[d_, n_, \[Alpha]_?NumericQ][func_?NumericQ] := 
 intL[d, n, \[Alpha]][
   func] = -2 \[Alpha] NIntegrate[
     y^(d/2 - 1) (E^-y (1 + y) (y + E^-y \[Alpha] + func)^-n), {y, 
      0, \[Infinity]}, 
     Method -> {Automatic, "SymbolicProcessing" -> False}] // Quiet
statEQ[d_, \[Alpha]_][r_] := -d u0[r] + (d - 2) r u0'[r] - 
  v[d] intL[d, 1, \[Alpha]][u0'[r] + 2 r u0''[r]]
rStart = 0; rEnd = 3.97; alpha = 1; d = 3; dim = 3;

r0 = 10^-6; m = -0.2648;
eq = {-d u0[r] + (d - 2) r u0'[r] - 
     v[d] intL[d, 1, 1][u0'[r] + 2 r u0''[r]] == 0};
nds = NDSolveValue[{eq, u0'[r0] == m, 
    -d u0[r0] - v[d] intL[d, 1, 1][u0'[r0]] == 0, 
    WhenEvent[u0[r] == 0, {rc = r, u0'[r] -> -u0'[r]}]}, u0, {r, r0, rEnd}, 
   Method -> {"EquationSimplification" -> "Residual"}, MaxSteps -> Infinity]

We plot this solution and f = u0'[r] + 2 r u0''[r] to show that the solution passes the point f=-1 and that the problem becomes stiff at r > rEnd:

{Plot[nds[r],{r, r0, rEnd}, PlotRange -> All],
Plot[nds'[r] + 2 r nds''[r], {r, r0, rEnd}, PlotRange -> All]} 

Figure 2

Now we have the position of the critical point, rc=1.03115, so we can compute the eigensystem on the range r0 <= r <= rc:

(Derivative[1]@intL[d_, n_, \[Alpha]_])[
  func_] := (Derivative[1]@intL[d, n, \[Alpha]])[
   func] = -n intL[d, n + 1, \[Alpha]][func]
linearizedDE = 
  Coefficient[
    Series[statEQ[dim, alpha][\[Rho]] /. {u0[\[Rho]_] :> 
        u0[\[Rho]] + \[Epsilon] \[Delta]u[\[Rho]], (Derivative[n_]@
           u0)[\[Rho]_] :> (Derivative[n]@
            u0)[\[Rho]] + \[Epsilon] (Derivative[
              n]@\[Delta]u)[\[Rho]]}, {\[Epsilon], 0, 
      1}], \[Epsilon]] /. u0 -> nds;
linEnd = rc; NDEigenvalues[linearizedDE, \[Delta]u[\[Rho]], {\[Rho], rStart, 
  linEnd}, 2, 
 Method -> {"SpatialDiscretization" -> {"FiniteElement", 
     "MeshOptions" -> {"MaxCellMeasure" -> (linEnd - rStart)/10^2}}}]

Finally we get the eigenvalues {0.636452, -1.52897}, and the question is how we can optimize the initial value m. The answer depends on the physics of this problem; for instance, we can make an assumption about the minimum of the potential at a critical point.

Alex Trounev
  • It does not seem correct to give u'[rStart]=0 as an initial condition. The function sol in your answer is a constant function (which is indeed a stationary solution, but I'm looking for the non-trivial one), and therefore the eigenvalues are not correct either. The resulting u function has to be something like fsol in my question, and the first two eigenvalues om and nu there are also the correct ones (or very close to them). – dzsoga Mar 05 '21 at 18:33
  • @dzsoga What do you mean by nontrivial? If the solution is unique, then there is only one solution, u=-v[3]/3 intL[3, 1, 1][0]=0.0102017, that does not diverge at x->Infinity. If the solution is not unique, then there are many solutions unrestricted at x->Infinity, and your problem is not well defined. Hence you need to define the problem so that the solution is unique. Your own solution diverges at x->Infinity. – Alex Trounev Mar 05 '21 at 18:48
  • The function u here is a thermodynamic potential, and the t variable is the reduced temperature. You are right, the PDE for it has multiple stationary solutions; in fact it has two. The solutions are distinguished by their eigenvalues (critical exponents). The nontrivial u solution corresponds to a second-order phase transition. Please observe how my fsol changes as mIni is changed. – dzsoga Mar 05 '21 at 20:04
  • @dzsoga How did you derive this equation? – Alex Trounev Mar 05 '21 at 20:31
  • It is the Wetterich equation for a Z_2-symmetric scalar model (which happens to be in the Ising universality class) in the Local Potential Approximation of the effective action. – dzsoga Mar 05 '21 at 20:39
  • @dzsoga Please, see update to my answer. – Alex Trounev Mar 06 '21 at 18:10
  • Thank you for your update. It is not a matter of accuracy, however. The first two eigenvalues are known to be -1.524 and 0.648 (as can be seen from om and nu when my code is run). An equivalent rephrasing of the BC in my question is something like u'[potmin]=0, with u normalized so that u[potmin]=0. In that case one has to fine-tune the position potmin rather than m^2. I will have more computer time from Tuesday; I will also add an answer then. – dzsoga Mar 06 '21 at 19:34
  • @dzsoga How do you know that the two eigenvalues should be -1.524 and 0.648? Is it from some paper? – Alex Trounev Mar 06 '21 at 19:42
  • Yes. The negative one is -1/nu, where nu is the critical exponent of the correlation length, and the second one is the first subleading correction to the scaling of the correlation length. This is a leading-order prediction. These exponents, or the nontrivial fixed point, can also be obtained by another method: if you take u(x) to be a polynomial, you can decompose the PDE into a set of coupled ODEs for the coefficients of that polynomial. E.g. I used this in my question: https://mathematica.stackexchange.com/q/229458/67601 – dzsoga Mar 06 '21 at 20:49

My idea is to use the "Domain" property of the InterpolatingFunction obtained from NDSolve. Here I set a fairly large rEnd; its value is arbitrary, but it has to be sufficiently large.

(* relies on statEQ, bc and the parameter symbol m2 as defined in the question *)
rootEq[\[Alpha]_, mass2_?NumericQ] := 
  Block[{paramSol, rStart, rEnd, alpha, dim, fsol, delta},
   delta = 10^-6;
   rStart = delta; rEnd = 10; alpha = \[Alpha]; dim = 3;
   paramSol = 
    ParametricNDSolveValue[{statEQ[dim, alpha][\[Rho]] == 0, bc}, 
     u, {\[Rho], rStart, rEnd}, {m2},
     Method -> {"EquationSimplification" -> "Residual", 
       "ParametricCaching" -> None, "ParametricSensitivity" -> None}];
   fsol = Quiet@paramSol[mass2];
   (* return the right end of the interval on which the solution exists *)
   Return[fsol["Domain"][[1, 2]]];
   ];
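
A quick usage check (the trial value is the mIni from the question):

rootEq[1, -0.2648]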

One can then visualize the endpoint r*, where the solution of the differential equation becomes stiff:

dat = ParallelTable[{mass,rootEq[1,mass]}, {mass, -0.200, -0.300, -0.005}]; 
ListPlot[dat]

Figure: r* as a function of m^2

The x axis shows the parameter $m^2$, while the y axis shows the corresponding $r^*$. As you can see, this is quite a narrow resonance curve, and $r^* \to \infty$ as the corresponding $m^2_*$ is approached.

Using the function rootEq, one can then simply use FindRoot, for example:

bignumber=10;
FindRoot[rootEq[1, mass2] == bignumber, {mass2, mIni} ]

The idea is that a more precise $m^2$ corresponds to a larger $r^*$. This method, with the current precision settings (machine precision everywhere), breaks down at around $r^*=1.8$ (corresponding to $m^2 = -0.264812$). As the WorkingPrecision, PrecisionGoal, and AccuracyGoal are increased, we can reach higher $r^*$ with this FindRoot approach. But now I have come across another problem: the eigenvalues I am looking for (obtained from NDEigenvalues) are more sensitive to the MaxCellMeasure option than to the precision of $m^2$ beyond the 4th digit... :D
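
Regarding that last point, a minimal sketch of a mesh-convergence check (assuming linearizedDE, rStart and linEnd from the question are still defined) is to recompute the two eigenvalues for a few MaxCellMeasure settings and compare:

(* the cell counts below are illustrative *)
Table[{cells, 
   NDEigenvalues[linearizedDE, \[Delta]u[\[Rho]], {\[Rho], rStart, linEnd}, 2, 
    Method -> {"SpatialDiscretization" -> {"FiniteElement", 
        "MeshOptions" -> {"MaxCellMeasure" -> (linEnd - rStart)/cells}}}]}, 
  {cells, {10^2, 10^3, 10^4}}]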

dzsoga