9

I have an integrand that looks like this:

[image of the integrand]

The details of the computation are complicated, but I only know the integrand numerically (I use NDSolve to solve a second-order ODE). The integrand is not simply the solution of my ODE either; calling the two solutions of my ODE osc1[s] and osc2[s], the integrand looks schematically like exp(-is)[g(s) osc1[s] osc2*[C-s] + f(s) osc2[s] osc2*[C-s]]. The exp factor oscillates only very slowly over my integration range; it is really osc1 and osc2 that oscillate wildly, more and more so as a certain parameter they depend on gets larger.

More explicitly:

rstar[r_] := r + 2 M Log[r/(2 M) - 1];
M=1;
rinf=10000;
rH = 200001/100000;
r0 = 10;
wp=40;
ac=wp-8;
\[Lambda][l_] = l (l + 1);

eq[\[Omega]_, l_] := \[CapitalPhi]''[r] +
   (2 (r - M))/(r (r - 2 M)) \[CapitalPhi]'[r] +
   ((\[Omega]^2 r^2)/(r - 2 M)^2 - \[Lambda][l]/(r (r - 2 M))) \[CapitalPhi][r] == 0;
init=-0.0000894423075560122420468703835499 + 
0.0000447222944185058822813688948339 I;
dinit=-4.464175354293244250869336196691640386266791`30.*^-6 - 
8.950483248390306670770345406047835993931665`30.*^-6 I;

osc1 := \[CapitalPhi] /. 
Block[{$MaxExtraPrecision = 100}, 
NDSolve[{eq[1/10, 1], \[CapitalPhi][rinf] == 
init, \[CapitalPhi]'[rinf] == dinit}, \[CapitalPhi], {r, r0, 
rinf}, WorkingPrecision -> wp, AccuracyGoal -> ac, 
MaxSteps -> \[Infinity]]][[1]];

osc2 is obtained similarly. Note these are for non-problematic parameters, so it will run quite quickly and not be too badly behaved.

The problem I have is that I only know the integrand to maybe 6-12 decimal places (dp), depending on the parameters. That is with NDSolve run at a WorkingPrecision of 50-60 and an AccuracyGoal of 42-52, which takes around 2 hours. I want to integrate this with NIntegrate, but when my parameters are large (and the oscillation is very rapid) I usually only know the integrand at the 6 dp end of the scale, and NIntegrate then complains that it wants a greater WorkingPrecision than this (since the oscillation is also getting very large).

I can force it to do the integral by making WorkingPrecision higher, but I think this is cheating if I don't believe my integrand to better than 6 dp?

The only idea I've had so far is to try different rules. Are there any rules people would recommend for such oscillatory integrands? So far I've tried "LevinRule", "ClenshawCurtisRule" and "GaussKronrodRule", but none seems to compute it any quicker than the default. They all agree to a reasonable number of dp, so I have no idea whether I should just stick with the default or whether there is something better one could do with such an integrand. Speed is not a concern, just accuracy.
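For concreteness, this is roughly how I've been passing the rules (a sketch only: `integrand[s]` is a placeholder for my numerically known integrand, and the option values are illustrative):

```mathematica
(* illustrative only: integrand[s] stands in for the numerically known,
   highly oscillatory integrand built from the NDSolve solutions *)
NIntegrate[integrand[s], {s, 0, 40},
  Method -> {"GlobalAdaptive", Method -> "ClenshawCurtisRule",
    "SymbolicProcessing" -> 0},
  WorkingPrecision -> 20, PrecisionGoal -> 6, MaxRecursion -> 20]
```

("SymbolicProcessing" -> 0 just stops NIntegrate from trying to symbolically analyze a purely numerical integrand.)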

UPDATE

Let's say I managed to split my integral into a few different integrals. First give the definitions:

vbar[tau_?NumericQ] := (4 M) ((tau/tauh)^(1/3) + 1) *
   Exp[-(tau/tauh)^(1/3) + 1/2 (tau/tauh)^(2/3) - 1/3 (tau/tauh)];
ubar[tau_?NumericQ] := -(4 M) ((tau/tauh)^(1/3) - 1) *
   Exp[(tau/tauh)^(1/3) + 1/2 (tau/tauh)^(2/3) + 1/3 (tau/tauh)];
rtau[tau_?NumericQ] := (2 M) (tau/tauh)^(2/3);

in addition to those defined above. I think I can then give my integral as a sum of integrals whose integrands look like this:

Exp[-I s] (ubar[tauf - s])^(-I 4/10) Exp[+I 1/10 rstar[tauf - s]] osc1[rtau[tauf - s]]*

Here tauf is a constant. The first part is an amplitude; osc1 satisfies the linear ODE given above. I think this has Levin potential if I can work out how to feed "LevinRule" the second-order ODE above. (Here and in the above I fix the parameters the ODE depends on to (1/10, 1) to simplify giving the ICs, but I don't think that detracts from the main problem.) I would need to work out what the kernel is from the ODE above.
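As a first stab (untested, and assuming M, tauh, tauf and rtau are set as above), the kernel vector k(s) = {osc1[rtau[tauf - s]], osc1'[rtau[tauf - s]]} should satisfy k'(s) = A(s).k(s), with A built from the ODE coefficients plus the chain-rule factor from the substitution r = rtau[tauf - s]:

```mathematica
(* coefficients of \[CapitalPhi]'' + p[r] \[CapitalPhi]' + q[r] \[CapitalPhi] == 0
   for \[Omega] = 1/10, l = 1 (so \[Lambda][1] = 2) *)
p[r_] := (2 (r - M))/(r (r - 2 M));
q[r_] := ((1/10)^2 r^2)/(r - 2 M)^2 - 2/(r (r - 2 M));
(* chain rule: d/ds osc1[rtau[tauf - s]] == -rtau'[tauf - s] osc1'[rtau[tauf - s]] *)
A[s_] := -rtau'[tauf - s] {{0, 1},
    {-q[rtau[tauf - s]], -p[rtau[tauf - s]]}};
```

If this is right, the "LevinRule" options would be "Kernel" -> {osc1[rtau[tauf - s]], osc1'[rtau[tauf - s]]} and "DifferentialMatrix" -> A[s].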

fpghost
  • What about rewriting your differential equations to directly give the integral from NDSolve? – celtschk Oct 16 '12 at 17:34
  • Have you tried a Fourier on that ? – Dr. belisarius Oct 16 '12 at 17:34
  • Have you seen this video ? It talks about similar problem and hybrid numeric-symbolic methods to address it. – Vitaliy Kaurov Oct 16 '12 at 17:43
  • A common symbolic trick is to rewrite the integral using integration by parts. Even if it is still symbolically unsolvable, you may be able to symbolically solve the highly oscillating component. – Searke Oct 16 '12 at 18:41
  • celtschk: Not sure I can do that; the interpolating-function solution from NDSolve is just one component that goes into making my integrand, and the integration is in a different variable than the ODE. belisarius: how do I do that, is there a Method->"FourierRule" type of thing? Vitaliy: thanks, I will take a look. – fpghost Oct 16 '12 at 19:02
  • Searke: the highly oscillating part of my integrand isn't known symbolically, I only know it as the interpolating function obtained from NDSolve of my ODE. – fpghost Oct 16 '12 at 19:04
  • "I can force it to do the integral by making the WorkingPrecision higher, but I think this is cheating if I don't believe my integrand any higher than 6 dp?" <-- Definitely. You will need the integrand to higher precision. – Andrew Moylan Oct 17 '12 at 02:23
  • Does the oscillatory part of your integrand satisfy a linear ODE? – Andrew Moylan Oct 17 '12 at 02:24
  • No unfortunately not. – fpghost Oct 17 '12 at 18:37
  • splitting my integral ranges by hand seems to remove the slwcon errors; not sure if that just means each piece has a low error while the total still has just as much as if I did it in one go. – fpghost Oct 17 '12 at 19:49
  • @fpghost: Hint: If you want people to be notified of your answers to their comments, prepend @ to the user name of the user you reply to (as I did with yours in this comment, although in this case it's not strictly necessary because question/answer authors always get notified about comments on their post). – celtschk Oct 19 '12 at 07:55
  • @celtschk thanks! – fpghost Oct 19 '12 at 08:08

3 Answers

16

This is how you manually invoke "LevinRule" when you know part of the integrand is a rapidly oscillatory function satisfying a linear ODE:

First, a rapidly oscillatory function:

In[25]:= osc = 
 y /. NDSolve[{y''[x] - (x^2 - 3 x) y'[x] + 10000 y[x] == 0, 
     y[0] == 3, y'[0] == 1}, y, {x, 0, 5}] // First

Out[25]= InterpolatingFunction[{{0.,5.}},<>]

In[26]:= Plot[osc[x], {x, 0, 5}]

[plot of the oscillatory function]

The integrand is f[x]*osc[x]:

In[27]:= f[x_] := x^2

Regular NIntegrate (it can't detect that your purely numerical InterpolatingFunction has a special oscillatory form):

In[28]:= NIntegrate[f[x] osc[x], {x, 0, 5}] // Timing

Out[28]= {0.116796, -2.80375}

Manually invoke "LevinRule":

In[29]:= NIntegrate[f[x] osc[x], {x, 0, 5}, 
  Method -> {"LevinRule", "AdditiveTerm" -> 0, 
    "Amplitude" -> {f[x], 0}, "Kernel" -> {osc[x], osc'[x]}, 
    "DifferentialMatrix" -> {{0, 1}, {-10000, x^2 - 3 x}}}] // Timing

Out[29]= {0.032645, -2.80375}

Note that the first step, actually constructing an interpolation of the rapidly oscillatory function, is often the most costly in a scheme like this.

For more information about how "LevinRule" works in NIntegrate see this part of the documentation.

J. M.'s missing motivation
Andrew Moylan
  • Thanks. My integrand looks like Exp[-iEs] w[s], where it's actually not the Exp[..] that causes rapid oscillation, it's the w[s] bit. This w[s] doesn't directly solve an ODE but is constructed by taking the modulus of two particular solutions of an ODE (with some other complicated factors). It's these solutions of the ODE that are highly oscillatory. Given this, would it be possible for me to form the "DifferentialMatrix"? – fpghost Oct 17 '12 at 08:08
  • Using your example it's a bit like my integrand was Exp[-is] (g(s) osc1[C]osc1[C-s]+h(s) osc2[C]osc2[C-s]) , where C is some constant, g(s),h(s) are just functions of integration var, and osc1, osc2 are two distinct solutions to the ODE for different ICs. – fpghost Oct 17 '12 at 08:12
  • I'm finding the very strange behaviour that for a given integrand, if I stick with WorkingPrecision->5. Then the NIntegrate goes through with PrecisionGoal->3, but gives ::slwcon with PrecisionGoal->1. This is the opposite of what I would have expected, why could this happen? – fpghost Oct 17 '12 at 18:13
  • I don't think the oscillatory bit of my integrand satisfies a differential kernel, so I don't know what to put there, but if I try Method->{"LevinRule"} alone I see no improvement in timing and the ::slwcon errors don't go away. If I try Method->{"LevinRule","Kernel"->w[s]} where w[s] is the oscillatory bit, I get ::nonlev errors saying the integrand is not a Levin function. If I try Method->{"LevinRule","ExcludedForms"->Exp[-iEs]} then again no increase in speed and the ::slwcon errors are still present. Not sure what to do, as I can't really increase the number of good decimals of my integrand either, since this is limited by NDSolve – fpghost Oct 17 '12 at 18:21
  • If the oscillatory part of your integrand satisfies a nonlinear rather than linear ODE, then LevinRule can't be applied. – Andrew Moylan Oct 18 '12 at 05:37
  • LevinRule definitely won't be able to automatically detect the linear ODE of the oscillatory part of your integrand if it is solely in the form of an Interpolating function. – Andrew Moylan Oct 18 '12 at 05:37
  • "This w[s] doesn't directly solve an ODE but it's constructed by taking the modulus of two particular solutions of an ODE (with some other complicated factors)." It sounds like the w[s] doesn't satisfy a linear ODE then. However, you may be able to reformulate in terms of e.g. the modulus of an integral or two, rather than the integral of something with a modulus? Tricks like this (in which you keep everything analytic/smooth as long as possible) can be helpful to let you use standard numerical methods. – Andrew Moylan Oct 18 '12 at 05:40
  • OK. I if it really turns out to just be Exp(-is) xBlackOscillatoryBoxInterpolatingFunc[..]. Is there anything I can do at all? Is it still worth putting in Method->"LevinRule"; are there things like putting up the MaxRecursion etc worth a go? or other options in NIntegrate that may help.. – fpghost Oct 18 '12 at 07:30
  • Also I've come across the strange feature that if I actually give NIntegrate PrecisionGoal->3, WorkingPrecision->5, it doesn't complain sometimes, but give it the same integrand with Pg->3, Wp->10 and it says ::slwcon and ::ncvb; why would it do this? – fpghost Oct 18 '12 at 07:37
  • With sufficiently few digits of WorkingPrecision, roundoff error may lead to spurious appearance of convergence. Generally I would suggest WorkingPrecision at least 16 (i.e. around MachinePrecision) at 10 greater than PrecisionGoal. – Andrew Moylan Oct 18 '12 at 14:05
  • No, LevinRule will not help if it's really a black box. – Andrew Moylan Oct 18 '12 at 14:06
6

One might consider using the simple-minded strategy of splitting the known oscillatory part over its roots (or extrema), evaluating the integral over the intervals determined by the roots, and summing all those integrals to arrive at the actual integral you need.

Now, finding the roots of an oscillatory function that is only known through its differential equation is easily done, thanks to the "EventLocator" functionality built into NDSolve[]. Here's how to apply it to Andrew's example:

{osc, rts} = Reap[y /. First @ NDSolve[
                       {y''[x] - (x^2 - 3 x) y'[x] + 10^4 y[x] == 0, y[0] == 3, y'[0] == 1},
                       y, {x, 0, 5}, 
                       Method -> {"EventLocator", "Event" -> y[x], "EventAction" :> Sow[x],
                                  Method -> "StiffnessSwitching"}]];
rts = First[rts];

One might want to verify that all the roots within the integration interval were captured. Here's one way:

Plot[osc[x], {x, 0, 5}, Axes -> None, Frame -> True,
     Epilog -> {Directive[Red, AbsolutePointSize[2]], Point[{#, osc[#]} & /@ rts]}]

plot of oscillatory function and roots

Having done this, here's how to evaluate the integral $\int_0^5 x^2\mathtt{osc}[x]\;\mathrm dx$:

Total[NIntegrate[x^2 osc[x], {x, ##}] & @@@ Partition[Union[Flatten[{0, rts, 5}]], 2, 1],
      Method -> "CompensatedSummation"]
   -2.802321164674166

In the interest of making my post a lot less useless than it seems to be, here's how to adapt the approach given above when the function multiplying the osc[x] in fpghost's answer is also oscillatory (note the increased WorkingPrecision setting):

{osc, rts} = Reap[y /. First@NDSolve[{
                  y''[x] + (2 (x - 1))/(x (x - 2)) y'[x] + ((100 x^2)/(x - 2)^2 -
                  2/(x (x - 2))) y[x] == 0, y[5] == 3, y'[5] == 1},
                  y, {x, 5, 10},
                  Method -> {"EventLocator",
                             "Event" -> {Re[Exp[-I x] y[x]], Im[Exp[-I x] y[x]]},
                             "EventAction" :> Sow[x],
                             Method -> "StiffnessSwitching"}, WorkingPrecision -> 25]];

rts = Union[First[rts], SameTest -> (Chop[#1 - #2] == 0 &)];

Total[NIntegrate[Exp[-I x] osc[x], {x, ##}, WorkingPrecision -> 20] & @@@
      Partition[Union[Flatten[{5, rts, 10}]], 2, 1], Method -> "CompensatedSummation"]
   -0.048159751342842237133 + 0.045326948711103488692 I
J. M.'s missing motivation
  • Looks like a nice method, so by splitting between the roots then individually NIntegrating these regions and summing up at end we expect better accuracy for some reason? – fpghost Oct 17 '12 at 08:54
  • The idea is that since you're integrating over a region where the integrand is always nonnegative or always nonpositive, the component integrals are computed pretty accurately, and any subtractive cancellation will only manifest when you sum up these integrals. – J. M.'s missing motivation Oct 17 '12 at 09:11
  • I see, I really like the idea, I will keep it in mind for the future. Unfortunately for me I am not directly integrating the solution to my ODE (the integrand is more like Exp[-is] (g(s) osc1[C]osc1[C-s]+h(s) osc2[C]osc2[C-s]), where C is some constant, g(s), h(s) are just functions of the integration variable, and osc1, osc2 are two distinct solutions of the ODE for different ICs), so the roots of osc don't always correspond to roots of the ODE. Sigh. – fpghost Oct 17 '12 at 09:18
  • Then, you might be interested in this for finding the zeroes... (the Exp[-I s] is oscillatory in itself, so the zeroes of your integrand will certainly not correspond to the zeroes of the solution(s) of your DE). Could you maybe edit your question to mention your real problem? – J. M.'s missing motivation Oct 17 '12 at 09:28
  • I updated with the schematic. – fpghost Oct 17 '12 at 18:11
  • You still haven't mentioned the actual ODE, your $f(s)$ and $g(s)$, and typical values for $C$... – J. M.'s missing motivation Oct 17 '12 at 18:16
  • I really don't think it will help but I've now given osc1 in input form so you should be able to reproduce that. I will update some more now – fpghost Oct 17 '12 at 18:22
  • the f,g functions are pages and pages so I really couldn't put them up, but they are not problematic, I think the problem reduces to Exp(-is)W(s) where W(s) is rapidly oscillatory and doesn't satisfy any ODE and is known only to between 6-10dp. – fpghost Oct 17 '12 at 18:32
  • Then, as I said in a previous comment, you'll be interested in those routines for finding multiple zeroes... – J. M.'s missing motivation Oct 17 '12 at 18:35
  • I will certainly give that a try. I was more just wondering if there was NIntegrate method that could perhaps work with lower WorkingPrecision or if I could somehow do something like set a higher MaxRecursion, I don't know...but maybe not – fpghost Oct 17 '12 at 18:40
  • That'd be the brute-force route... you might get a result, but it likely won't be the best way to go about it. – J. M.'s missing motivation Oct 17 '12 at 18:49
  • I think my integrand really is like exp(-is)*BlackOscillatoryBox for all intents and purposes, though. Your roots method does sound great if that would work. I tried the Solve method from the post you linked to try to get the roots, as in Solve[integrand(s)==0 && 0<s<40], and it just put out "system cannot be solved with methods available to Solve". – fpghost Oct 17 '12 at 18:56
  • You've tried looking at the other answers that aren't Solve[] based? – J. M.'s missing motivation Oct 17 '12 at 18:57
  • Trying "FindAllCrossings" as we speak. Is there one you'd recommend for my problem? – fpghost Oct 17 '12 at 18:58
  • That can work. Unfortunately, I need to be off; I'll look into this again when I return. – J. M.'s missing motivation Oct 17 '12 at 19:00
  • FindAllCrossings really is a nice function. Unfortunately for me it only finds two of my roots (with WorkingPrecision->100) in an interval with around 10. Could be a problem with the fact it's an InterpolatingFunction I'm feeding it, perhaps. – fpghost Oct 17 '12 at 19:10
  • EDIT: looks like putting WorkingPrecision lower solves that of course – fpghost Oct 18 '12 at 07:55
1

Here is a very simple way of expressing my problem, along with the answer I found for this setup:

The ODE:

eq1 := y''[x] + (2 (x - 1))/(x (x - 2)) y'[x] +
   ((100 x^2)/(x - 2)^2 - 2/(x (x - 2))) y[x] == 0;

Solve it:

osc = y /. NDSolve[{eq1, y[5] == 3, y'[5] == 1}, y, {x, 5, 10}] // 
First;

Plot[osc[x], {x, 5, 10}]

[plot of the solution osc]

Give a Levin amplitude

f[x_] := Exp[-I x];

Compare against default methods

NIntegrate[f[x] osc[x], {x, 5, 10}] // Timing

{4.96831,-0.0481038+0.0453335 I}

NIntegrate[f[x] osc[x], {x, 5, 10},
  Method -> {"LevinRule", "AdditiveTerm" -> 0,
    "Amplitude" -> {f[x], 0}, "Kernel" -> {osc[x], osc'[x]},
    "DifferentialMatrix" -> {{0, 1},
      {-((100 x^2)/(x - 2)^2 - 2/(x (x - 2))), (-2 (x - 1))/(x (x - 2))}}}] // Timing

{133.84, -0.0481038 + 0.0453335 I}

So I believe that is the Levin implementation of near enough my real problem. It appears slower for this set of parameters, but one can see that on raising the '100' in the ODE to '1000', Levin comes into its own.
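To illustrate that last claim, here is a sketch of the faster-oscillating variant (the names eqFast and oscFast are my own, and timings will of course vary):

```mathematica
(* same comparison with the oscillation coefficient raised from 100 to 1000,
   where "LevinRule" should pull ahead of the default method *)
eqFast := y''[x] + (2 (x - 1))/(x (x - 2)) y'[x] +
   ((1000 x^2)/(x - 2)^2 - 2/(x (x - 2))) y[x] == 0;
oscFast = y /. First@NDSolve[{eqFast, y[5] == 3, y'[5] == 1},
    y, {x, 5, 10}, MaxSteps -> Infinity];
NIntegrate[Exp[-I x] oscFast[x], {x, 5, 10},
  Method -> {"LevinRule", "AdditiveTerm" -> 0,
    "Amplitude" -> {Exp[-I x], 0},
    "Kernel" -> {oscFast[x], oscFast'[x]},
    "DifferentialMatrix" -> {{0, 1},
      {-((1000 x^2)/(x - 2)^2 - 2/(x (x - 2))), (-2 (x - 1))/(x (x - 2))}}}] // Timing
```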

The problem I now have is that my 'amplitude' factor can also be highly oscillatory. Using the definitions in the 'Update' part of my OP I have

 f[x_, w_] := Exp[-I (-5 - x)] (ubar[x])^(-I 4 w) Exp[+I w rstar[x]]

as an amplitude where x is typically negative.
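One possible way out (my own untested sketch, not from the answers above): since the oscillatory phase Exp[I w rstar[x]] satisfies its own first-order linear ODE, e'[x] == I w rstar'[x] e[x], it can be folded into the Levin kernel by enlarging the differential matrix, leaving only the slowly varying factors in "Amplitude":

```mathematica
(* if k = {y, y'} satisfies k' == A.k for the ODE y'' + p y' + q y == 0,
   then K = Exp[I w rstar[x]] k satisfies K' == Abig.K *)
A[x_] := {{0, 1}, {-q[x], -p[x]}};
Abig[x_] := A[x] + I w rstar'[x] IdentityMatrix[2];
(* symbolic check: K' - Abig.K, after using the ODE for y''[x] *)
Simplify[D[Exp[I w rstar[x]] {y[x], y'[x]}, x] -
    Abig[x].(Exp[I w rstar[x]] {y[x], y'[x]}) /.
  y''[x] -> -p[x] y'[x] - q[x] y[x]]
```

The check should return {0, 0}, so one could try "Kernel" -> {Exp[I w rstar[x]] y[x], Exp[I w rstar[x]] y'[x]} with "DifferentialMatrix" -> Abig[x]. Here p, q, w and y are symbolic stand-ins for the actual ODE coefficients, frequency and solution.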

J. M.'s missing motivation
fpghost