9

Suppose I have a very complicated function f[x], and I want to integrate both f[x] and g[f[x]]+h[x]. That is:

NIntegrate[{f[x], g[f[x]]+h[x]}, {x,0,1}]

I suppose Mathematica will calculate f[x] and g[f[x]]+h[x] separately, thus calculating f[x] twice. How can I speed up the calculation by telling Mathematica to calculate f[x] only once?

A concrete example suggested by the comments:

ClearAll["Global`*"]

f[x_, y_, z_] := Exp[Sin[x]] + Cos[y + z];

NIntegrate[{f[x, y, z], Sqrt[f[x, y, z]] + x}, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
PrecisionGoal -> 7] // Timing

{8.85938, {1429.54, 6081.95 + 59.0571 I}}

g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];

NIntegrate[{g[x, y, z], Sqrt[g[x, y, z]] + x}, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
PrecisionGoal -> 7] // Timing

{8.90625, {1429.54, 6081.95 + 59.0571 I}}

In this example the memoization does not seem to speed up the calculation...

an offer can't refuse
  • You should start with the tutorial on memoization, then move on to Leonid's more extensive compendium, which includes links to many related questions. If you find that these methods do not solve your problem, you can edit your question to describe the specific issue. – Mr.Wizard Jul 14 '16 at 06:22
  • Seeing your update triggers a vague memory: if I am remembering right, each expression g[x, y, z] and Sqrt[g[x, y, z]] + x will be independently sampled, and unless the chosen meshes are nearly identical, which is unlikely, memoization will not help. If I am further remembering right, this essentially means you cannot really improve the code, as forcing regular sampling would defeat the efficiency of adaptive sampling. – Mr.Wizard Jul 14 '16 at 06:38
  • You can write a new rule for NIntegrate that uses the function evaluations of f only once. A concrete example of using f's values only once can be seen in this package for Lebesgue integration, explained here. – Anton Antonov Jul 14 '16 at 13:05
  • "{#, g[#]+h[x]}&[f[x]]"? – Eric Towers Jul 14 '16 at 15:43
  • @buzhidao Having in mind your concrete application, are you interested in 1D integration only or you also consider higher dimensions? Are you interested in infinite ranges? – Anton Antonov Jul 16 '16 at 14:19
  • @AntonAntonov In my application, I really want an infinite range. However, I found that assuming an infinite range does not always give the right answer. For example, in a question you previously answered, kInfinity is set by hand; because we are integrating a Gaussian-like integral, this assumption is fine as long as kInfinity is larger than a few times \Delta k. However, if I let kInfinity equal the real Infinity, I found it hard for Mathematica to get the right value. Can we overcome this issue? I.e., if I don't know a priori that the main contribution of this integral (.....) – an offer can't refuse Jul 19 '16 at 13:57
  • can I still get the answer of that integral? The question I mentioned is here: http://mathematica.stackexchange.com/questions/120620/how-to-increase-the-evaluation-speed-of-this-numerical-integral (ps: if you think this comment is worth opening a new question, please let me know) – an offer can't refuse Jul 19 '16 at 13:58
  • @buzhidao My answer does not work for infinite ranges in its current form. I will try to extend it to work with infinite ranges. As for your concern about not getting the right answer when going to infinity, I think it is a good idea to ask another question. – Anton Antonov Jul 20 '16 at 10:48

4 Answers

7

This answer shows how to define a new NIntegrate rule that evaluates f in the list of two integrands {f[x], g[f[x]]+h[x]} only once per sampling point. The answer can also easily be modified to handle the more general "NIntegrate over a list of functions".

The definition of the NIntegrate rule LessEvaluationsRule given below is intended to be didactic and conceptually simple. The design (its options and use of the plug-in mechanism) can be made more robust.

How to use

In order to compute the integrals in the standard NIntegrate call:

NIntegrate[{Sin[x], Sqrt[Sin[x]] + x^2}, {x, 0, 1}]
(* {0.459698, 0.976311} *)

we specify the function of one argument Sin and the function of two arguments (Sqrt[#2] + #1^2 &):

NIntegrate[1, {x, 0, 1}, 
 Method -> {"GlobalAdaptive", "SingularityHandler" -> None, 
   Method -> {LessEvaluationsRule, "BaseFunction" -> Sin, 
     "DependantFunction" -> (Sqrt[#2] + #1^2 &)}}, PrecisionGoal -> 6]
(* {0.459698, 0.976311} *)

Note that the rule definition below does not allow the (correct) application of NIntegrate's singularity handling, so it is disabled with "SingularityHandler" -> None.

Comparison

Another consequence of specifying the integrated functions as option values is that we cannot use EvaluationMonitor. We can see, though, that the timing is roughly halved. (Or just examine IRuleEstimate, defined below.)

In[214]:= RepeatedTiming[
 NIntegrate[{Sin[x], Sqrt[Sin[x]] + x^2}, {x, 0, 1}, PrecisionGoal -> 4]]

Out[214]= {0.0062, {0.459698, 0.976315}}


In[215]:= RepeatedTiming[
 NIntegrate[1, {x, 0, 1}, 
  Method -> {"GlobalAdaptive", "SingularityHandler" -> None, 
    Method -> {LessEvaluationsRule, "BaseFunction" -> Sin, 
      "DependantFunction" -> (Sqrt[#2] + #1^2 &)}}, PrecisionGoal -> 4]
 ]

Out[215]= {0.0023, {0.459698, 0.976315}}
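
To confirm that the base function really is evaluated only once per sampling point, one can wrap a counter around it. This is a minimal sketch (the names sinCounted and n are introduced here for illustration), to be run after loading the rule definition from the next section:

n = 0;
sinCounted[x_?NumericQ] := (n++; Sin[x]);
NIntegrate[1, {x, 0, 1}, 
 Method -> {"GlobalAdaptive", "SingularityHandler" -> None, 
   Method -> {LessEvaluationsRule, "BaseFunction" -> sinCounted, 
     "DependantFunction" -> (Sqrt[#2] + #1^2 &)}}, PrecisionGoal -> 4];
n (* total evaluations of the base function: one per sampling point *)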

Definition of LessEvaluationsRule

Clear[LessEvaluationsRule];
Options[LessEvaluationsRule] = {"Method" -> "GaussKronrodRule", 
   "BaseFunction" -> Sin, "DependantFunction" -> (Sqrt[#2] + #1 &)};

LessEvaluationsRuleProperties = Part[Options[LessEvaluationsRule], All, 1];

(* Initialization: read the rule's options and set up the abscissas
   and weights of the underlying integration rule *)
LessEvaluationsRule /: 
  NIntegrate`InitializeIntegrationRule[LessEvaluationsRule, nfs_, ranges_, 
   ruleOpts_, allOpts_] :=
  Module[{t, methodSpec, baseFunc, depFunc, absc, weights, errweights},

   t = NIntegrate`GetMethodOptionValues[LessEvaluationsRule, 
     LessEvaluationsRuleProperties, ruleOpts];
   If[t === $Failed, Return[$Failed]];
   {methodSpec, baseFunc, depFunc} = t;

   (* Initialize the underlying rule (by default "GaussKronrodRule")
      and extract its abscissas, weights, and error weights *)
   t = NIntegrate`MOptionValue[methodSpec, nfs, ranges, allOpts];
   If[t === $Failed, Return[$Failed]];
   {absc, weights, errweights} = t[[1]];

   LessEvaluationsRule[{absc, weights, errweights}, baseFunc, depFunc]
   ];

Clear[IRuleEstimate]
IRuleEstimate[f_, g_, {a_, b_}, {absc_, weights_, errweights_}] :=
  Block[{fvals, integral1, error1, integral2, error2, xs},
   xs = Rescale[absc, {0, 1}, {a, b}];
   (* Evaluate the base function f only once per sampling point *)
   fvals = f /@ xs;
   {integral1, error1} = (b - a) {fvals.weights, fvals.errweights};
   (* Reuse the stored values fvals for the dependent function g[x, f[x]] *)
   fvals = g @@@ Transpose[{xs, fvals}];
   {integral2, error2} = (b - a) {fvals.weights, fvals.errweights};
   {integral1, integral2, Abs[error1], Abs[error2]}
   ];

LessEvaluationsRule[{absc_, weights_, errweights_}, baseFunc_, depFunc_][
   "ApproximateIntegral"[region_]] := 
  Block[{a, b, integral1, integral2, error1, error2},

   {a, b} = region["Boundaries"][[1]];

   (* Integral and error estimates over the current subregion *)
   {integral1, integral2, error1, error2} = 
    IRuleEstimate[baseFunc, depFunc, {a, b}, {absc, weights, errweights}];

   (* Return {integral estimates, error estimate, bisection axis};
      assuming 1D integrals *)
   {{integral1, integral2}, Max[error1, error2], 1}
   ];
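
IRuleEstimate can also be tried on its own. The following sketch assumes the internal function NIntegrate`GaussKronrodRuleData (used in the NIntegrate plug-in tutorials) for obtaining abscissas and weights on the unit interval:

(* Abscissas, weights, and error weights of a Gauss-Kronrod rule on [0, 1] *)
{absc, weights, errweights} = NIntegrate`GaussKronrodRuleData[5, MachinePrecision];
IRuleEstimate[Sin, (Sqrt[#2] + #1^2 &), {0, 1}, {absc, weights, errweights}]
(* returns {integral1, integral2, Abs[error1], Abs[error2]}; the first two
   entries should be close to {0.459698, 0.976311} from the example above *)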
Anton Antonov
6

First, NIntegrate[f1[x], {x, xmin, xmax}] usually proceeds by constructing an Experimental`NumericalFunction from the expression for f1[x]. This will circumvent an attempt to memoize f1 in the OP's manner, f1[x_] := f1[x] =.... One can prevent this by memoizing the function with ?NumericQ checks via f2[x_?NumericQ] := f2[x] = .... One thing to consider is that the NumericalFunction constructed in each case is different, and if one of them is more efficient, I would bet it's the first one; see below for evidence of this. Anton Antonov alluded to this in a comment.
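Here is a minimal sketch contrasting the two forms with a cheap placeholder integrand (f1 and f2 are names introduced here):

ClearAll[f1, f2];
f1[x_] := f1[x] = Sin[x];            (* matches symbolic x: the symbolic preprocessing
                                        replaces f1[x] by Sin[x] up front *)
f2[x_?NumericQ] := f2[x] = Sin[x];   (* stays unevaluated for symbolic x, so every
                                        numeric sample goes through f2 and is stored *)
NIntegrate[f1[x], {x, 0, 1}]; Length@DownValues[f1]  (* stays small *)
NIntegrate[f2[x], {x, 0, 1}]; Length@DownValues[f2]  (* grows with the number of samples *)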

Second, as Mr.Wizard has pointed out, NIntegrate on a list of integrands just calls NIntegrate on each integrand. To expand further, the reason there is no speed-up is that the first integral samples only 116721 points, whereas the second samples 1063425. The time it takes to do the second integral dwarfs the first. Memoizing the first integral isn't going to help much, even if all the saved values are reused. (The reason for the difference is that the Sqrt creates some singularities, as well as complex values, which make the second integral more difficult to compute.)

Code for testing the sampling:

ClearAll[g];
g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
{pts1, pts2} = 
   Last@Last@Reap@
       NIntegrate[#, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
         Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
         PrecisionGoal -> 7, 
         EvaluationMonitor :> Sow[{x, y, z}]] & /@
     {g[x, y, z], Sqrt[g[x, y, z]] + x}; // AbsoluteTiming
(*  {10.162, Null}  *)

ClearAll[g];
g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
ptsall = Last@Last@Reap@
      NIntegrate[{Sqrt[g[x, y, z]] + x, g[x, y, z]},
       {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
       Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
       PrecisionGoal -> 7, 
       EvaluationMonitor :> Sow[{x, y, z}]]; // AbsoluteTiming
(*  {10.1304, Null}  *)

Length /@ {pts1, pts2}
Total@%
(*
  {116721, 1063425}
  1180146
*)

Length@ptsall    (* same as the preceding Total *)
(*  1180146  *)

Code for estimating how much time could be saved by memoizing ~120K sample values, if it worked as expected (about half a second):

ClearAll[g];
g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
Table[g[x, y, z], {x, 0., 10, 1./8}, {y, 0., 10, 1./4}, {z, 0., 10, 1./4}]
  // Flatten // Length // AbsoluteTiming
Table[Sqrt[g[x, y, z]] + x, {x, 0., 10, 1./8}, {y, 0., 10, 1./4}, {z, 0., 10, 1./4}]
  // Flatten // Length // AbsoluteTiming
(*
  {0.81421, 136161}
  {0.243886, 136161}
*)

Code to compare the Experimental`NumericalFunction of the symbolic integrand g and the ?NumericQ version:

ClearAll[g];
g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
NIntegrate[g[x, y, z], {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
   Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
   PrecisionGoal -> 7, 
   IntegrationMonitor :> ((numfnSymbolic = First[#]["NumericalFunction"]) &)];

ClearAll[g];
g[x_?NumericQ, y_?NumericQ, z_?NumericQ] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
NIntegrate[g[x, y, z], {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
   Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
   PrecisionGoal -> 7, 
   IntegrationMonitor :> ((numfnNumeric = First[#]["NumericalFunction"]) &)];

Table[numfnSymbolic[x, y, z], {x, 0., 10, 1./8}, {y, 0., 10, 1./8}, {z, 0., 10, 1./8}]
  // Flatten // Length // AbsoluteTiming
Table[numfnNumeric[x, y, z], {x, 0., 10, 1./8}, {y, 0., 10, 1./8}, {z, 0., 10, 1./8}]
  // Flatten // Length // AbsoluteTiming
(*
  {0.747187, 531441}
  {3.70936, 531441}
*)

Note this suggests that the difference between the OP's g and a truly memoized g with ?NumericQ should be around 6 sec. Well, it is:

ClearAll[g];
g[x_?NumericQ, y_?NumericQ, z_?NumericQ] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
NIntegrate[{g[x, y, z], Sqrt[g[x, y, z]] + x}, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
   Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
   PrecisionGoal -> 7]; // AbsoluteTiming
(*  {16.2901, Null}  *)

One caveat: In the Table[] comparison, I tested just one numerical function (from one integration subregion). I cannot say authoritatively that all numerical functions are the same. (Help, anyone?)
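
One way to probe the caveat is to collect every numerical function the monitor reports and count the distinct ones. A sketch (the Union-based comparison assumes that identical NumericalFunction objects compare as the same):

ClearAll[g];
g[x_, y_, z_] := g[x, y, z] = Exp[Sin[x]] + Cos[y + z];
res = Reap[
   NIntegrate[g[x, y, z], {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
    Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
    PrecisionGoal -> 7, 
    IntegrationMonitor :> Function[integrators, 
      Scan[Sow[#["NumericalFunction"]] &, integrators]]]];
Length@Union[res[[2, 1]]]
(* 1 would indicate that all subregions share a single numerical function *)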

Michael E2
3

I got around to actually evaluating your code, and I realized that g is not remembering its values; DownValues[g] only has a length of three. The "solution" is to restrict the function to numeric values, per The difference between "SymbolicProcessing" -> 0 and restricting the function definition to numeric values only, but doing that actually makes the integration much slower; the memoization simply has too much overhead:

SetAttributes[numArgsQ, HoldFirst]
numArgsQ[_[___?NumericQ]] := True

mem : h[x_, y_, z_]?numArgsQ := mem = Exp[Sin[x]] + Cos[y + z];

NIntegrate[{h[x, y, z], Sqrt[h[x, y, z]] + x}, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
  Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
  PrecisionGoal -> 7] // Timing
{9.12606, {1429.54, 6081.95 + 59.0571 I}}

(My timing for f or g is only ~ 4.83 seconds.)

Extensive memoization has taken place as indicated by:

DownValues[h] // Length
1108835
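
For readers unfamiliar with the mem idiom, here is a minimal annotated version (k is a hypothetical two-argument function; numArgsQ is as defined above):

mem : k[x_, y_]?numArgsQ := mem = x + y;
(* the pattern name mem captures the whole left-hand side k[x, y], so the
   assignment mem = ... stores the computed value as a new DownValue *)
k[1., 2.]        (* 3.; computed and memoized on first call *)
DownValues[k]    (* now includes the stored rule k[1., 2.] = 3. *)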

For my use of mem and numArgsQ please see:

Mr.Wizard
-1

Testing my comment does indicate that one can cut the number of evaluations of f[] in half easily.

f[x_, y_, z_] := Module[{},
    totCalls++;
    Exp[Sin[x]] + Cos[y + z]
]

totCalls = 0; 
NIntegrate[
    {f[x, y, z], Sqrt[f[x, y, z]] + x}, 
    {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
    Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
    PrecisionGoal -> 7] // RepeatedTiming
totCalls

(* {7.23, {1429.54, 6081.95 + 59.0571 I}} *)
(* 16 *)

totCalls = 0; 
NIntegrate[
    {#, Sqrt[#] + x} &[f[x, y, z]], 
    {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
    Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
    PrecisionGoal -> 7] // RepeatedTiming
totCalls

(* {7.281, {1429.54, 6081.95 + 59.0571 I}} *)
(* 8 *)

This tells me one very important thing: Your slow numerical evaluation is not caused by calling f[] often. Instrumenting f[] a little more turns up some very odd behaviour.

f[x_, y_, z_] := Module[{},
    totCalls++;
    Print[{x, y, z, Exp[Sin[x]] + Cos[y + z]} // RepeatedTiming];
    Exp[Sin[x]] + Cos[y + z]
]

totCalls = 0; 
NIntegrate[
    {f[x, y, z], Sqrt[f[x, y, z]] + x}, 
    {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, 
    Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}, 
    PrecisionGoal -> 7] // RepeatedTiming
totCalls

(*  {1.9*10^-6 , {10, 10, 10, E^Sin[10]+Cos[20]}}  *)
(*  {1.35*10^-6, {x , y , z , E^Sin[x]+Cos[y+z]}}  *)
(*  ...  essentially repeat those two lines seven more times  ...  *)
(*  {7.34, {1429.54, 6081.95 + 59.0571 I}}  *)
(*  16  *)

So, adding a bunch of overhead to calls to f[] doesn't much change the total calculation time. Also, the function only ever seems to get evaluated at {x,y,z} = {10,10,10}, which is very odd.

Eric Towers
  • @J.M. : First thing that worked? Also, wouldn't have reduced typing. – Eric Towers Jul 14 '16 at 16:18
  • This is not a good answer. "NIntegrate first localizes the values of all variables, then evaluates f with the variables being symbolic, and then repeatedly evaluates the result numerically." Meaning looking at totCalls is misleading, which can be seen using Reap and the option specification EvaluationMonitor :> Sow[{x, y, z}]. – Anton Antonov Jul 14 '16 at 16:36
  • @AntonAntonov : Except that your quote is clearly false. f[] is clearly evaluated with both symbolic and numeric inputs. f[] is clearly evaluated repeatedly on the same inputs. And my claim, that f[] would be called half as often using the syntax from my comment, is clearly true. – Eric Towers Jul 14 '16 at 16:50
  • This statement is wrong: "Testing my comment does indicate that one can cut the number of evaluations of f[] in half easily." You can verify that by using totCalls = 0; res = Reap@ NIntegrate[{#, Sqrt[#] + x} &[f[x, y, z]], {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, PrecisionGoal -> 3, EvaluationMonitor :> Sow[{x, y, z}]]; {totCalls, Length[res[[2, 1]]]}. Or just look at the computation timings -- they are the same. – Anton Antonov Jul 14 '16 at 20:47
  • Just to clarify: RepeatedTiming repeats the command several times. One can try foo = 0; (foo++; NIntegrate[..]) // RepeatedTiming and check foo to see how many times (4). – Michael E2 Jul 15 '16 at 17:44
  • "...the function only ever seems to get evaluated at {x,y,z} = {10,10,10}...." Yes, how could it estimate the integral from a single point? In fact, it evaluates the integrand over a million times in the Sqrt[..] case. Before integrating, NIntegrate evaluates the integrand twice, at {10,10,10} and symbolically to construct a numerical function using Exp[Sin[x]] + Cos[y + z], the expression returned by f[x, y, z]. Since f is used only in the preliminary stage, technically you could say the evaluation of f itself is cut in half, from 4 times to 2 times, but that is insignificant. – Michael E2 Jul 15 '16 at 17:55
  • @AntonAntonov : You seem to conflate calling f[] with calling some mangled shadow of f in the internals of NIntegrate[]. I don't. In the context of the Question, I don't have any control of how smartly or stupidly the internals of NIntegrate[] organize common subexpression elision, and neither do you, so focussing on it is ridiculous. – Eric Towers Jul 19 '16 at 22:46
  • @MichaelE2 : Your answer claims ~1 million evaluations of the mangled version of f[]. My answer gives <2 us time per evaluation. Well, that's less than a third of the time of the NIntegrate[] calls. So even reducing the calls to mangled f to zero will not substantially reduce the runtime. Sounds like something else needs to happen to make the OP happy. – Eric Towers Jul 19 '16 at 22:51