
I'm using ParallelTable[] to calculate a function over a range of my parameters, ($\omega,\ell$). This seems to be working well (in terms of speed increase), except for some strange precision issues.
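
Roughly, the call looks like this (just a schematic sketch; mainFunc and the parameter ranges here are placeholders, not my real code):

results = ParallelTable[mainFunc[\[Omega], l], {\[Omega], 1/10, 1, 1/10}, {l, 0, 2}];  (* mainFunc and the ranges are placeholders *)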

I set $MinPrecision = 40 at the start of the notebook, and define the working precision, accuracy goal and precision goal used for some computations later in the notebook as

 wp=$MinPrecision;
 ac=$MinPrecision-8;
 pg=wp/2;

The rest of my code basically defines some helper functions before going on to the main function (the one I feed into ParallelTable). This main function uses the helper functions along with NDSolve and NIntegrate to do some computations, and in these I pass WorkingPrecision -> wp, AccuracyGoal -> ac, PrecisionGoal -> pg.
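
For instance, the NIntegrate calls follow a pattern roughly like this (a schematic sketch; the integrand and range are placeholders):

NIntegrate[f[r], {r, 30, rinf}, WorkingPrecision -> wp, AccuracyGoal -> ac, PrecisionGoal -> pg]  (* f and the integration range are placeholders *)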

I have used DistributeDefinitions on all my variables, helper functions, and the main function, and even on $MinPrecision. All my initial conditions for NDSolve have N[..., wp] wrapped around them, but when I use ParallelTable[...] I still get errors:
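
The distribution step looks roughly like this (a sketch with placeholder names for the helpers; only wp, ac, pg and $MinPrecision are my actual symbols):

DistributeDefinitions[wp, ac, pg, $MinPrecision, helperFunc, mainFunc]  (* helperFunc and mainFunc are placeholders *)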

 Precision::precsm. Requested precision 39.153977328439204` is smaller than $MinPrecision. 

I don't get any such error when I simply use Do[...] or Table[...] on my main function, and indeed the results differ at the 33rd decimal place, which is the decimal place to which the number above is precise. I have no idea where this number comes from, unfortunately; it doesn't look like any of my inputs or outputs.

I just don't understand how this issue fails to crop up with the non-parallelized forms, yet does crop up with the parallelized one.

Minimal Example

This is much simpler than my code, but I think it still captures the problem:

Definitions:

M = 1;
$MinPrecision = 40;
wp = $MinPrecision;
ac = $MinPrecision - 8;
pg = wp/2;
rinf = 15000;

The main function to be parallelized:

dGenBessE[\[Omega]_?NumericQ, l_?IntegerQ] :=
 Block[{\[CapitalPhi]out, init, dinit},
  init = 0.00006630728036817007679447778124486601253323 +
     6.913102762021489976135937610105907096265*10^-6 I;
  dinit = -6.958226432502243329110910813935678705519*10^-7 +
     6.631148430876236565520382557577187147081*10^-6 I;
  \[CapitalPhi]out[\[Omega], l] = \[CapitalPhi] /.
    Block[{$MaxExtraPrecision = 100},
      NDSolve[{\[CapitalPhi]''[r] + (2 (r - M))/(r (r - 2 M)) \[CapitalPhi]'[r] +
           ((\[Omega]^2 r^2)/(r - 2 M)^2 - (l (l + 1))/(r (r - 2 M))) \[CapitalPhi][r] == 0,
         \[CapitalPhi][rinf] == N[init, wp],
         \[CapitalPhi]'[rinf] == N[dinit, wp]},
        \[CapitalPhi], {r, 30, 40},
        WorkingPrecision -> wp, AccuracyGoal -> ac,
        PrecisionGoal -> pg, MaxSteps -> \[Infinity]]][[1]];
  Print["For \[Omega]=", \[Omega], " and l=", l, ": "];
  Print["Kernel ID: ", $KernelID];
  Print["Precision of init: ", Precision[init]];
  Print["Precision of dinit: ", Precision[dinit]];
  Print["\[CapitalPhi]out at 35=", N[\[CapitalPhi]out[\[Omega], l][35], wp]];
  Print["Precision of \[CapitalPhi]out: ", Precision[\[CapitalPhi]out[\[Omega], l][35]]];
  ]

Do the parallelization prerequisites:

LaunchKernels[4]
DistributeDefinitions[M, rinf, wp, ac, pg, $MinPrecision,dGenBessE];

Attempt to run it with Parallelize for two different values:

Parallelize[{dGenBessE[1/10, 0], dGenBessE[1/10, 1]}] // AbsoluteTiming

Result

For me this leads to a

Precision::precsm: Requested precision 38.95475956978393` is smaller than   $MinPrecision. Using $MinPrecision instead.
  • Can you give a small example? I cannot reproduce your behavior. – halirutan Oct 26 '12 at 09:02
  • OK, will try and strip down my code to essentials that reproduce then repost soon. – fpghost Oct 26 '12 at 09:07
  • 1
    you can run ParallelEvaluate[<your variable>] to see if they were distributed OK. – Ajasja Oct 26 '12 at 09:52
  • @halirutan I've now put up a minimal example which reproduces the behaviour. @Ajasja, thanks, that's a useful feature. All my variables seem OK across the four kernels, although if I run ParallelEvaluate[$MinPrecision] I get {0,0,0,0}, but that isn't really a variable so... If I run ParallelEvaluate[dGenBess] I get {dGenBess,dGenBess,dGenBess,dGenBess} rather than function definitions; not sure if that is expected or not. – fpghost Oct 26 '12 at 10:05
  • @fpghost ParallelEvaluate[Definition@dGenBess] is more useful (watch the spelling, sometimes you have dGenBess and sometimes dGenBessE) – Ajasja Oct 26 '12 at 11:19

1 Answer


I think your problem arises because the value of $MinPrecision is not distributed correctly (if I remember correctly, none of the variables in the System` context are distributed automatically).

So we have to do this by hand:

ParallelEvaluate[$MinPrecision = 40]

Parallelize[{dGenBessE[1/10, 0], dGenBessE[1/10, 1]}] // AbsoluteTiming

should now work without problems.
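
To double-check that the value actually reached the workers, you can query it on each kernel:

ParallelEvaluate[$MinPrecision]

which should now return {40, 40, 40, 40} rather than the {0, 0, 0, 0} you saw before.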

Also, personally I would write the last example as

ParallelMap[dGenBessE[1/10, #]&,{0,1}] // AbsoluteTiming

or

Parallelize@Scan[dGenBessE[1/10, #] &, {0, 1}]

depending on whether you need to return values or not.

  • Excellent, that did the trick, thanks. I'll keep System variables distribution in mind in future. – fpghost Oct 26 '12 at 11:50
  • I don't suppose you know a clever way to get the AbsoluteTiming of each element of ParallelTable? – fpghost Oct 26 '12 at 12:50
  • @fpghost What about ParallelMap[AbsoluteTiming[dGenBessE[1/10, #]] &, {0, 1}]? – Ajasja Oct 26 '12 at 14:10
  • Strange, that works when I replace my dGenBessE by a simple function like g[x_]:=x^2 and then do, say, ParallelMap[AbsoluteTiming[g[#]] &, {0, 1}], but not for my actual function, even though in the same notebook ParallelTable[dGenBess[...]] executes fine. I should note that my Table will be iterating over two variables, and the 1/10 fixed above will be looped over too, so I don't know if Map will be good for getting the timing at each point in the 2D grid. Although that isn't the problem with your timing method, as I copied what you put verbatim, keeping the 1/10 fixed. – fpghost Oct 26 '12 at 14:48
  • This appears to have actually worked. Congratulations. – Carl Sep 05 '20 at 05:05