3

I think this question has never been fully answered. The problem has already been mentioned here, but let me give a MWE.

I will solve a given ODE for a wide range of initial conditions, and I want to do this in parallel. Moreover, every time the solution of my ODE equals, say, 0.8, I want to save the instant of time at which that happens:

myData = {}

initialConditions = Table[{i, j}, {i, 0., 10, 0.1}, {j, 0., 10, 0.1}];

myEvent = WhenEvent[x[t] == 0.8, AppendTo[myData, t]];

Map[NDSolve[{x''[t] == -x[t], x[0] == #[[1]], x'[0] == #[[2]], myEvent}, x, {t, 0, 10}] &, initialConditions];

This works and gives the intended list.

However, if I use ParallelMap instead of Map, it won't work and the output will be

myData={}

Obviously, in this MWE the ODE is quite simple and parallelization is not justified. However, in the case I'm working on, using Map over a long list of initial conditions takes about 10 minutes. I know the failure in the parallel version is probably due to the AppendTo, so what would be the fastest way of doing this?
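To illustrate what goes wrong (a sketch, not part of the original question): with ParallelMap, the AppendTo calls run on the subkernels, each of which holds its own private copy of myData, so the master kernel's copy is never touched. You can inspect the subkernel copies with ParallelEvaluate:

```mathematica
(* Sketch: show that each subkernel accumulates into its own myData. *)
myData = {};
ParallelMap[
  NDSolve[{x''[t] == -x[t], x[0] == #[[1]], x'[0] == #[[2]],
     WhenEvent[x[t] == 0.8, AppendTo[myData, t]]},
   x, {t, 0, 10}] &, {{1., 0.}, {2., 0.}}];
myData                    (* still {} on the master kernel *)
ParallelEvaluate[myData]  (* the event times live on the subkernels *)
```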

AJHC
    Probably you have to run DistributeDefinitions[myData] first. But in general, using Append/AppendTo is a bad idea, and in particular it is awfully bad to use them with Parallel constructs, as it will totally defeat the purpose of parallelization: Append can only be executed sequentially. Better use Sow and Reap or kernel-local constructs and gather the results only at the end. – Henrik Schumacher Jan 20 '20 at 15:31

1 Answer

3

Here is a working serial version of your problem:

AbsoluteTiming[
  myDataSerial = {};
  initialConditions = 
   Flatten[#, 1] &@
    Table[{i, j}, {i, 0., 3(*10*), 0.1}, {j, 0., 3(*10*), 0.1}];
  myEvent = WhenEvent[x[t] == 0.8, AppendTo[myDataSerial, t]];
  Map[NDSolve[{x''[t] == -x[t], {x[0], x'[0]} == #, myEvent}, 
     x, {t, 0, 10}] &, initialConditions];
  ][[1]]

A parallelized version is (see the documentation for more details: https://reference.wolfram.com/language/ParallelTools/tutorial/ParallelEvaluation.html):

AbsoluteTiming[
  initialConditions = 
   Flatten[#, 1] &@
    Table[{i, j}, {i, 0., 3(*10*), 0.1}, {j, 0., 3(*10*), 0.1}];
  f[ic_] := 
   Reap[NDSolve[{x''[t] == -x[t], Thread[{x[0], x'[0]} == ic], 
       WhenEvent[x[t] == 0.8, Sow@t]}, x, {t, 0, 10}]][[2]];
  myDataParallel = Flatten@ParallelMap[f, initialConditions]
  ][[1]]

Timing information on my machine:

serial: 1.66 s
parallel: 0.63 s

You can check the correctness of the result with:

Sort@myDataSerial == Sort@myDataParallel

Another option is to use linked lists as described here: Reap, Sow with Parallelize: bad performance, why?

AbsoluteTiming[
  myDataParallelOpt2 = {};
  initialConditions = 
   Flatten[#, 1] &@
    Table[{i, j}, {i, 0., 3(*10*), 0.1}, {j, 0., 3(*10*), 0.1}];
  ParallelEvaluate[myDataParallelOpt2 = {}];
  ParallelDo[
   NDSolve[{x''[t] == -x[t], {x[0], x'[0]} == ic, 
      WhenEvent[x[t] == 0.8, 
       myDataParallelOpt2 = {myDataParallelOpt2, t}]}, 
     x, {t, 0, 10}];, {ic, initialConditions}];
  myDataParallelOpt2 = 
   Join @@ ParallelEvaluate[Flatten@myDataParallelOpt2];
  ][[1]]

which also takes about 0.63 s.
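For completeness, a third option (not benchmarked here; a sketch only) is SetSharedVariable, which keeps the variable on the master kernel and synchronizes it across subkernels, so the original AppendTo works unchanged under ParallelMap. It is the smallest change to the question's code, but every event then costs a round trip to the master kernel, so it is typically slower than the Sow/Reap or linked-list variants:

```mathematica
(* Sketch: shared-variable variant. SetSharedVariable makes
   myDataShared live on the master kernel, so each AppendTo from a
   subkernel is synchronized back (one round trip per event). *)
myDataShared = {};
SetSharedVariable[myDataShared];
ParallelMap[
  NDSolve[{x''[t] == -x[t], x[0] == #[[1]], x'[0] == #[[2]],
     WhenEvent[x[t] == 0.8, AppendTo[myDataShared, t]]},
   x, {t, 0, 10}] &, initialConditions];
```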

Oscillon