I think this question has never been fully answered. The problem has already been mentioned here, but let me give a MWE.
I solve a given ODE for a wide range of initial conditions, and I want to make this parallel. Moreover, every time the solution of my ODE reaches, say, 0.8, I want to save the instant of time at which that happens:
myData = {}
initialConditions = Flatten[Table[{i, j}, {i, 0., 10, 0.1}, {j, 0., 10, 0.1}], 1];
myEvent = WhenEvent[x[t] == 0.8, AppendTo[myData, t]];
Map[NDSolve[{x''[t] == -x[t], x[0] == #[[1]], x'[0] == #[[2]], myEvent}, x, {t, 0, 10}] &, initialConditions];
This works and produces the intended list.
However, if I use ParallelMap instead of Map, it doesn't work: the output is
myData={}
Obviously, in this MWE the ODE is quite simple and parallelization is not justified. However, in the case I'm actually working on, using Map over a list of many initial conditions takes me about 10 minutes. I know the failure is probably due to the AppendTo, so what would be the fastest way of doing this?
Try `DistributeDefinitions[myData]` first. But in general, using `Append`/`AppendTo` is a bad idea, and in particular it is awfully bad to use them with `Parallel` constructs, as it will totally defeat the purpose of parallelization: `Append` can only be executed sequentially. Better use `Sow` and `Reap` or kernel-local constructs and gather the results only in the end. – Henrik Schumacher Jan 20 '20 at 15:31
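Following the comment above, a minimal sketch of the `Sow`/`Reap` pattern: instead of mutating a shared `myData`, each parallel evaluation collects its own event times locally with `Reap` and returns them, so no cross-kernel state is needed. The helper name `solveOne` is my own; recent versions of `ParallelMap` distribute its definition automatically, otherwise call `DistributeDefinitions[solveOne]` first.

    (* Reap the times Sown by WhenEvent inside each NDSolve call; *)
    (* Reap[...][[2]] is {} or a list of one list of sown values. *)
    solveOne[{x0_, v0_}] := Module[{sown},
      sown = Reap[
          NDSolve[{x''[t] == -x[t], x[0] == x0, x'[0] == v0,
            WhenEvent[x[t] == 0.8, Sow[t]]}, x, {t, 0, 10}]
        ][[2]];
      If[sown === {}, {}, First[sown]]
      ]

    initialConditions =
      Flatten[Table[{i, j}, {i, 0., 10, 0.1}, {j, 0., 10, 0.1}], 1];

    myData = Flatten[ParallelMap[solveOne, initialConditions]];

Note that `myData` assembled this way loses the association between each time and its initial condition; if you need that, return `{{x0, v0}, times}` from `solveOne` instead of flattening.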