(Disclaimer: I realize that the addition of the matrices a and b below can be done very quickly just by typing a + b. The code below is only meant to illustrate behavior similar to what I observe in much more complicated code, too lengthy to reproduce here.)
Suppose I want to add two n × n matrices a and b, with
n = 50;
I can do it via a double loop in a single kernel; this takes about 0.02 seconds:
a = RandomReal[{0, 1}, {n, n}];
b = RandomReal[{0, 1}, {n, n}];
c1 = RandomReal[{0, 1}, {n, n}]; (* preallocate the result; the values are overwritten below *)
Do[
Do[
c1[[i, j]] = b[[i, j]] + a[[i, j]],
{j, 1, n}],
{i, 1, n}] // AbsoluteTiming
Out[363]= {0.0240014, Null}
I can try to parallelize the outer loop, using 4 kernels. This takes 25 seconds (ouch), presumably because every single assignment to the shared variable c2 forces the subkernel to communicate with the master kernel.
SetSharedVariable[c2];
c2 = RandomReal[{0, 1}, {n, n}];
ParallelDo[
Do[
c2[[i, j]] = b[[i, j]] + a[[i, j]],
{j, 1, n}],
{i, 1, n}] // AbsoluteTiming
Out[366]= {25.8984813, Null}
I can try to reduce communication to a minimum by building a temporary row vector on each kernel, so that the shared matrix c3 is written only once per row. Surprisingly, this still takes 0.3 seconds.
SetSharedVariable[c3];
UnsetShared[vtemp]; (* make sure vtemp is a local, per-kernel variable *)
c3 = RandomReal[{0, 1}, {n, n}];
ParallelDo[
vtemp = Table[0., {n}];
Do[
vtemp[[j]] = b[[i, j]] + a[[i, j]],
{j, 1, n}];
c3[[i]] = vtemp,
{i, 1, n}, Method -> "CoarsestGrained"] // AbsoluteTiming
Out[370]= {0.2940168, Null}
Again, my own code is far more complicated, but it shares with this example the need to distribute the task of updating pieces of a matrix among the kernels. Updating the shared global matrix invariably takes more time than executing the code on a single processor, even when the calculation on each processor is far more complex than in the example above. Are there techniques to mitigate this problem?
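One pattern that avoids the shared variable entirely, sketched here under the assumption that each row of the result can be computed independently, is to have the subkernels return complete rows and assemble the matrix on the master kernel, e.g. with ParallelTable (c4 is a name introduced just for this illustration):
DistributeDefinitions[a, b, n];
AbsoluteTiming[
 c4 = ParallelTable[
    Table[a[[i, j]] + b[[i, j]], {j, 1, n}], (* each subkernel builds a whole row locally *)
    {i, 1, n}, Method -> "CoarsestGrained"];
 ]
Because results are only sent back to the master kernel when a batch of rows is finished, there is no per-element communication; whether this helps in the real code depends on whether the update can be phrased as "compute a piece, then return it".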
…Set, or any other functions with side effects? Please see here: http://mathematica.stackexchange.com/a/1771/12 Since your problem is parallelizable, perhaps it can be formulated in a way that Mathematica can Parallelize automatically. Note that Parallelize will handle a lot more than Map, Do & Table. It can also deal with e.g. Inner, Outer, MapThread, etc. (E.g. this matrix addition is easily reformulated in a functional way using MapThread.) – Szabolcs Feb 16 '12 at 16:50

MapThread[Plus, {a, b}]; // AbsoluteTiming is much faster than Parallelize[MapThread[Plus, {a, b}]]; // AbsoluteTiming (regardless of the Method setting), so my advice from the previous comment is not very good ... – Szabolcs Feb 16 '12 at 16:57
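For reference, the comparison described in the second comment can be reproduced as follows (timings will of course depend on the machine and the number of kernels):
MapThread[Plus, {a, b}]; // AbsoluteTiming
Parallelize[MapThread[Plus, {a, b}]]; // AbsoluteTiming (* slower here: shipping a and b to the subkernels and collecting the result costs more than the addition itself *)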