I'm currently running a calculation built from nested For loops.
I step through a few counting variables, compute a value for each combination of them, and add that value into an array, so that the array accumulates the contributions from every combination.
It is something like this simplified code:
For[h = 1, h < hmax, h++, count1 = (200/hmax)*h;
 For[j = 1, j < jmax, j++, count2 = (100/jmax)*j;
  value = (userDefinedFunction1[count1 - count2])^2*userDefinedFunction2[count2];
  (* index is computed from the counters in the real code *)
  IArray[[index]] += value
  ]]
The issue is that as my calculation has become more precise, I have found that I need many more counting variables. My new code has six nested For loops instead of the two shown in this example, and it is taking far too long to run.
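As an aside, deeply nested loops can be flattened into a single iteration over all counter combinations, which also makes the later parallelization step simpler. A minimal sketch, using the two-counter example above (`hmax`, `jmax`, and the user-defined functions stand in for whatever the real code uses):

```
(* All counter combinations as a flat list of {h, j} tuples;       *)
(* for six counters, extend the list of Range expressions.         *)
counterTuples = Tuples[{Range[hmax - 1], Range[jmax - 1]}];

(* One function of a tuple replaces the nested loop bodies. *)
contribution[{h_, j_}] :=
  Module[{count1 = (200/hmax) h, count2 = (100/jmax) j},
   (userDefinedFunction1[count1 - count2])^2 userDefinedFunction2[count2]]
```

With the work expressed as one function over one flat list, it can be mapped (or parallel-mapped) over `counterTuples` in a single pass.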
I want to parallelize the calculation, since I have access to a machine with many (slow) CPU cores.
My first attempt will be to replace the For[] loops with ParallelDo[]. I have read that ParallelDo[] has some complications when you call functions inside it. I don't think I fully understood the complication, so I am asking whether there will be a problem if I proceed with replacing the For loops with ParallelDo[].
I think I could also reformulate the problem using ParallelTable[], but then the result will be in a different format and I will have to figure out how to extract the meaningful result from it. That extra work would not be ideal.
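For what it's worth, the ParallelTable reformulation may be less extra work than it appears. A minimal sketch, assuming each (h, j) pair contributes to a slot given by some function `indexOf[h, j]` and the array has length `imax` (both placeholders; the real code defines how the index is computed):

```
(* Each subkernel returns index -> value rules; nothing is mutated in parallel. *)
rules = Flatten @ ParallelTable[
    Module[{count1 = (200/hmax) h, count2 = (100/jmax) j},
     indexOf[h, j] ->
      (userDefinedFunction1[count1 - count2])^2 userDefinedFunction2[count2]],
    {h, 1, hmax - 1}, {j, 1, jmax - 1}];

(* Sum the contributions landing on the same index, then build the array. *)
IArray = Normal @ SparseArray[Normal @ GroupBy[rules, First -> Last, Total], {imax}];
```

Extracting the result is then a single grouping step on the main kernel, rather than shared mutable state inside the parallel loop.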
Can I use ParallelDo[] for this application? Are there problems? Are there better alternatives?
Thanks!
[…] ParallelDo. In the best case, your program will be much slower than what would have been possible. In the worst case, it will give you bad results (due to misunderstandings about how parallelization works) and you'll waste a lot of time debugging it before you realize that ParallelTable was the way to go from the get-go. – Szabolcs Apr 16 '19 at 17:03

[…] SetSharedVariable, but it comes with compromises. Every access will require a callback to the main kernel. If your "user defined functions" are very slow, then this might not be a big drawback. If not, then this will kill the performance. – Szabolcs Apr 16 '19 at 20:41

[…] index -> value pairs from the Table, then combining them at the very end. You could use GroupBy to collect values for identical indices, sum them up, then use a single call to ReplacePart (or better, use SparseArray). – Szabolcs Apr 16 '19 at 20:44
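To make the complication concrete: each parallel subkernel works on its own copy of IArray, so in-place updates inside ParallelDo are silently lost on the main kernel unless the variable is explicitly shared. A small illustrative sketch (not from the original post):

```
IArray = ConstantArray[0, 10];
ParallelDo[
 IArray[[i]] = i,  (* modifies only the subkernel's private copy *)
 {i, 10}];
IArray  (* still all zeros on the main kernel *)

(* SetSharedVariable[IArray] would restore correctness, but every     *)
(* update then calls back to the main kernel, serializing the work.   *)
```

This is why collecting index -> value pairs with ParallelTable and combining them afterwards tends to be both correct and fast.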