The most common pattern for computing a table of results is:
Table[function[p], {p, parameters}]
(regardless of how it is implemented; it could just as well be a Map)
The problem with this is that if the calculation is interrupted before it's finished, the partial results will be lost.
We can do this in a safely interruptible way like so:
results = {};
Do[AppendTo[results, {p, function[p]}], {p, parameters}]
If this calculation is interrupted before it finishes, the intermediate results are still preserved. We can easily restart the calculation later, restricted to only those parameter values for which function[] hasn't been run yet.
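For example (a minimal sketch, assuming results holds {p, value} pairs as above):

done = results[[All, 1]]; (* parameter values that have already been computed *)
Do[AppendTo[results, {p, function[p]}], {p, Complement[parameters, done]}]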
Question: What is the best way to achieve this when running calculations in parallel?
Assume that function[] is expensive to calculate and that the calculation time may differ between parameter values. The parallel jobs must be submitted in a way that makes the best use of the CPUs. The result collection must not be shared between the parallel kernels, as it may be a very large variable (i.e. I don't want as many copies of it in memory as there are kernels).
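To illustrate the kind of scheme I have in mind (just a sketch, and I don't know whether it is the best approach), the jobs could be queued with ParallelSubmit and collected one at a time with WaitNext, so that the results list exists only on the main kernel:

results = {};
(* queue all jobs; they are handed to parallel kernels as kernels become idle *)
queue = ParallelSubmit[{#, function[#]}] & /@ parameters;
(* collect finished results one by one, on the main kernel only *)
While[queue =!= {},
 {res, finished, queue} = WaitNext[queue];
 AppendTo[results, res]
]

If this loop is aborted, the results collected so far remain in results; the still-queued jobs may need to be cleaned up with AbortKernels[].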
Motivation: I need this because I want to be able to make my calculations time constrained. I want to run the function for as many values as possible during the night. In the morning I want to stop it and see what I got, and decide whether to continue or not.
Notes:
I'm sure people will mention that AppendTo is inefficient and best avoided in a loop. I think this is not an issue here (considering that the calculations run on the subkernels and function[] is expensive); it was just the simplest way to illustrate the problem. There are other ways to collect results, e.g. building a linked list and flattening it out later (see the sketch below). Sow/Reap are not applicable here because they don't allow the calculation to be interrupted without losing everything sown so far.
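For completeness, a minimal sketch of the linked-list variant (ll is an arbitrary inert symbol used as the link head):

results = ll[];
Do[results = ll[results, {p, function[p]}], {p, parameters}]
(* flatten the nested ll links back into an ordinary list of {p, value} pairs *)
List @@ Flatten[results, Infinity, ll]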
About the long running time: The most expensive part of the calculations I'm running is in C++ and called through LibraryLink, but it still takes a very long time to finish.

Comments:

… function[] for each value of p; I can always correct the order later if necessary (but in my practical problem it probably won't be). – Szabolcs Jan 20 '12 at 08:49

ParallelSow will always run Sow on the master kernel if it is set as a shared function. This is not obvious from the documentation of SetSharedFunction. Do shared functions always get evaluated on the master kernel, even when called from a parallel kernel? – Szabolcs Jan 20 '12 at 17:28

Does SetSharedVariable act the same way? I objected to the other answer here because I thought that after doing SetSharedVariable[var], each kernel would have a separate copy of var (meaning $KernelCount + 1 copies in memory). Is this really the case? If not, does var get transferred to a parallel kernel at every access, then transferred back (meaning temporary duplication of var)? It's important to understand this to be able to optimize memory usage and performance. – Szabolcs Jan 20 '12 at 23:58

Aren't Reap and ParallelSow redundant in the last sample? I think Catch has only caught the output of ParallelTable here. – xzczd Oct 25 '13 at 02:49

… CheckAbort … thanks for your explanation! – xzczd Oct 26 '13 at 05:14