
I'm trying to test parallel matrix multiplication with the following commands:

$ConfiguredKernels
CloseKernels[];
LaunchKernels[1]
m1 = RandomReal[{-10, 10}, {2000, 2000}];
mykernel = First[Kernels[]];
startT1 = AbsoluteTime[];
ParallelEvaluate[a1 = m1.m1, mykernel];
runtime1 = AbsoluteTime[] - startT1;
CloseKernels[];
LaunchKernels[2]
mykernels = Kernels[];
startT2 = AbsoluteTime[];
ParallelEvaluate[a2 = m1.m1, mykernels];
runtime2 = AbsoluteTime[] - startT2;
Print[runtime1, " ", runtime2]

The output is:

{<<2 local kernels>>}
{KernelObject[97,local]}
{KernelObject[98,local],KernelObject[99,local]}
1.1577824 1.1675535

I expected the first number to be about twice the second, but I have run this script many times and the two timings are always about the same. Where is my mistake?

r1d1
  • I would say that Dot gets parallelized quite well by the Intel MKL. So there is no point in trying high-level optimization, at least not with only 2 cores... – Henrik Schumacher Dec 03 '17 at 19:53
  • Read the documentation of ParallelEvaluate carefully. It evaluates the same thing on each kernel. It won't speed up anything. It isn't meant to. You are confusing it with Parallelize, but that won't speed up matrix multiplication either. Mathematica's parallel tools use the distributed memory paradigm, and transferring the data would take longer than the multiplication. However, as Henrik said, Dot is already parallelized internally, so there's no need for you to try to do this. – Szabolcs Dec 03 '17 at 20:24
  • But how can I force matrix multiplication to run on one core, then on two cores, and compare the times? And similarly for Inverse[]? – r1d1 Dec 03 '17 at 20:45
  • To restrict Mathematica to use only a single core, see https://mathematica.stackexchange.com/a/31401/12 – Szabolcs Dec 04 '17 at 10:09
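Following the comments, a minimal sketch of the single-core vs. multi-core comparison the asker wants, done inside one kernel by restricting the internal (MKL) threads via SystemOptions rather than by launching parallel kernels. The "ParallelOptions" sub-option names below are taken from the approach in Szabolcs's linked answer; the exact defaults may differ by Mathematica version:

```mathematica
m = RandomReal[{-10, 10}, {2000, 2000}];

(* default: Dot uses all available cores internally *)
tAll = First@AbsoluteTiming[m.m;];

(* restrict the internal threading to a single core *)
SetSystemOptions[
  "ParallelOptions" -> {"MKLThreadNumber" -> 1,
                        "ParallelThreadNumber" -> 1}];
tOne = First@AbsoluteTiming[m.m;];

(* restore multi-threading for the rest of the session *)
SetSystemOptions[
  "ParallelOptions" -> {"MKLThreadNumber" -> $ProcessorCount,
                        "ParallelThreadNumber" -> $ProcessorCount}];

Print[tOne, " ", tAll]
```

With this approach tOne should be noticeably larger than tAll on a multi-core machine, which is the comparison the ParallelEvaluate script could not produce: ParallelEvaluate runs the same full multiplication on each subkernel, so adding kernels only duplicates work instead of splitting it.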

0 Answers