In a question about improving the performance of a TensorContract of a TensorProduct, user jose suggested replacing
TensorContract[TensorProduct[A, B], {{2, 5}, {4, 6}}]
with
Activate @ TensorContract[Inactive[TensorProduct][A, B], {{2, 5}, {4, 6}}]
where, for a minimal example:
n = 10;
A = RandomReal[1, {n, n, n, n}];
B = RandomReal[1, {n, n}];
Using this example, the version with Inactive/Activate is about 30 times faster (according to RepeatedTiming) and uses about 200 times less memory (according to MaxMemoryUsed).
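For reference, a minimal benchmark sketch of the comparison (timings and memory figures will vary by machine and version):

```mathematica
n = 10;
A = RandomReal[1, {n, n, n, n}];
B = RandomReal[1, {n, n}];

(* RepeatedTiming returns {time, result} *)
{t1, direct} =
  RepeatedTiming[TensorContract[TensorProduct[A, B], {{2, 5}, {4, 6}}]];
{t2, lazy} =
  RepeatedTiming[
   Activate@TensorContract[Inactive[TensorProduct][A, B], {{2, 5}, {4, 6}}]];

t1/t2  (* speedup factor of the Inactive/Activate version *)

(* memory comparison *)
MaxMemoryUsed[TensorContract[TensorProduct[A, B], {{2, 5}, {4, 6}}]]
MaxMemoryUsed[
 Activate@TensorContract[Inactive[TensorProduct][A, B], {{2, 5}, {4, 6}}]]
```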
Obviously Mathematica uses some special algorithm to contract the two tensors without actually constructing the whole tensor product.
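Concretely, the contraction pairs {{2, 5}, {4, 6}} sum slot 2 of the product (A's second index) against slot 5 (B's first index) and slot 4 against slot 6, so the result is result[[i, k]] = Σ_{j,l} A[[i, j, k, l]] B[[j, l]], which never requires the rank-6 product explicitly. A sketch checking this on a tiny case (comparison up to rounding, since summation order may differ):

```mathematica
n = 3;
A = RandomReal[1, {n, n, n, n}];
B = RandomReal[1, {n, n}];

byHand = Table[
   Sum[A[[i, j, k, l]] B[[j, l]], {j, n}, {l, n}], {i, n}, {k, n}];

(* maximum elementwise deviation; should be on the order of machine epsilon *)
Max@Abs[byHand -
   Activate@TensorContract[Inactive[TensorProduct][A, B], {{2, 5}, {4, 6}}]]
```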
Now my questions are:
When does using Activate and Inactivate improve performance as drastically as above?
How could I have known that using Activate and Inactivate lets Mathematica use more powerful tools?
I'm looking for examples, heuristics or hidden documentation.
Activate@Length@{Inactive[Pause][5]} – Kuba Sep 08 '16 at 09:56
Activate@Part[Inactive[RandomReal][{0, 1}, 10^9], 1 ;; 10], where I would've expected only the first 10 elements to be generated. – AndreasP Sep 08 '16 at 10:03
Part[Inactive[RandomReal][{0, 1}, 10^9], 1 ;; 10] doesn't make sense, while Length@{Inactive[Pause][5]} does. – Kuba Sep 08 '16 at 10:15
I would have expected Part[Inactive[RandomReal][{0, 1}, 10^9], 1 ;; 10] to create the "promise of a list" and then just take the first 10 elements and actually evaluate them, similar to lazy behaviour in e.g. Haskell. If you have a moment, I would appreciate a short answer with a few examples if you can think of any. – AndreasP Sep 09 '16 at 07:39
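To illustrate the distinction drawn in these comments: Inactive only wraps a head to keep it from evaluating, producing an ordinary inert expression rather than a lazy data structure. Structural operations like Length work on that expression, but Part cannot index into a list that was never built. A sketch:

```mathematica
(* Length evaluates immediately on the 1-element list {Inactive[Pause][5]};
   Pause never runs, so this returns 1 without any delay. *)
Activate@Length@{Inactive[Pause][5]}

(* An inactive RandomReal call is just an unevaluated expression ... *)
expr = Inactive[RandomReal][{0, 1}, 10^9];
Head[expr]  (* Inactive[RandomReal] *)

(* ... so there is nothing for Part to pick elements from;
   Activate[expr] would generate the full 10^9-element list. *)
```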