Say I want to efficiently evaluate $\sum_{kl}A_{ikjl}B_{kl}$, where $A$ and $B$ are numerical tensors. This has been discussed before, but without a focus on efficiency. A straightforward way, as mentioned there, would be
TensorContract[TensorProduct[A, B], {{2, 5}, {4, 6}}]
but this is extremely inefficient in both memory and time, since the full outer product is built explicitly before being contracted.
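For concreteness, here is a minimal sketch (sample data and names of my own choosing, not essential to the question) that generates random tensors and times the direct approach; a small n is used because the intermediate TensorProduct is an n^6 array:

n = 15;
SeedRandom[1];
A = RandomReal[1, {n, n, n, n}];
B = RandomReal[1, {n, n}];
AbsoluteTiming[
 direct = TensorContract[TensorProduct[A, B], {{2, 5}, {4, 6}}];]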
I came up with
TensorProdContract =
  Function[{A, B, dims},
   Transpose[#, RotateRight@Range@ArrayDepth[#]] &@
      Flatten[A, List@Transpose[dims][[1, All]]] .
     Flatten[B, List@Transpose[dims][[2, All]]]];
which flattens the contracted dimensions and rearranges the arrays so that the fast Dot function can be used. It is called as
TensorProdContract[A, B, {{2, 1}, {4, 2}}]
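Using the sample data from the sketch above, the result can be checked against the direct method and the function timed at the sizes quoted below (again just an illustrative sketch):

Max@Abs[TensorProdContract[A, B, {{2, 1}, {4, 2}}] - direct] (* should be zero up to rounding *)

n = 50;
A = RandomReal[1, {n, n, n, n}];
B = RandomReal[1, {n, n}];
AbsoluteTiming[TensorProdContract[A, B, {{2, 1}, {4, 2}}];]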
But even so, with all dimensions equal to 50 this takes 0.15 seconds on my computer, whereas the tprod library for Matlab (essentially compiled code) does the same thing about 10 times faster (called as tprod(A, [1 -1 2 -2], B, [-1 -2])). For size 100 it is 3.5 seconds versus 0.15 seconds. And since this should work for arrays of arbitrary rank, the Compile approach cannot really be used, as far as I know.
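For a single fixed index pattern a Compile-based version is of course possible; the sketch below (my own, with a made-up name contractIKJL) handles exactly $\sum_{kl}A_{ikjl}B_{kl}$, and also illustrates the problem: the loop structure is hard-wired, so every new combination of ranks and contracted slots would need its own compiled function.

contractIKJL = Compile[{{a, _Real, 4}, {b, _Real, 2}},
   Table[
    Sum[a[[i, k, j, l]] b[[k, l]], {k, Length[b]}, {l, Length[b[[1]]]}],
    {i, Length[a]}, {j, Length[a[[1, 1]]]}],
   RuntimeOptions -> "Speed"];
(* contractIKJL[A, B] gives the same n x n result;
   add CompilationTarget -> "C" if a C compiler is available *)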
Any suggestions?
Comments:
– jhrmnn (Mar 17 '15 at 21:11): … Flatten/Transpose of the 4-D array.
– jhrmnn (Mar 18 '15 at 12:16): … TensorContract/TensorProduct approach.
– Mr.Wizard (Mar 19 '15 at 10:56): … Transpose method? It appears to be quite fast. Please include code to generate sample data and show that the available methods are still slow.