I have a 10x5xn list, which I'll call data,
data = RandomInteger[{-500, 500}, {10, 5, n}];
and a 2x5 list
coeffs = {{0., -0.951057, -0.587785, 0.587785, 0.951057},
          {1., 0.309017, -0.809017, -0.809017, 0.309017}};
For each 5xn nestedList inside data, I want to map it using coeffs.nestedList, so that my overall code would look like
newData=Table[coeffs.nestedList,{nestedList,data}]
However, for large n, a huge difference appears between the time it takes to run the above line of code and the total time it takes to run its constituent dot products.
componentTime = Total@Table[First@RepeatedTiming[coeffs.nestedList], {nestedList, data}];
tableTime = First@RepeatedTiming[Table[coeffs.nestedList, {nestedList, data}]];
Consider the difference between componentTime and tableTime as n increases:
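A minimal sketch that tabulates both timings over increasing n (the specific n values here are illustrative assumptions, not from the post):

```
(* illustrative sweep; the chosen n values are assumptions *)
timings = Table[
   Module[{data = RandomInteger[{-500, 500}, {10, 5, n}]},
    {n,
     Total@Table[First@RepeatedTiming[coeffs . nestedList], {nestedList, data}],
     First@RepeatedTiming[Table[coeffs . nestedList, {nestedList, data}]]}],
   {n, {1000, 10000, 100000}}];
TableForm[timings,
 TableHeadings -> {None, {"n", "componentTime", "tableTime"}}]
```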
I would expect tableTime to be slightly longer than componentTime, but not by that much. Why is there such a large discrepancy between the summed time it takes for the components in Table to execute and the time it takes for the table itself to be constructed? Is there a way to minimize this discrepancy, or a way to formulate the mapping so that it avoids it altogether?
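One formulation that may sidestep the per-iteration overhead (a sketch, not from the original question) is to replace the Table with a single Dot on the full packed rank-3 array, transposing so that the length-5 axis is the one being contracted:

```
(* {10, 5, n} -> {10, n, 5}, contract with Transpose[coeffs] ({5, 2})
   to get {10, n, 2}, then swap the last two levels back to {10, 2, n} *)
newData2 = Transpose[Transpose[data, {1, 3, 2}] . Transpose[coeffs], {1, 3, 2}];

(* should agree with the Table version up to floating-point rounding *)
Max@Abs[newData2 - Table[coeffs . nestedList, {nestedList, data}]]
```

Keeping everything in one Dot call lets the kernel stay inside a single packed-array operation rather than re-dispatching (and potentially unpacking) on every Table iteration.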

RepeatedTiming outside of Table copies the data each time; does this mean that the extra time in tableTime is just an artifact of RepeatedTiming and not time that one would actually have to worry about when running the code normally? Or is there an actual performance benefit to unpacking the data in the code? – az123p Aug 22 '22 at 20:37

RepeatedTiming is inside Table; only the dot products are repeated. – Michael E2 Aug 22 '22 at 20:48