Consider the codes below:
dim = 500;
rd = RandomReal[1, {dim, dim}];
W = Table[{rd[[i]]}\[Transpose].{rd[[i]]}, {i, 1, dim}];
A = RandomReal[1, {dim, dim}];
x = Table[A.W[[i]], {i, 1, dim}]; // AbsoluteTiming
y = Table[A.{rd[[i]]}\[Transpose].{rd[[i]]}, {i, 1, dim}]; // AbsoluteTiming
W[[i]] is equal to {rd[[i]]}\[Transpose].{rd[[i]]}, so the x multiplication is essentially the same as the y one. However, y is around 3 times faster than x. Does anybody know why y is faster? I thought that because I had stored the matrix W[[i]], the x multiplication would be faster, especially since y needs to compute W[[i]] first while x has it already.
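One thing worth checking is how the expression in y is grouped: Dot is left-associative in Mathematica, so y's body is evaluated as (A.{rd[[i]]}\[Transpose]).{rd[[i]]}, a matrix–column product followed by an outer product, rather than A times the full dim×dim matrix as in x. A minimal sketch (using a small dim so it runs quickly) confirming the two groupings give the same numerical result:

```mathematica
(* quick check: Dot is left-associative, so y's term is grouped as
   (A.{rd[[i]]}\[Transpose]).{rd[[i]]} *)
dim = 5;
rd = RandomReal[1, {dim, dim}];
A = RandomReal[1, {dim, dim}];
Wi = {rd[[1]]}\[Transpose].{rd[[1]]};        (* dim x dim outer product *)
xTerm = A.Wi;                                 (* matrix . matrix *)
yTerm = (A.{rd[[1]]}\[Transpose]).{rd[[1]]}; (* matrix . column, then outer *)
Max[Abs[xTerm - yTerm]]  (* tiny, up to floating-point round-off *)
```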
It seems that if we have a matrix of the form W = a.b, where a is a column matrix and b is a row matrix, then W.a.b (which Mathematica groups as (W.a).b) is faster than W.W:
In[1]:= dim = 2000;
In[2]:= a = RandomReal[1, {dim, 1}];
In[3]:= b = RandomReal[1, {1, dim}];
In[4]:= W = a.b;
In[5]:= W.(a.b);(*n^2+n^3 operations*)// AbsoluteTiming
Out[5]= {0.229023, Null}
In[6]:= W.W;(*n^3 operations*)// AbsoluteTiming
Out[6]= {0.183018, Null}
In[7]:= (W.a).b;(*2 n^2 operations*)// AbsoluteTiming
Out[7]= {0.021002, Null}
Does anybody know whether it is possible to decompose an arbitrary matrix like this: W = a.b?
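For context on this question, an exact factorization W = a.b with a column matrix a and a row matrix b can only exist when MatrixRank[W] <= 1; for a general matrix, the closest rank-1 product comes from the singular value decomposition. A hedged sketch of both cases:

```mathematica
(* an exact a.b factorization exists only for rank-1 matrices *)
dim = 100;
a = RandomReal[1, {dim, 1}];
b = RandomReal[1, {1, dim}];
W = a.b;
MatrixRank[W]  (* 1, so W factors exactly as a.b *)

(* for a general matrix, the best rank-1 approximation via the SVD *)
M = RandomReal[1, {dim, dim}];
{u, s, v} = SingularValueDecomposition[M, 1]; (* leading singular triplet *)
aApprox = u.s;            (* dim x 1 column *)
bApprox = v\[Transpose];  (* 1 x dim row *)
Norm[M - aApprox.bApprox, "Frobenius"]  (* nonzero unless MatrixRank[M] <= 1 *)
```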
Edit
I added the number of multiplication operations for each calculation after reading bill s's answer. Now it's clear why the last one is the fastest. So the order in which we do the multiplications matters, at least sometimes.
z1 is a matrix–matrix product and z2 is a matrix–vector and then a vector–vector product. The number of arithmetic calculations must be the same. – MOON Nov 10 '14 at 19:39
W and a: we have a column matrix; its multiplication with b, a row matrix, is another matrix. – MOON Nov 11 '14 at 11:20