Given a unitary matrix U and a list of matrices Mlist how can I apply the unitary transformation to the list?
-
By mapping an appropriate function? Look up ‘Map’ as a start. You should also provide an example of these matrices and list. I notice that you asked a question along the same lines before, regarding applying ‘Complement’ to a list of matrices. Perhaps some of the techniques shown in the answers there provide a starting point as well. – MarcoB May 05 '18 at 04:27
3 Answers
Here is an example that uses Map. I have defined a function that generates an arbitrary $2\times2$ $SU(2)$ transformation, up to an overall phase, and used one such transformation to transform the list of matrices.
matrices = RandomComplex[{-1 - I, 1 + I}, {10, 2, 2}];
unitary[θ_, ϕ1_, ϕ2_] := {
{E^(I ϕ1) Cos[θ], E^(I ϕ2) Sin[θ]},
{-E^(-I ϕ2) Sin[θ], E^(- I ϕ1) Cos[θ]}
};
(ConjugateTranspose[
unitary[π/4, π/5, π/6]].#.unitary[π/4, π/5, π/6]) & /@ matrices;
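As a quick sanity check (a sketch; U0 is just an illustrative name, not part of the answer above), one can verify numerically that the constructed matrix is unitary:
U0 = unitary[π/4, π/5, π/6];
Chop[N[U0.ConjugateTranspose[U0]]] (* should equal the 2×2 identity matrix when U0 is unitary *)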
Matrix operations automatically use parallelization when possible, so expressing the computation directly in terms of matrix operations and avoiding Map provides a speed gain. Using @Henrik's example:
a = Map[ConjugateTranspose[U].#.U&, matrices]; //RepeatedTiming
b = cf[matrices,ConjugateTranspose[U],U]; //RepeatedTiming
c = Transpose[ConjugateTranspose[U] . Transpose[matrices] . U]; //RepeatedTiming
Block[{Internal`$EqualTolerance=5}, a==b==c]
{0.0086, Null}
{0.0024, Null}
{0.0016, Null}
True
So, using Dot and Transpose is faster than @Henrik's compiled version.
Note that I used 10^4 matrices instead of 10^5. As the number of matrices increases, the compiled version eventually becomes faster.
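For context on why the Dot/Transpose form is equivalent: Dot contracts the last index of its first argument with the first index of its second, so transposing the first two levels of the m×n×n array lines the indices up correctly, and a final Transpose swaps them back. A minimal sketch with illustrative names (Ms, viaMap, viaDot are assumptions, not taken from the answers above):
n = 2; m = 3;
Ms = RandomComplex[{-1 - I, 1 + I}, {m, n, n}]; (* m random n×n matrices *)
U = RandomVariate[CircularUnitaryMatrixDistribution[n]];
viaMap = ConjugateTranspose[U].#.U & /@ Ms; (* one matrix at a time *)
viaDot = Transpose[ConjugateTranspose[U].Transpose[Ms].U]; (* whole array at once *)
Max[Abs[viaMap - viaDot]] (* should be ~0, up to machine precision *)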
-
Very interesting. Must be very machine dependent. On my old Haswell, method c takes 1.5 to 2 times as long as the compiled version. Might have to do with slower RAM (@1600 MHz)? This is why I got into the habit of leaving MTensors in their natural ordering. – Henrik Schumacher May 05 '18 at 17:11
-
@HenrikSchumacher I also get similar, slower results for c on a 2013 MacBook Pro – Michael E2 May 05 '18 at 17:36
If you have to do that really often and with numerical matrices, it may be worth the effort to write a CompiledFunction with RuntimeAttributes -> {Listable}. This is usually faster than using Map, since it can also utilize parallelization.
cf = Compile[{{A, _Complex, 2}, {U, _Complex, 2}, {V, _Complex, 2}},
U.A.V,
RuntimeAttributes -> {Listable},
Parallelization -> True
];
n = 4;
matrices = RandomComplex[{-1 - I, 1 + I}, {100000, n, n}];
U = RandomVariate[CircularUnitaryMatrixDistribution[n]];
a = Map[ConjugateTranspose[U].#.U &, matrices]; // RepeatedTiming // First
b = cf[matrices, ConjugateTranspose[U], U]; // RepeatedTiming // First
Max[Abs[a - b]]
0.077
0.021
0.
-
Yeah, it's great, isn't it? I discovered all the matrix distributions for myself only recently. – Henrik Schumacher May 05 '18 at 09:34
-
@Subho95 The reason Map is so slow here is that the function defined in terms of f is uncompilable. The consequent unpacking of matrices also contributes to the slowness. Map[ConjugateTranspose[U].#.U &, matrices] is about 4-5 times faster and does not unpack matrices. The compiled cf is another 4 times faster; that speedup is explained by parallelization (on my quad-core i7). – Michael E2 May 05 '18 at 12:29