I asked a version of the following question previously on Physics.stackexchange, where it didn't get a lot of attention. I thought it might be worth asking it here in case this community has anything to add in the few days before the site is shut down. I hope it's OK to do that.
I'm interested in $\psi$-epistemic interpretations of quantum mechanics, which see the wavefunction (or density matrix) of a system as representing our limited knowledge about some underlying physical state, rather than the physical state itself. The collapse of the wavefunction then becomes very closely analogous to the "collapse" of a classical probability distribution when some new information becomes available.
But there seems to be a slight problem, in that quantum measurement doesn't seem to obey the conservation of (microscopic) information in quite the way it should. If we don't measure a system then its unitary dynamics conserve von Neumann entropy, just as Hamiltonian evolution of a classical system conserves Shannon entropy. In the classical case this can be interpreted as "no process can create or destroy information on the microscopic level." But if I take a spin-$\frac{1}{2}$ particle and make a $\sigma_x$ measurement (measure the spin in the $x$ direction), then make a $\sigma_y$ measurement, then another $\sigma_x$ one, then $\sigma_y$, and so on for $n$ measurements, then I will end up with a string of completely random bits, of length $n$. These bits of information seem to have appeared from nowhere.
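To make the experiment concrete, here is a minimal numpy sketch of it (the `measure` helper and the seed are my own, purely illustrative): the state stays pure at every step, yet the outcome record is a uniformly random bit string.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def measure(state, obs):
    """Projective measurement of a Hermitian observable on a pure state.
    Returns (eigenvalue obtained, post-measurement state)."""
    vals, vecs = np.linalg.eigh(obs)
    probs = np.abs(vecs.conj().T @ state) ** 2   # Born rule
    k = rng.choice(len(vals), p=probs / probs.sum())
    return vals[k], vecs[:, k]

state = np.array([1.0, 0.0], dtype=complex)      # start in |0>
bits = []
for i in range(20):
    obs = sx if i % 2 == 0 else sy               # alternate sigma_x, sigma_y
    outcome, state = measure(state, obs)
    bits.append(0 if outcome > 0 else 1)

# The state is pure after every measurement (zero von Neumann entropy),
# yet `bits` comes out as a uniformly random string.
print(bits)
```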
It's clear that there are some interpretations for which this isn't a problem. In particular, for the Everett interpretation the Universe just ends up in a superposition of $2^n$ states, each containing an observer looking at a different output string. But what I'm really interested in is whether this "extra" information can be explained in a $\psi$-epistemic interpretation.
Further details
The following is an expanded version of the above, in an attempt to make it clearer. I want to start by talking about the classical case, because only then can I make it clear where the analogy seems to break down. Let's consider a classical system that can take on one of $n$ discrete states (microstates). Since I don't initially know which state the system is in, I model the system with a probability distribution.
The system evolves over time. We model this by taking the vector $p$ of probabilities and multiplying it by a matrix $T$ at each time step, i.e. $p_{t+1} = Tp_t$. The discrete analogue of Hamiltonian dynamics turns out to be the assumption that $T$ is a permutation matrix, i.e. it has exactly one 1 in each row and column, and all its other entries are 0. (Note that permutation matrices are a subset of unitary matrices.) It turns out that, under this assumption, the Gibbs entropy (aka Shannon entropy) $H(p)$ does not change over time.
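Here is a quick numerical check of that claim, a sketch assuming a randomly chosen permutation matrix and initial distribution (the helper `H` is my own notation):

```python
import numpy as np

rng = np.random.default_rng(1)

def H(p):
    """Shannon/Gibbs entropy in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

n = 5
T = np.eye(n)[rng.permutation(n)]   # a random permutation matrix
p = rng.dirichlet(np.ones(n))       # a random initial distribution

for t in range(4):
    print(t, H(p))                  # H(p) is identical at every step
    p = T @ p
```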
(It's also worth mentioning, as an aside, that instead of representing $p$ as a vector, I could choose to represent it as a diagonal matrix $P$, with $P_{ii}=p_i$. It then looks a lot like the density matrix formalism, with $P$ playing the role of $\rho$ and $T$ being equivalent to unitary evolution.)
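A one-line check of this equivalence, again just an illustrative sketch: evolving $P$ as $P_{t+1} = T P_t T^{\mathsf{T}}$ (legitimate, since a permutation matrix is orthogonal) agrees with evolving $p$ directly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
T = np.eye(n)[rng.permutation(n)]
p = rng.dirichlet(np.ones(n))

P = np.diag(p)                      # classical analogue of a density matrix
P_next = T @ P @ T.T                # "unitary"-style evolution; T is orthogonal
assert np.allclose(P_next, np.diag(T @ p))   # same as evolving p directly
```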
Now let's say I make a measurement of the system. We'll assume that I don't disturb the system when I do this. For example, let's say the system has two states, and that initially I have no idea which of them the system is in, so $p=(\frac{1}{2},\frac{1}{2})$. After my measurement I know what state the system is in, so $p$ will become either $(1,0)$ or $(0,1)$ with equal probability. I have gained one bit of information about the system, and $H(p)$ has reduced by one bit. In the classical case the information gained will always equal the reduction in $H(p)$, unless the system interacts with some other system whose state I don't precisely know (such as, for example, a heat bath).
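In code, this bookkeeping is trivial (the helper `H` is the same illustrative one as above):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_before = np.array([0.5, 0.5])
p_after = np.array([1.0, 0.0])      # or (0, 1); either outcome is equally likely

info_gained = H(p_before) - H(p_after)
print(info_gained)                  # 1.0 bit: the gain exactly matches the entropy drop
```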
If entropy just represents a lack of information about a system then of course it should decrease when we get some information. But there's a problem when we try to interpret the von Neumann entropy in the same way. In the experiment described in the first part of this question, I'm getting one bit of information with every measurement, but the von Neumann entropy is remaining constant (at zero) instead of decreasing by one bit each time. In the classical case, "the total information I have gained about the system" + "uncertainty I have about the system" is constant, whereas in the quantum case it can increase. This is disturbing, and I suppose what I really want to know is whether there has been anything written about $\psi$-epistemic interpretations in which this "extra" information is accounted for somehow (e.g. perhaps it could come from thermal degrees of freedom in the measuring apparatus), or in which it can be shown that something other than the von Neumann entropy plays a role analogous to the Gibbs entropy.
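For contrast with the classical bookkeeping above, one last sketch (the helper `S` is again my own): after any projective measurement the post-measurement state is pure, so in the $\sigma_x$/$\sigma_y$ experiment the von Neumann entropy sits at zero throughout, while the measurement record nevertheless grows by one random bit per step.

```python
import numpy as np

def S(rho):
    """von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return -np.sum(vals * np.log2(vals))

# e.g. the |+x> state following a sigma_x measurement:
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus_x, plus_x.conj())

print(S(rho))   # 0.0: the entropy never decreases, yet each step yields a new bit
```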