
I read an argument that the second-order coherence remains unaffected by loss. This is true because loss can always be modeled by the action of lossless beam-splitters in series, and it can be verified from the definition of the second-order coherence function that beam-splitters leave the second-order coherence unchanged. However, it is also known that random sampling of a sub-Poissonian light source gradually makes the photon statistics Poissonian.

So my question is the following: If we start with a single-photon source, it will exhibit anti-bunching, i.e., $g^{(2)}(0)\ll 1$, and also highly sub-Poissonian statistics, i.e., $\langle (\Delta n)^2 \rangle \ll \langle n \rangle$. However, if we introduce sufficient loss, the second-order coherence function $g^{(2)}(0)$ seems to remain unaffected, while the statistics deteriorate towards Poissonian, i.e., $\langle (\Delta n)^2 \rangle \approx \langle n \rangle$. How can a Poissonian light source exhibit anti-bunching? How does one visualize this effect? Or am I missing something here?

1 Answer

The problem boils down to the usual ambiguity in defining $0/0$. One defines the second-order coherence as the ratio between the second-order correlation function $\langle a^\dagger a^\dagger a a\rangle$ and the intensity squared $\langle a^\dagger a\rangle^2$: $$g^{(2)}(0)=\frac{\langle a^\dagger a^\dagger a a\rangle }{\langle a^\dagger a\rangle^2}=\frac{\langle n(n-1)\rangle }{\langle n\rangle^2},$$ defining $n=a^\dagger a$ and using the bosonic commutation relations. It is certainly true that this ratio remains the same when $a\to \eta a$ (up to a discarded ancillary mode), but the case where the photon number goes to zero must be dealt with specially, because then both the numerator and the denominator vanish. Namely, loss is modeled by $$a\to \eta a+\sqrt{1-\eta^2}\,b,$$ with the ancillary mode $b$ initially in vacuum, followed by tracing over mode $b$, which leads to $$\langle a^\dagger a^\dagger aa \rangle\to \eta^4 \langle a^\dagger a^\dagger aa \rangle;\qquad \langle a^\dagger a \rangle\to \eta^2 \langle a^\dagger a \rangle;$$ therefore $$g^{(2)}(0)\to \frac{\eta^4 \langle a^\dagger a^\dagger aa \rangle}{\eta^4 \langle a^\dagger a \rangle^2}=\frac{\eta^4}{\eta^4}g^{(2)}(0).$$ It is all well and good to cancel $\eta^4/\eta^4=1$ when $\eta\neq 0$, but the case of complete loss ($\eta=0$) cannot use this cancellation, and so one must choose a priori how to define the second-order coherence for the vacuum state.
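This invariance is easy to check numerically. After tracing out the vacuum ancilla, loss acts on the photon-number distribution as binomial sampling: each photon independently survives with the intensity transmission $T=\eta^2$. A minimal Python sketch (the distribution `p` is just an illustrative sub-Poissonian example, not from the original post):

```python
from math import comb

def apply_loss(p, T):
    """Binomial loss: each photon independently survives with probability T."""
    out = [0.0] * len(p)
    for n, pn in enumerate(p):
        for k in range(n + 1):
            out[k] += pn * comb(n, k) * T**k * (1 - T)**(n - k)
    return out

def g2(p):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    corr = sum(n * (n - 1) * pn for n, pn in enumerate(p))
    return corr / mean**2

# Mostly one photon, occasionally two: strongly sub-Poissonian.
p = [0.0, 0.9, 0.1]
print(g2(p))                   # g2 before loss
print(g2(apply_loss(p, 0.3)))  # g2 after 70% loss: same value
```

Both prints give the same number: the numerator scales as $T^2$ and the denominator as $(T)^2$ on the mean squared, so the ratio is untouched for any $T>0$.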

So far, this answers the question about the limit, which seems to just have a removable singularity. What about how bunchedness and Poissonianity (I might have made those words up) behave as $\eta$ approaches $0$? First we note that the state becomes closer and closer to the vacuum as $\eta$ gets smaller and smaller, so we are okay considering the state to be more and more Poissonian, because the vacuum is Poissonian. Next, we inspect Poissonianity using the Mandel Q parameter $$Q=\frac{\mathrm{Var}(n)-\langle n\rangle}{\langle n\rangle}=\langle n\rangle[g^{(2)}(0)-1].$$ Poissonian states have $\mathrm{Var}(n)=\langle n\rangle$ and thus $Q=0$, anti-bunched states have sub-Poissonian statistics and thus $Q<0$, and bunched states have super-Poissonian statistics and thus $Q>0$. The point is that even the amount of bunchedness should be compared to something, so $Q$ comes with a normalization by $\langle n\rangle$, which changes with loss even if the variance and mean stayed the same. In fact, $Q$ always shrinks toward $0$ with loss, going as $Q\to \eta^2 Q$, because $\langle n\rangle\to \eta^2\langle n\rangle$ while $g^{(2)}(0) \to g^{(2)}(0)$. So both anti-bunched (sub-Poissonian) states and bunched (super-Poissonian) states tend toward perfectly Poissonian statistics (i.e., the vacuum) but never cross into the opposite territory, because the sign of $Q$ cannot change under loss.
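The scaling $Q\to \eta^2 Q$ can likewise be checked numerically. In this sketch, loss again acts as binomial sampling of the photon number with intensity transmission $T=\eta^2$, and the starting distribution is an arbitrary sub-Poissonian example:

```python
from math import comb

def apply_loss(p, T):
    """Binomial loss with intensity transmission T = eta^2."""
    out = [0.0] * len(p)
    for n, pn in enumerate(p):
        for k in range(n + 1):
            out[k] += pn * comb(n, k) * T**k * (1 - T)**(n - k)
    return out

def mandel_q(p):
    """Q = (Var(n) - <n>) / <n> for a photon-number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    second = sum(n * n * pn for n, pn in enumerate(p))
    var = second - mean**2
    return (var - mean) / mean

p = [0.0, 0.9, 0.1]   # sub-Poissonian, so Q < 0
Q0 = mandel_q(p)
for T in (1.0, 0.5, 0.1):
    # Q stays negative and scales linearly in T, approaching 0 as T -> 0.
    print(T, mandel_q(apply_loss(p, T)))
```

The printed values equal $T\,Q_0$: the sign never flips, so an anti-bunched state stays (weakly) anti-bunched at any nonzero transmission.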

In other words, $$\mathrm{Var}(n)-\langle n\rangle=\langle n\rangle^2[g^{(2)}(0)-1].$$ Even when the second-order degree of coherence $g^{(2)}(0)$ remains unchanged by photon loss $0<\eta<1$, the amount of bunchedness or anti-bunchedness, given by the difference between the variance and the mean $\mathrm{Var}(n)-\langle n\rangle$, still shrinks to zero due to the $\langle n\rangle^2$ factor. Different normalizations, different tendencies with loss.
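As a sanity check, this identity holds for any photon-number distribution; a quick numeric verification on an arbitrary made-up example:

```python
def moments(p):
    """Return (<n>, Var(n)) for a photon-number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    second = sum(n * n * pn for n, pn in enumerate(p))
    return mean, second - mean**2

p = [0.05, 0.8, 0.1, 0.05]  # arbitrary normalized distribution
mean, var = moments(p)
g2 = sum(n * (n - 1) * pn for n, pn in enumerate(p)) / mean**2

# Both sides of Var(n) - <n> = <n>^2 [g2(0) - 1] agree.
print(var - mean, mean**2 * (g2 - 1))
```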