
If I have understood correctly, averaging $N$ noisy independent segments or signals increases the signal-to-noise ratio (SNR) $\sqrt{N}$-fold.

How does one derive this result?

Marcus Müller
ave
  • The noiseless part of the signal needs to be the same for all segments/signals for that to be the case. If both the noise and the signal are independent between the signals/segments then you get no such signal-to-noise ratio (SNR) improvement. – Olli Niemitalo Oct 11 '15 at 15:43

3 Answers


I will show how to calculate the SNR for the case of $N=2$ measurements; it is easy to extend the result to general $N$. Assume a signal $s(t)$ has power $S$, and the noise $n(t)$ has variance $\sigma^2$ and zero mean. Then, the signal $s(t)+n(t)$ has SNR equal to $S/\sigma^2$.

Now assume you observe $s(t)$ twice, each time with different, uncorrelated noise, $n_1(t)$ and $n_2(t)$, each with zero mean and variance $\sigma^2$. You average the two observations to get $$\frac{2s(t)+n_1(t)+n_2(t)}2 = s(t)+\frac{n_1(t)}2+\frac{n_2(t)}2.$$

The variance of both $n_1(t)/2$ and $n_2(t)/2$ is $\sigma^2/4$, so the total noise variance is $\sigma^2/2$ and the SNR is $2S/\sigma^2$: an improvement by a factor of 2. For general $N$, the same argument gives an improvement by a factor of $N$.
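
For a quick numerical sanity check, here is a minimal sketch (using NumPy, with an arbitrary sinusoidal test signal and unit-variance Gaussian noise, both assumptions of mine) that averages two noisy observations of the same signal and compares the resulting power SNR to that of a single observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test signal and noise level; any choice works.
t = np.arange(100_000)
s = np.sin(2 * np.pi * 0.01 * t)        # signal power S = mean(s**2) ~ 0.5
sigma = 1.0                             # noise standard deviation

n1 = rng.normal(0.0, sigma, t.size)     # first noise realization
n2 = rng.normal(0.0, sigma, t.size)     # second, independent realization

x1  = s + n1                            # one noisy observation
avg = ((s + n1) + (s + n2)) / 2         # average of the two observations

S = np.mean(s**2)
snr_single = S / np.var(x1 - s)         # ~ S / sigma^2
snr_avg    = S / np.var(avg - s)        # ~ 2 S / sigma^2

print(snr_avg / snr_single)             # ~ 2 for N = 2
```

With enough samples the printed ratio comes out close to 2, matching the calculation above.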

For some reason I don't understand, the Wikipedia page on signal averaging defines the SNR as $S/\sigma$. Under that definition, the improvement will indeed be $\sqrt{N}$.

MBaz
  • You also need to assume that $n_1$ and $n_2$ are independent or, at least, uncorrelated. Otherwise $E[n_1(t)n_2(t)]$ will be non-zero and contribute to the overall noise in the averaged signal. – Peter K. Oct 11 '15 at 20:44
  • Standard deviation versus variance: note that your form would not scale correctly. As in physics, we have "symmetries" in our expectations; in this case, if we amplify a reading by $x$, we expect what we call signal-to-noise to remain the same. $\frac{S}{\sigma}$ does and $\frac{S}{\sigma^2}$ doesn't. – rrogers Oct 14 '15 at 13:49
  • @rrogers The way I see it, if $s(t)$ has average power $S$ and $n(t)$ has variance $\sigma^2$ (so the SNR is $S/\sigma^2$), then the power of $as(t)$ is $a^2S$, and the variance of $an(t)$ is $a^2\sigma^2$, so the SNR is still $S/\sigma^2$. – MBaz Oct 14 '15 at 17:51
  • @MBaz agreeing with you, Wikipedia's Signal Averaging really uses a non-canonical, and even more importantly, non-useful definition of SNR. – Marcus Müller Oct 15 '15 at 08:33
  • @MBaz Corrected Signal Averaging. Would you mind proofreading? – Marcus Müller Oct 15 '15 at 09:46
  • @MarcusMüller Thanks, I took a quick look, seems good to me. – MBaz Oct 15 '15 at 15:27
  • @MBaz I see now; I didn't interpret your question correctly. I do think that your power $S$ should have been written $S^2$ to indicate power instead of signal amplitude. My usage is common in instrumentation and indicates reading amplitude vs. noise amplitude, whereas your usage is the ratio of signal power to noise power. Normally I would think that readings, not powers, would be averaged/added in processing, but maybe your application is different. The optimal way to combine contaminated readings is weighting: $S$ (reading) / $\sigma^{2}$ (variance), on a per-reading basis. – rrogers Oct 16 '15 at 13:38
  • @rrogers I work in digital communications, where the SNR is defined as a power ratio. I didn't know before that other fields use different definitions. It's certainly interesting. – MBaz Oct 16 '15 at 18:16
  • @MBaz If you'd like, I will expand my answer below. I think we have different viewpoints on similar problems. I (of course) presumed that everybody had my viewpoint :) My viewpoint is: say you have 5 readings r_1..r_5; each reading consists of two unknowns, a voltage v_1..v_5 proportional to the "true" signal S, plus noise n_1..n_5. You want to estimate the true signal by combining the readings. The problem is twofold: how to combine the readings to optimize the final S/N, and how close I am to the best I can do. I don't have enough "room in the margin", i.e. comment space, to expand here. – rrogers Oct 18 '15 at 12:08
  • @MBaz I think I can summarize my viewpoint. Take two readings of a voltage V; then averaging produces a result. In averaging you take r_1, r_2 and add. In this case a contaminating noise that is uncorrelated from reading to reading does not add coherently; in fact, for Gaussian noise the variances add. The square/power of the signal would be (V+V)^2 = 4V^2, while the noise power (sum of variances) would be 2(sd)^2. Back in the "real" (amplitude) domain we would have 2V and sqrt(2)*sd. There are a lot of subtleties left out of that, but that is the idea. – rrogers Oct 18 '15 at 15:37
  • As far as I'm concerned, the standard deviation definition (as opposed to variance) is incorrect, because it's not power. I believe also that the square-of-the-mean definition only applies if there's zero variance. Good answer, though. – Lewis Kelsey Nov 16 '21 at 14:44
  • Dilip Sarwate defined it using standard deviation, however; here is his comment regarding that: https://dsp.stackexchange.com/questions/9094/understanding-the-matched-filter/9389#comment70320_9389 – Lewis Kelsey Jan 31 '22 at 00:27

First, let's properly define the problem. Given $N$ observations $\left\{ x_i \right\}_{i=1}^{N}$, where $x_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$ for all $i \in \left\{ 1, 2, \dots, N \right\}$, we wish to find the variance of the sample-mean estimator and compare it to the variance of a single sample.

Obviously, the variance of a single sample is $\sigma^2$; now let's see how averaging the samples decreases this variance:

\begin{equation} {\mathrm{Var}}\left [ \frac{1}{N} \sum_{i=1}^{N} x_i\right ]\overset{i.i.d}{=} \frac{1}{N^2} \sum_{i=1}^{N} {\mathrm{Var}}\left ( x_i \right ) = \frac{1}{N^2}N\sigma^2 = \frac{\sigma^2}{N}. \end{equation}

So the noise variance is inversely proportional to the sample size $N$, which means the SNR grows linearly with $N$. In particular, doubling the number of samples increases the SNR by $10\log_{10}2 \approx 3$ dB.

Note that $\sigma^2/N$ is also the Cramér-Rao lower bound (CRB) for estimating the mean in a linear Gaussian problem, so the sample mean is in fact the best unbiased estimator for this problem.
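
As a rough empirical check of the $\sigma^2/N$ result, the following sketch (NumPy, with arbitrary choices of mine for the noise variance and number of trials) estimates the variance of the sample mean for a few values of $N$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0                 # arbitrary noise variance for the experiment
trials = 200_000             # independent repetitions per value of N

for N in (1, 2, 4, 8):
    # Draw `trials` experiments of N i.i.d. zero-mean samples and average each one.
    x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
    var_of_mean = x.mean(axis=1).var()
    print(f"N={N}: empirical {var_of_mean:.4f}  vs  sigma^2/N = {sigma2 / N:.4f}")
```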

Dr. Nir Regev

A slightly different calculation, in terms of amplitudes, is: $\frac{S}{N}=\frac{s+s}{\sqrt{\sigma^{2}+\sigma^{2}}}=\sqrt{2}\cdot\frac{s}{\sigma},$

where $\frac{s}{\sigma}$ is the signal-to-noise ratio (in amplitude) of each reading.

The two signal amplitudes add coherently: $s_{1}+s_{2}$.

The two uncorrelated noises add in quadrature: $\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}$.
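
A small numerical sketch of this amplitude-domain view (NumPy, with arbitrary choices of mine for the reading amplitude $s$ and noise standard deviation $\sigma$) confirms the $\sqrt{2}$ factor for two readings:

```python
import numpy as np

rng = np.random.default_rng(0)
s, sigma = 1.0, 0.5              # per-reading amplitude and noise std (arbitrary values)
trials = 500_000

n1 = rng.normal(0.0, sigma, trials)
n2 = rng.normal(0.0, sigma, trials)

# Adding two readings: amplitudes add coherently, uncorrelated noise adds in quadrature.
amp_snr_combined = (s + s) / np.std(n1 + n2)   # ~ (2s) / (sqrt(2) * sigma)
amp_snr_single   = s / sigma

print(amp_snr_combined / amp_snr_single)       # ~ sqrt(2)
```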

rrogers