
Signal processing often involves managing and optimizing Signal-to-Noise Ratio (SNR). To achieve this, one may need to work with noise budgets and ensure that any additional additive noise sources are kept x dB below the current noise floor, with x typically in the range of 10 to 20 dB. These additive noise sources can include quantization noise from an A/D converter or subsequent datapath truncation, aliasing caused by resampling operations, phase noise, front-end noise (which is usually factored into cascaded noise figure calculations), and any other additive noise sources, each of which reduces SNR.

In the process of doing this, I have committed to memory that adding noise 10 dB below the noise floor increases the noise floor by about 0.4 dB, while adding noise 20 dB below increases it by only about 0.04 dB. We see this by converting the dB quantities to power terms, summing them, and converting back to dB:

$$P_\Delta = 10\log_{10}(1+10^{-\text{dB}/10})$$

where dB is the amount, in dB, by which the added noise is below the existing noise floor.

However, notice this interesting pattern from the formula given above:

| dB | $P_\Delta$ |
| --- | --- |
| $-10$ dB | $0.414$ dB |
| $-20$ dB | $0.0432$ dB |
| $-30$ dB | $0.00434$ dB |
| $-40$ dB | $0.000434$ dB |
| $-50$ dB | $0.0000434$ dB |
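The pattern in the table can be reproduced with a short script (a sketch; the function name is my own):

```python
import math

def noise_floor_increase(db_below):
    """Increase in the noise floor (dB) when adding independent noise
    `db_below` dB beneath it: convert to power, sum, convert back."""
    return 10 * math.log10(1 + 10 ** (-db_below / 10))

for db in (10, 20, 30, 40, 50):
    print(f"-{db} dB: {noise_floor_increase(db):.7f} dB")
```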

Notice the result converges to $\approx 0.434\times10^{-n}$ as n increases (with -10 dB for n=0, -20 dB for n=1, etc). What is the exact answer for the result as n goes to infinity and what other common approximation can we relate this to? The answer I am looking for is not zero, but $x$ in $x\times10^{-n}$ as $n\rightarrow\infty$, together with an explanation as to how $x$ comes into this. The correct answer is exact and in a form that uses no numerical digits (no numerals 0 through 9).

This is a “DSP Puzzle”; please preface your answer with spoiler notation by typing the following two characters first: ">!"

Dan Boschen
  • BTW, it's not directly what this question is about, but the notion of adding independent noise before quantization has a name and a history and we call it "dither". The current doctrine in the audio world is that adding (before quantization) triangular p.d.f. dither having width of $2 \Delta$ (where $\Delta$ is the quantization step size) will add 4.77 dB noise power but will completely decouple both the first and second moments of the total quantization error from the signal being quantized. So there can be no noise power modulation of the total quantization noise. – robert bristow-johnson Apr 19 '23 at 21:20
  • @Robert That's a great comment, unrelated to this post but definitely applicable to this one....could you copy it to there as well as I think it adds value https://dsp.stackexchange.com/a/31901/21048 – Dan Boschen Apr 20 '23 at 02:05
  • @robertbristow-johnson Another term for "dither" is stochastic resonance. – David Apr 20 '23 at 15:37
  • It's one that I hadn't heard. Putting the two terms together, "stochastic" and "resonance" sounds a lot like what we call *noise shaping*. Noise shaping can occur either with dithered quantization or with undithered quantization; an example is depicted here. – robert bristow-johnson Apr 20 '23 at 16:43

2 Answers


The answer I was looking for was $\log(e)$, only because of its perceived mathematical simplicity (and no use of the numerals 0 through 9, as I described) and its direct relationship to the useful approximation $\ln(1+x) \approx x$ for small $x$. Apart from a factor of 10 (likely due to ambiguity in my description), this is the same answer as Matt's, without any real superiority. We can convert between the two answers readily using the following $\log$ to $\ln$ conversion: $$\ln(x) = \frac{\log(x)}{\log(e)}$$ where $\log$ refers to a base 10 logarithm and $\ln$ refers to a base $e$ logarithm.

This is how I came to the result. For large $n$: $$10\log(1+10^{-n})\approx 4.34 \times 10^{-n}$$ $$\log(1+10^{-n})\approx 0.434 \times 10^{-n}$$ As demonstrated in the post, the factor appears to converge to something close to $0.434$, and the question was what it converges to for large $n$. So with that we have: $$\log(1+10^{-n})= x \times 10^{-n}$$ Convert to $\ln$ to use the simpler convergence of $\ln(1+\alpha)$ for small $\alpha$: $$\frac{\log(1+10^{-n})}{\log(e)}= \frac{x \times 10^{-n}}{\log(e)}$$ to get $$\ln(1+10^{-n})= \frac{x \times 10^{-n}}{\log(e)}$$ For large $n$, $10^{-n}$ is a small number, so we can use the relation $$\lim_{\alpha\rightarrow 0}\frac{\ln(1+\alpha)}{\alpha}=1$$ Resulting in, as $n\rightarrow\infty$: $$\ln(1+10^{-n})\approx 10^{-n}=\frac{x \times 10^{-n}}{\log(e)}$$ Thus: $${\log(e)}\times10^{-n} = x \times 10^{-n}$$ and we have the result $$x = \log(e)$$

And of course we can convert this to or from Matt's answer to equivalently get $x=\frac{1}{\ln(10)}$
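A quick numerical check of the convergence (a sketch, not part of the original derivation):

```python
import math

# The ratio x = log10(1 + 10^-n) / 10^-n should approach log10(e)
for n in range(1, 8):
    x = math.log10(1 + 10 ** (-n)) / 10 ** (-n)
    print(f"n = {n}: x = {x:.10f}")

print(f"log10(e) = {math.log10(math.e):.10f}")
```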

Dan Boschen
  • Thx for providing your answer! Ok, now I understand what you meant. It's the same of course. Nice, nobody said it wasn't interesting! :) – Matt L. Apr 20 '23 at 06:36
  • @MattL Yes, the problem is interesting in how the solution converges to the very factor that converts between $\log_{10}$ and $\log_e$. There must be a very "oh duh" reason, which comes down to why $e$ shows up everywhere as the natural base for exponential growth, but I haven't fully made that (intuitive) connection yet. I'm not sure the result $\log(e)$ vs $1/\ln(10)$ is all that more interesting, and I missed that you used the $\ln(1+x)\approx x$ approximation, so I thought incorrectly that I was adding something new here. Thanks for your nice answer. – Dan Boschen Apr 20 '23 at 23:48
1

Let $\sigma_x^2$ and $\sigma_n^2$ be the signal and noise power, respectively. If (independent) noise with power $c\cdot\sigma_n^2$ is added to the existing noise, the resulting SNR becomes $$\textrm{SNR}=\frac{\sigma_x^2}{(1+c)\sigma_n^2}\tag{1}$$ In decibels we have $$\textrm{SNR}_{dB}=10\log\frac{\sigma_x^2}{\sigma_n^2}+10\log\frac{1}{1+c}\tag{2}$$ where the first term on the right-hand side of $(2)$ is the original SNR, and the second term is the change in SNR due to adding noise with a power that is a fraction of the original noise power.

With $c=10^{-n}$ and with the approximation $\ln(1+x)\approx x$ for small $x$, the change in $\textrm{SNR}_{dB}$ can be written as $$\begin{align}10\log\frac{1}{1+10^{-n}}&=-10\log(1+10^{-n})\\&=-\frac{10}{\ln (10)}\ln(1+10^{-n})\\&\approx-\frac{10}{\ln (10)}10^{-n},\qquad n\gg 1\tag{3}\end{align}$$

So for any additional noise with $10n$ dB less power than the original noise, the SNR is decreased by approximately $10/\ln (10)\cdot 10^{-n}\approx 4.342944819\cdot 10^{-n}$ dB. This approximation becomes better for increasing values of $n$.
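A sketch verifying that the exact SNR change and the approximation in $(3)$ are asymptotically equivalent (the function names are my own):

```python
import math

def snr_change_exact(n):
    # Exact SNR change (dB) when adding independent noise 10n dB below the floor
    return -10 * math.log10(1 + 10 ** (-n))

def snr_change_approx(n):
    # Asymptotic approximation: -(10 / ln 10) * 10^-n
    return -(10 / math.log(10)) * 10 ** (-n)

for n in (1, 3, 6):
    ratio = snr_change_exact(n) / snr_change_approx(n)
    print(f"n = {n}: exact = {snr_change_exact(n):.3e}, "
          f"approx = {snr_change_approx(n):.3e}, ratio = {ratio:.8f}")
```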

Matt L.
  • Sorry I wasn't very clear, see my update to the last paragraph - I think the answer in the form I suggest is (slightly) more interesting in that it relates directly to another common approximation I was also thinking of. – Dan Boschen Apr 19 '23 at 12:34
  • @DanBoschen: Hmm, but the number in my answer matches perfectly with your result, multiplied by $10^{-n}$, of course. I'll write up a complete answer including the derivation so you can see where my result comes from. – Matt L. Apr 19 '23 at 12:54
  • Yes, you aren’t wrong; I was just looking for a different but equivalent result, but also made it clear that it should be approximately 0.43. But as in my update, what is this using no numerals? – Dan Boschen Apr 19 '23 at 13:10
  • @DanBoschen: I've completed my answer. I'm still not clear about the result you expected, but at least numerically it should be identical ... Maybe someone else can chime in. – Matt L. Apr 19 '23 at 16:55
  • $\frac{10}{\ln(10)} = 4.342944819$ seems to me to pretty much nail this coffin shut. – robert bristow-johnson Apr 19 '23 at 21:16
  • an additional (+1) if I could for clarifying independent noise and somehow I missed when I first saw this that you did make use of the approximation I was thinking of (so the approach I took wasn't really as different as I thought). The factor does indeed converge to exactly the inverse of ln(10) for n at infinity (while the result also does go to zero) so it is more than just an approximation. Can the result in the limit as x goes to 0 be both $x=0$ AND $x = \log(e) \cdot 10^{-n}$? Seems to hold up. – Dan Boschen Apr 20 '23 at 10:56
  • Yes, in fact we did exactly the same. We just expressed that constant in two different but equivalent ways because $\log e\cdot\textrm{ln}10=1$. The limit for $n\to\infty$ is of course zero, but the approximation $\log e\cdot 10^{-n}$ is asymptotically equivalent to the exact result for $n\to\infty$, i.e., for large $n$ the exact result behaves as $\log e\cdot 10^{-n}$ (and the limit of both is of course zero). – Matt L. Apr 20 '23 at 11:39
  • @DanBoschen: In more mathematical terms, if $f(n)$ is the exact value and $g(n)$ is the approximation, we have $\lim_{n\to\infty}f(n)/g(n)=1$. In this case $f(n)$ and $g(n)$ are said to be asymptotically equivalent. – Matt L. Apr 20 '23 at 11:45
  • @MattL.good distinction. Thanks for that – Dan Boschen Apr 20 '23 at 11:48