
I know that the error probability for binary antipodal signaling is $Q(\sqrt{2E_b/N_0})$ with $Q$ the tail function.

I was wondering how the error probability changes when this is used in an $m$-dimensional scheme. What I mean is a signaling scheme where we still have only two antipodal codewords, but in an $\mathbb{R}^m$ domain, i.e. $c_1 = +\sqrt{E_b/m}\,(1,\dots,1)$ and $c_2 = -\sqrt{E_b/m}\,(1,\dots,1)$.

Thank you for reading, and have a great day.

james

3 Answers


Let the received vector $\vec{y}$ be given as $$\vec{y} = \vec{x}+ \vec{w}$$ where $\vec{x}$ can be either $c_1$ or $c_2$. The ML detection rule for deciding $c_1$ is $$ \frac{pdf_Y(y;c_1)}{pdf_Y(y;c_2)} > 1\tag{1}$$ with $$pdf_Y(y;c_1) = (2\pi)^{-M/2}\sigma^{-M}e^{-\frac{\Vert \vec{y} - \vec{c_1} \Vert^2}{2\sigma^2}}\tag{2}$$ and similarly $$pdf_Y(y;c_2) = (2\pi)^{-M/2}\sigma^{-M}e^{-\frac{\Vert \vec{y} - \vec{c_2} \Vert^2}{2\sigma^2}}\tag{3}$$ Using $c_1 = -c_2$, substituting (2) and (3) into (1), taking the log of both sides, and simplifying, we get the detection rule for deciding $c_1$: $$ \vec{y}^Tc_1 > 0.$$

Now an error will happen when $c_2$ gets transmitted but $ \vec{y}^Tc_1 > 0$.

Therefore we have $$ (c_2 + \vec{w})^Tc_1 > 0.$$ Since $c_2^Tc_1 = -\Vert c_1\Vert^2$, this is equivalent to $z > \Vert c_1\Vert^2$, where $z= \vec{w}^Tc_1$ is a Gaussian random variable with mean 0 and variance $\frac{E_b}{M}M\sigma^2 = E_b\sigma^2$. Normalizing $z$ by its standard deviation (to convert it to a standard normal with mean 0 and standard deviation 1) gives $$\frac{z}{\sqrt{E_b\sigma^2}} > \frac{\Vert c_1\Vert^2}{\sqrt{E_b\sigma^2}}.$$ Now $\Vert c_1\Vert^2 = E_b$, so the probability of error is $$Q\left(\sqrt{\frac{E_b}{\sigma^2}}\right),$$ and since $\sigma^2 = \frac{N_0}{2}$ (per dimension), the error probability is $$Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$$
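The variance claim for $z = \vec{w}^T c_1$ can be checked numerically. This is a minimal sketch; the values of $M$, $E_b$, and $\sigma$ are illustrative choices, not from the answer:

```python
import math
import random

random.seed(0)
M, Eb, sigma = 4, 1.0, 0.7
c1 = [math.sqrt(Eb / M)] * M          # c1 = +sqrt(Eb/M) * (1, ..., 1)

# Sample z = w^T c1 with w having IID N(0, sigma^2) components
zs = []
for _ in range(200_000):
    w = [random.gauss(0.0, sigma) for _ in range(M)]
    zs.append(sum(wi * ci for wi, ci in zip(w, c1)))

mean_z = sum(zs) / len(zs)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
print(var_z, Eb * sigma ** 2)         # empirical vs. (Eb/M)*M*sigma^2 = Eb*sigma^2
```

The empirical variance should land close to $E_b\sigma^2$ regardless of $M$, since the factor $M$ from the sum cancels the $1/M$ in each component's energy.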

The assumption is that the noise components of $\vec{w}$ are independent and identically distributed.

For the simple case of binary antipodal signaling we have an energy per bit of $E_b$ and noise in one dimension. In the extended case the energy per dimension is $E_b/M$, while the total energy per bit is still $E_b$; hence the expression above.

Note: If the noise along each dimension is not IID, we would not get this same expression, even though the Euclidean distance between the signal points remains the same. But you can use the approach presented here to compute the bit error rate.
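The whole chain above (transmit $c_2$, apply the rule $\vec{y}^T c_1 > 0$) can be verified with a short Monte Carlo sketch; the parameter values ($M$, $E_b/N_0$, trial count) are illustrative:

```python
import math
import random

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
M, Eb, N0, trials = 4, 1.0, 1.0, 100_000
sigma = math.sqrt(N0 / 2)             # noise standard deviation per dimension
c1 = [math.sqrt(Eb / M)] * M

errors = 0
for _ in range(trials):
    # Transmit c2 = -c1; an error occurs when y^T c1 > 0.
    y = [-ci + random.gauss(0.0, sigma) for ci in c1]
    if sum(yi * ci for yi, ci in zip(y, c1)) > 0:
        errors += 1

ber = errors / trials
theory = q(math.sqrt(2 * Eb / N0))    # independent of M
print(ber, theory)
```

Rerunning with different $M$ leaves the simulated error rate unchanged, matching the derivation.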

Dsp guy sam

The probability of error will be the same, since you increased the dimensionality by a factor of $m$ but reduced the energy per dimension by $m$. Remember, the two constellation points $\pm\sqrt{\frac{E_b}{m}}[1,1,..,1]$ still lie on a line in $m$-dimensional hyper-space, with noise variance $N_0/2$ along that line. Nothing else has changed (the symbols are equally probable), so

$P_e=Q(\sqrt{\frac{2E_s}{N_0}}) = Q(\sqrt{\frac{2m(E_b/m)}{N_0}})=Q(\sqrt{\frac{2E_b}{N_0}})$
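A one-liner makes the cancellation explicit; $E_b$ and $N_0$ values here are illustrative:

```python
import math

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

Eb, N0 = 1.0, 1.0
# Pe for several m: the m in Es = m*(Eb/m) cancels, so every entry is identical
pes = [q(math.sqrt(2 * m * (Eb / m) / N0)) for m in (1, 2, 4, 8, 16)]
print(pes)
```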

jithin

Another way to write $Q\bigg(\sqrt{\frac{2E_b}{N_0}}\bigg)$ is in terms of the minimum distance, as $Q\bigg( \sqrt{\frac{d_{min}^2}{2N_0}}\bigg)$. The minimum distance for the binary antipodal case is $d_{min, 2}=2\sqrt{E_b}$, and the minimum distance for the $M$-dimensional case can be computed. If we denote the two symbols by $\mathbf{s}_1=\sqrt{\frac{E_b}{M}}\big[1, 1, .., 1\big]^T$ and $\mathbf{s}_2=-\sqrt{\frac{E_b}{M}}\big[1, 1, .., 1\big]^T$, then $d_{min, M}$ is found by computing the distance between them; since there are only two symbols, this is the minimum distance.

\begin{align} d_{min, M} &= \sqrt{\sum_{i=1}^M \big(s_{1, i} - s_{2, i} \big)^2}\\ &=\sqrt{\sum_{i=1}^M \bigg(\sqrt{\frac{E_b}{M}}+\sqrt{\frac{E_b}{M}}\bigg)^2}\\ &=\sqrt{\sum_{i=1}^M \bigg(2\sqrt{\frac{E_b}{M}}\bigg)^2}\\ &=\sqrt{4M\frac{E_b}{M}}\\ &=2\sqrt{E_b}=d_{min, 2} \end{align}

You can see that the $M$ cancels out, so it changes neither the minimum distance nor the probability of error, and $d_{min, 2}=d_{min, M}$.
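The same computation can be sketched numerically for a few values of $M$; the value of $E_b$ is an illustrative choice:

```python
import math

Eb = 2.0
d_mins = []
for M in (1, 2, 4, 8):
    a = math.sqrt(Eb / M)
    s1 = [a] * M                       # s1 = +sqrt(Eb/M) * [1, ..., 1]
    s2 = [-a] * M                      # s2 = -sqrt(Eb/M) * [1, ..., 1]
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(s1, s2)))
    d_mins.append(d)
    print(M, d)                        # always 2*sqrt(Eb)
```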

Engineer