
Suppose $U$ is uniformly distributed on $[0,1]$ and $X_1,X_2$ are identically distributed, non-negative random variables. Assume that $U,X_1,X_2$ are independent. I was asked to calculate $P(U<X_1\mid X_1+X_2=1)$.

(1) I am confused because the event $\{X_1+X_2=1\}$ could have probability 0, i.e. $P(\{X_1+X_2=1\})=0$. In that case, how is the conditional probability above defined?

Answer: I had a misunderstanding of the concept of conditional probability. See https://en.wikipedia.org/wiki/Conditioning_%28probability%29#Conditional_probability_2 for instance.

(2) I had a "feeling" that the answer should be $\frac{1}{2}$, but I was not able to give a formal proof. Is this answer correct, and how can it be proved?

Answer: Using the definition from Wikipedia, the problem is simple. The answer (according to the text I'm reading) is $0.5$.
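
For what it's worth, here is a quick Monte Carlo sanity check (my own sketch, not from the text): it approximates the condition $\{X_1+X_2=1\}$ by the slab $|X_1+X_2-1|\le\epsilon$ for a small $\epsilon$, so the names and the tolerance below are illustrative only.

```python
# Monte Carlo sketch: estimate P(U < X1 | X1 + X2 ~= 1) by keeping only
# the samples with |X1 + X2 - 1| <= eps (a thin slab around the event).
import numpy as np

rng = np.random.default_rng(0)
n, eps = 2_000_000, 0.01

u = rng.uniform(0.0, 1.0, n)
x1 = rng.uniform(0.0, 1.0, n)   # also try rng.exponential(1.0, n)
x2 = rng.uniform(0.0, 1.0, n)

mask = np.abs(x1 + x2 - 1.0) <= eps
print(mask.sum())                     # number of retained samples
print(np.mean(u[mask] < x1[mask]))    # close to 0.5
```

Swapping the uniform draws of $X_1,X_2$ for exponential ones gives an estimate close to $0.5$ as well.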

INvisibLE
  • "probability 0" and "impossible" are not the same. The conditional probability must of course refer to an event that can occur. – Peter Sep 10 '23 at 13:32
  • This is not clear. We need information regarding the distribution of the $X_i$. – lulu Sep 10 '23 at 13:32
  • @Peter I've edited the question a little bit. – INvisibLE Sep 10 '23 at 13:38
  • OK, now the question makes sense. – Peter Sep 10 '23 at 13:40
  • @lulu The distribution of $X_i$ is not given in the original problem. Maybe you can take some special cases: $X_i$ are (1) uniform on $[0,1]$; (2) exponentially distributed (the pdf is given by $f_X(x)=ce^{-cx}1_{[0,\infty)}(x)$). – INvisibLE Sep 10 '23 at 13:42
  • I don't see what sense the problem makes without information on the $X_i$. In any case, conditioning on events of probability $0$ tends to be situation specific. See this question for instance. – lulu Sep 10 '23 at 13:43
  • If the distribution of $X_i$ has a pdf, then the definition in the answers in the question you mentioned can be applied. What if the distribution of $X_i$ does not have a pdf? – INvisibLE Sep 10 '23 at 13:51
  • This question needs further specification. There is no canonical way to approximate a set of zero measure, and each way will generally lead to different answers. – Andrew Sep 10 '23 at 14:09
  • @Andrew You're right, thanks. – INvisibLE Sep 10 '23 at 14:11
  • @INvisibLE https://math.stackexchange.com/questions/4747274/probability-that-kth-order-statistic-is-s-given-one-of-the-values-is-s#comment10072309_4747274 for further discussion – Andrew Sep 10 '23 at 14:12
  • @Andrew Thanks. That example really makes sense to me. – INvisibLE Sep 10 '23 at 14:15

1 Answer


We have
$$\begin{align}
L&:=\lim_{\epsilon\to0}\mathbb{P}(U\le X_1 \mid 1-\epsilon \le X_1+X_2 \le 1)\\
&=\lim_{\epsilon\to0}\frac{\mathbb{E}\big(\mathbf{1}_{\{U\le X_1\}}\cdot \mathbf{1}_{\{1-\epsilon \le X_1+X_2 \le 1\}}\big)}{\mathbb{E}\big(\mathbf{1}_{\{1-\epsilon \le X_1+X_2 \le 1\}}\big)}\\
&=\lim_{\epsilon\to0}\frac{\mathbb{E}\big(\mathbb{E}(\mathbf{1}_{\{U\le X_1\}}\cdot \mathbf{1}_{\{1-\epsilon \le X_1+X_2 \le 1\}}\mid X_1)\big)}{\mathbb{E}\big(\mathbb{E}(\mathbf{1}_{\{1-\epsilon \le X_1+X_2 \le 1\}}\mid X_1)\big)}\\
&=\lim_{\epsilon\to0}\frac{\mathbb{E}\big(\mathbb{E}(\mathbf{1}_{\{U\le X_1\}}\mid X_1)\cdot\mathbb{P}(1-\epsilon-X_1 \le X_2 \le 1-X_1\mid X_1)\big)}{\mathbb{E}\big(\mathbb{P}(1-\epsilon-X_1 \le X_2 \le 1-X_1\mid X_1)\big)}\\
&=\color{red}{\lim_{\epsilon\to0}\frac{\mathbb{E}\Big(\min\{X_1,1\}\cdot\big(F_X(\max\{1-X_1,0\})-F_X(\max\{1-X_1-\epsilon,0\})\big)\Big)}{\mathbb{E}\Big(F_X(\max\{1-X_1,0\})-F_X(\max\{1-X_1-\epsilon,0\})\Big)}},
\end{align}$$
where $F_X$ denotes the common CDF of $X_1$ and $X_2$. The fourth line factors because, conditionally on $X_1$, $U$ and $X_2$ are independent; the last line uses $\mathbb{E}(\mathbf{1}_{\{U\le X_1\}}\mid X_1)=\min\{X_1,1\}$, which holds since $U\sim\mathcal{U}(0,1)$.
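As a numerical cross-check of the boxed ratio, the following quadrature sketch (my own, assuming SciPy is available; the helper names are illustrative) evaluates the numerator and denominator for shrinking $\epsilon$, here with $F_X$ and the density taken from the $\mathcal{U}(0,1)$ case treated below:

```python
# Evaluate the ratio E(min(X1,1) * w(X1)) / E(w(X1)) by quadrature, where
# w(x) = F(max(1-x, 0)) - F(max(1-x-eps, 0)), for a CDF F and density f.
from scipy.integrate import quad

F = lambda t: min(max(t, 0.0), 1.0)   # CDF of U(0,1)
f = lambda x: 1.0                     # density of U(0,1) on [0, 1]

def ratio(eps):
    w = lambda x: F(max(1.0 - x, 0.0)) - F(max(1.0 - x - eps, 0.0))
    num, _ = quad(lambda x: min(x, 1.0) * w(x) * f(x), 0.0, 1.0)
    den, _ = quad(lambda x: w(x) * f(x), 0.0, 1.0)
    return num / den

for eps in (0.1, 0.01, 0.001):
    print(eps, ratio(eps))   # approaches 0.5 as eps -> 0
```

Swapping in another CDF/density pair (and the appropriate integration range) tests other distributions the same way.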

Take, for example, $X_1$ and $X_2$ following the uniform distribution $\mathcal{U}(0,1)$. Then $\min\{X_1,1\}=X_1$ and $F_X(t)=t$ for $t\in[0,1]$, so
$$\begin{align}
L&=\lim_{\epsilon\to0}\frac{\mathbb{E}\big(X_1\cdot(1-X_1-(1-X_1-\epsilon)^{+})\big)}{\mathbb{E}\big(1-X_1-(1-X_1-\epsilon)^{+}\big)}\\
&=\lim_{\epsilon\to0}\frac{\mathbb{E}\big(X_1(1-X_1)\mathbf{1}_{\{X_1>1-\epsilon\}}+\epsilon X_1\mathbf{1}_{\{X_1\le 1-\epsilon\}}\big)}{\mathbb{E}\big((1-X_1)\mathbf{1}_{\{X_1>1-\epsilon\}}+\epsilon\,\mathbf{1}_{\{X_1\le 1-\epsilon\}}\big)}\\
&=\lim_{\epsilon\to0}\frac{\int_{1-\epsilon}^1 x(1-x)\,dx+\epsilon\int_0^{1-\epsilon}x\,dx}{\int_{1-\epsilon}^1(1-x)\,dx+\epsilon\int_0^{1-\epsilon}dx}\\
&=\lim_{\epsilon\to0}\frac{\frac{1}{2}\epsilon-\frac{1}{2}\epsilon^2+\frac{1}{6}\epsilon^3}{\epsilon-\frac{1}{2}\epsilon^2}\\
\color{red}{L}&\color{red}{\,=\frac{1}{2}.}
\end{align}$$
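If one wants to double-check the integrals and the limit symbolically, here is a short sympy sketch (my own, not part of the original answer):

```python
# Symbolic check of the uniform case: build the numerator and denominator
# as functions of epsilon and take the limit epsilon -> 0.
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
num = sp.integrate(x*(1 - x), (x, 1 - eps, 1)) + eps*sp.integrate(x, (x, 0, 1 - eps))
den = sp.integrate(1 - x, (x, 1 - eps, 1)) + eps*sp.integrate(1, (x, 0, 1 - eps))
print(sp.expand(num), '|', sp.expand(den))   # the two polynomials in epsilon
print(sp.limit(num/den, eps, 0))             # 1/2
```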

For the case where $X_1$ and $X_2$ follow the exponential distribution $\mathcal{E}(c)$, plug $F_X(t)=1-e^{-ct}$ into the boxed formula. For $X_1\le 1-\epsilon$ we get $F_X(1-X_1)-F_X(1-X_1-\epsilon)=e^{-c(1-X_1)}(e^{c\epsilon}-1)$, while the region $\{X_1>1-\epsilon\}$ contributes only $O(\epsilon^2)$; the common factor $(e^{c\epsilon}-1)e^{-c}$ cancels between numerator and denominator, leaving
$$L=\frac{\mathbb{E}\big(X_1 e^{cX_1}\mathbf{1}_{\{X_1\le 1\}}\big)}{\mathbb{E}\big(e^{cX_1}\mathbf{1}_{\{X_1\le 1\}}\big)}=\frac{\int_0^1 xe^{cx}\cdot ce^{-cx}\,dx}{\int_0^1 e^{cx}\cdot ce^{-cx}\,dx}=\frac{\int_0^1 cx\,dx}{\int_0^1 c\,dx}=\color{red}{\frac{1}{2}},$$
independently of $c$. This is consistent with the fact that, for i.i.d. exponentials, the conditional distribution of $X_1$ given $X_1+X_2=1$ is uniform on $[0,1]$.
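A corresponding sympy check (again my own sketch; $c>0$ is the rate parameter):

```python
# Exponential case: the e^{c x} weight against the density c e^{-c x}
# cancels, so the ratio is 1/2 for every rate c > 0.
import sympy as sp

x, c = sp.symbols('x c', positive=True)
num = sp.integrate(x*sp.exp(c*x)*c*sp.exp(-c*x), (x, 0, 1))
den = sp.integrate(sp.exp(c*x)*c*sp.exp(-c*x), (x, 0, 1))
print(sp.simplify(num/den))   # 1/2
```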

NN2
  • The second line is wrong, you are dividing by zero – Andrew Sep 10 '23 at 13:57
  • No, it is zero. The denominator is $E(\mathbf{1}(X_1+X_2=1))$ – Andrew Sep 10 '23 at 14:00
  • Yes, there clearly must be an error because the original number is zero. – Andrew Sep 10 '23 at 14:04
  • If $X_1,X_2$ are independent continuous random variables, then so is $X_1+X_2$, and $\{1\}$ has Lebesgue measure zero – Andrew Sep 10 '23 at 14:06
  • @Andrew I don't find any error in my answer, but I'm pretty sure the result must be as I showed. Another way to prove this is to replace the condition $\{X_1+X_2=1\}$ by $\lim_{\epsilon\to 0}\{X_1+X_2\in(1,1+\epsilon)\}$, and we obtain the same result. I would be really happy if you could show me the error in my current proof. – NN2 Sep 10 '23 at 14:14
  • @Andrew Another approach is used to obtain the same result. – NN2 Sep 10 '23 at 20:54