
I have been wondering: what is the intuition behind the well-known result $E(XY) = E(X)E(Y)$ for independent random variables $X, Y$?

I found this post: here, which more or less solves the problem.

However, the explanation given there is not clear enough for me.

What I think: besides independence, we may assume without loss of generality that both random variables $X$ and $Y$ are simple (the general case follows by approximation), so each can be written as a finite sum of indicators; taking $X$ first:

$X = \sum^n_{i=1} a_i 1_{A_i}$; one can then compute the product $XY$ and take the expectation.
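Concretely, writing $Y = \sum^m_{j=1} b_j 1_{B_j}$ in the same way (with the canonical choices $A_i = \{X = a_i\}$ and $B_j = \{Y = b_j\}$), I believe the computation goes like this:

$$XY = \sum_{i=1}^{n}\sum_{j=1}^{m} a_i b_j\, 1_{A_i \cap B_j}, \qquad E[XY] = \sum_{i,j} a_i b_j\, P(A_i \cap B_j) = \sum_{i,j} a_i b_j\, P(A_i)\,P(B_j) = E[X]\,E[Y],$$

where the middle step uses the independence of the events $A_i \in \sigma(X)$ and $B_j \in \sigma(Y)$.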

But could somebody please explain the intuition behind it to me?

I really want to understand the result of that post (which I believe is correct).

Thank you all!

kentropy
  • Your idea is correct. If $X$ and $Y$ are simple, then the identity is easily proved and this is what is done in the link. For the general case, any random variable can be approximated by simple random variables and the result follows by a limiting argument. – Sangchul Lee Nov 19 '17 at 00:13
  • For a different perspective, it may be helpful to know the concept of conditional expectation. In general we have $$\mathbb{E}[XY] = \mathbb{E}[\mathbb{E}[Y\mid X]X]$$ which says that the expectation of $XY$ can be computed by first averaging $Y$ over the information given by $X$ and then taking the unconditional average (a short sketch of this argument is written out after these comments). But if $X$ and $Y$ are independent, then $\mathbb{E}[Y\mid X]=\mathbb{E}[Y]$ and the identity follows. – Sangchul Lee Nov 19 '17 at 00:14
  • I see. Thanks for your replies. I guess for the last part you meant $E[Y|X] = E[Y]$ for $Y$ independent of $X$. – kentropy Nov 19 '17 at 00:19
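A short sketch of the conditional-expectation argument from the comment above (assuming $\mathbb{E}|XY|<\infty$ so that all the expectations are well defined):

$$\mathbb{E}[XY] = \mathbb{E}\big[\mathbb{E}[XY\mid X]\big] = \mathbb{E}\big[X\,\mathbb{E}[Y\mid X]\big] = \mathbb{E}\big[X\,\mathbb{E}[Y]\big] = \mathbb{E}[X]\,\mathbb{E}[Y],$$

where the second equality pulls the $\sigma(X)$-measurable factor $X$ out of the conditional expectation, and the third uses independence through $\mathbb{E}[Y\mid X]=\mathbb{E}[Y]$.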

2 Answers


It is hard to give a precise answer since you are asking for intuition.

Suppose that for a fixed number $b$ you compute $bX$. What is the expected value of this computation? Well, if the realization of the variable $X$ was made independently of the choice of the number $b$, then your computation will produce $b\,E[X]$ on average. Now make $b$ random...
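Here is a minimal numerical sketch of that intuition (NumPy; the exponential and normal distributions and the sample size are arbitrary choices made only for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Step 1: b is a fixed number and X is random.
b = 3.0
X = rng.exponential(scale=2.0, size=n)       # E[X] = 2
print(np.mean(b * X), b * np.mean(X))        # both close to b * E[X] = 6

# Step 2: now make b random, drawn independently of X.
B = rng.normal(loc=5.0, scale=1.0, size=n)   # E[B] = 5, independent of X
print(np.mean(B * X), np.mean(B) * np.mean(X))  # both close to E[B] * E[X] = 10
```

If $B$ instead depended on $X$ (say $B = X$), the two numbers printed in the last line would no longer agree.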

user334639
  • Thanks for your reply. Yeah, I see. That's the general intuition behind independence of events, which, as I see it, carries over naturally to expectations.

    However, when I wrote my question, my primary intention was to think in terms of mappings; consider this:

    we know that for any Borel set $B \in \mathcal{B}(\mathbb{R})$ the preimage $X^{-1}(B) = \{ \omega \in \Omega : X(\omega) \in B \}$ is a set in $\Omega$, and something equivalent holds for $Y$, such that these two sets are independent in $\Omega$. Then the expected values of the mappings over these two independent sets should multiply. So I'm wondering whether this logic is correct?

    – kentropy Nov 19 '17 at 00:22
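For reference, a compact restatement of that setup (this is just the standard definition of independence in terms of preimages, nothing beyond it): independence of $X$ and $Y$ means that for all Borel sets $A, B \in \mathcal{B}(\mathbb{R})$

$$P\big(X^{-1}(A) \cap Y^{-1}(B)\big) = P\big(X^{-1}(A)\big)\,P\big(Y^{-1}(B)\big),$$

i.e. the events $\{X \in A\}$ and $\{Y \in B\}$ are independent in $\Omega$; this is exactly the fact used in the simple-function computation above when evaluating $P(A_i \cap B_j) = P(A_i)\,P(B_j)$.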

If $X$ and $Y$ are jointly continuous with joint density $f_{XY}$, we know that

$E[XY]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}xyf_{XY}(x,y)dxdy$

If the RVs are independent, then $f_{XY}(x,y)=f_{X}(x)f_{Y}(y)$, so

$E[XY]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}xyf_{X}(x)f_{Y}(y)\,dx\,dy \\ \qquad \quad=\int_{-\infty}^{\infty}xf_{X}(x)\,dx\int_{-\infty}^{\infty}yf_{Y}(y)\,dy \\ \qquad \quad = E[X]E[Y]$

For jointly Gaussian $X, Y$, uncorrelatedness $(\rho_{xy}=0$, i.e. $\Sigma = \mathrm{diag}\{\sigma_{x}^2,\sigma_{y}^2\})$ makes the joint density factor, $f_{XY}(x,y)=f_{X}(x)f_{Y}(y)$, and hence implies independence (this is not true for general densities). So in the Gaussian case, $E[XY]=E[X]E[Y]$ already tells us that one variable contains no information about the other.
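To make the Gaussian remark explicit, here is the factorization of the zero-mean bivariate normal density when $\rho_{xy}=0$ (written out only as an illustration):

$$f_{XY}(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho_{xy}^2}}\exp\!\left(-\frac{1}{2(1-\rho_{xy}^2)}\left[\frac{x^2}{\sigma_x^2}-\frac{2\rho_{xy}xy}{\sigma_x\sigma_y}+\frac{y^2}{\sigma_y^2}\right]\right)\;\overset{\rho_{xy}=0}{=}\;\frac{e^{-x^2/(2\sigma_x^2)}}{\sqrt{2\pi}\,\sigma_x}\cdot\frac{e^{-y^2/(2\sigma_y^2)}}{\sqrt{2\pi}\,\sigma_y}=f_X(x)\,f_Y(y),$$

so for jointly Gaussian variables uncorrelatedness already forces independence.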