
Let $X_1, X_2, ...$ be independent random variables.

Define $$\mathscr{T}_n = \sigma(X_{n+1}, X_{n+2}, \ldots)$$ and $$\mathscr{T} = \bigcap_{n} \mathscr{T}_n,$$ the tail σ-algebra of $(X_1, X_2, \ldots)$.

Are $\sigma(X_1), \sigma(X_2), ...$ independent of $\mathscr{T}$?

If so, why?

If not, why, and what about $$\sigma(X_1), \sigma(X_2), \ldots, \sigma(X_k) \quad \forall k \in \mathbb{N}?$$

All I have so far: if $X_1, X_2, \ldots$ were events instead of random variables, then for every $k \in \mathbb{N}$ the events $X_1, X_2, \ldots, X_k$ would be independent of certain events in $\mathscr{T}$, such as $\limsup_n X_n$.

  • As an aside, what happens now to @MichaelHardy's nice answer, which presents an example quite relevant to the question as it was before you modified it? – Did Jul 24 '15 at 16:20
  • The Kolmogorov 0-1 law exactly answers this question. It asserts that the tail sigma field is almost trivial. So every event in it has probability 0 or 1, meaning that the event is independent of everything. – Nate Eldredge Jul 24 '15 at 17:31
  • @NateEldredge When you say 'the event' do you mean 'every event in it' ? – BCLC Jul 25 '15 at 09:10
  • @NateEldredge Cool observation, but I think the Kolmogorov 0-1 law isn't stated the same way in all textbooks. Some state it in terms of events rather than random variables. – BCLC Jul 25 '15 at 09:13

2 Answers


[This answers the original edition of the question, which did not assume $X_1,X_2,X_3,\ldots$ are independent.]

No. Suppose $R\sim\mathrm{Uniform}(0,1)$ and, given $R$, the variables $Y_1,Y_2,Y_3,\ldots$ are i.i.d. $\mathrm{Bernoulli}(R)$.

Then the strong law of large numbers implies that $$ \Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \mid R\right) = 1. $$ So \begin{align} & \Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \right) \\[10pt] = {} & \operatorname{E} \left( \Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \mid R\right) \right) = \operatorname{E}(1) = 1. \end{align}

Let $X_n= (Y_1+\cdots+Y_n)/n$, so in particular $X_1=Y_1$. The event that $\lim_{n\to\infty} X_n = r$ is in the tail sigma-algebra of $X_1,X_2,X_3,\ldots$. But $X_1$ is not independent of that event: since $\lim_{n\to\infty} X_n = R$ almost surely, conditioning on this event amounts to conditioning on $R=r$, and $\Pr(X_1=1\mid R=r)=r$, which depends on $r$.

If you want a tail event whose probability is strictly between $0$ and $1$, observe that $\lim\limits_{n\to\infty} X_n \overset{\text{a.s.}}= R \sim\mathrm{Uniform}(0,1)$, so $\Pr(X_1=1) = \operatorname{E}(\Pr(X_1=1\mid \lim\limits_{n\to\infty} X_n)) = \operatorname{E}(\lim\limits_{n\to\infty} X_n) = 1/2$, and compute $\Pr(X_1=1\mid \lim\limits_{n\to\infty} X_n>1/2) = \operatorname{E}(R\mid R>1/2) = 3/4 \neq 1/2$.
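For readers who like to see this numerically, here is a minimal Monte Carlo sketch in Python/NumPy (my addition, not part of the original answer). Since $\lim_n X_n = R$ almost surely, the tail event $\{\lim_n X_n > 1/2\}$ agrees with $\{R>1/2\}$ up to a null set, so the simulation conditions on $R>1/2$ directly; all names (`trials`, `tail`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of the example above: draw R ~ Uniform(0,1), then
# X_1 = Y_1 ~ Bernoulli(R) given R.  The tail event {lim X_n > 1/2}
# equals {R > 1/2} up to a null set, so we condition on R > 1/2.
trials = 200_000
R = rng.uniform(0.0, 1.0, size=trials)
X1 = rng.random(trials) < R        # X_1 = Y_1, a Bernoulli(R) draw

tail = R > 0.5                     # stand-in for {lim X_n > 1/2}
print(X1.mean())                   # approx 0.5  = Pr(X_1 = 1)
print(X1[tail].mean())             # approx 0.75 = Pr(X_1 = 1 | tail event)
```

The two printed values differing (about $0.5$ versus about $0.75$) is exactly the failure of independence between $\sigma(X_1)$ and the tail event.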


This is merely a slight extension of Nate's comment, but it should answer your question:

On $(\Omega,\mathcal{A},\mathbf{P})$ we have a sequence $(X_i)_{i\in \mathbb{N}}$ of independent RVs $X_i:(\Omega,\mathcal{A})\to(\Omega_i,\mathcal{A}_i)$ and the corresponding sequence $(\sigma(X_i))_{i\in \mathbb{N}}$ of independent sub-$\sigma$-algebras of $\mathcal{A}$, where $\sigma(X_i):=\{X_i^{-1}(A):A\in\mathcal{A}_i\}$.

Now we apply Kolmogorov's 0-1 law, which states that every event in the tail $\sigma$-algebra (also called the terminal $\sigma$-algebra) $\mathcal{T}$ is trivial, i.e. it almost surely happens or almost surely doesn't:
$$ \forall A\in\mathcal{T}:\quad\mathbf{P}(A)\in\{0,1\}. $$

This means that $\mathcal{T}$ is independent of every $\sigma(X_i)$, $i\in\mathbb{N}$, since for $A\in\mathcal{T}$ and $B\in\sigma(X_i)$ it holds that
$$ \mathbf{P}(A\cap B)=\mathbf{P}(A)\mathbf{P}(B). \tag 1 $$

Indeed, either $\mathbf{P}(A)=0$, in which case $A\cap B\subseteq A$ gives $\mathbf{P}(A\cap B)=0=\mathbf{P}(A)\mathbf{P}(B)$, or $\mathbf{P}(A)=1$. In the latter case $A\subseteq A\cup B$ forces $\mathbf{P}(A\cup B)=1$, so inclusion-exclusion again yields the stated equality $(1)$:
$$ \mathbf{P}(A\cap B)=\mathbf{P}(A)+\mathbf{P}(B)-\mathbf{P}(A\cup B)=1+\mathbf{P}(B)-1=\mathbf{P}(B)=\mathbf{P}(A)\mathbf{P}(B). $$

This proves that $\mathcal{T}$ is independent of each of the $\sigma(X_i)$, $i\in\mathbb{N}$.
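As a numerical sanity check (my addition, not part of the answer): a genuine tail event can't be sampled in finite time, so the sketch below uses the finite-$n$ proxy $A_n=\{|\bar X_n - 1/2|<\varepsilon\}$ for i.i.d. fair coins. $A_n$ is not itself a tail event, but the influence of $X_1$ on $\bar X_n$ is only $1/n$, so the near-equality of the two printed probabilities mirrors the exact independence the 0-1 law gives in the limit. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-n proxy for a tail event of i.i.d. fair coins:
# A_n = {|mean(X_1..X_n) - 1/2| < eps}.  A_n is NOT a tail event, but
# X_1 contributes only 1/n to the mean, so A_n is nearly independent
# of sigma(X_1) for large n, as the 0-1 law predicts in the limit.
trials, n, eps = 50_000, 1_000, 0.02
X = rng.integers(0, 2, size=(trials, n), dtype=np.int8)
A = np.abs(X.mean(axis=1) - 0.5) < eps
B = X[:, 0] == 1

print(A.mean() * B.mean())   # Pr(A) * Pr(B)
print((A & B).mean())        # Pr(A and B): approximately the product
```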
