As you can find throughout the literature, the age of the universe can be estimated easily as $$t_0 = H_0^{-1} \approx 14\,\mathrm{Gyr}$$
for the accepted value of the Hubble constant $H_0$. But recently I read that this result relies on the assumption of "constant expansion" (the source is in fact this one), which is related to the density parameter $\Omega$,
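As a quick numerical check of that figure (assuming the common round value $H_0 \approx 70\ \mathrm{km/s/Mpc}$, which is my choice here, not stated in the texts I quote):

```python
# Hubble time t0 = 1/H0, assuming H0 = 70 km/s/Mpc.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

H0 = 70.0 / KM_PER_MPC            # Hubble constant in 1/s
t0_gyr = 1.0 / H0 / SEC_PER_GYR   # Hubble time in Gyr

print(f"Hubble time: {t0_gyr:.1f} Gyr")  # ~14.0 Gyr
```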
$$\Omega = \frac{\rho}{\rho_c}$$
And this constant expansion that we imposed to find the age of the universe occurs when $\Omega = 0$, i.e. when the universe is empty.
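If I understand "constant expansion" correctly, it means $\dot a$ is constant, and then the Hubble time follows in one line:
$$a(t) = \dot a\, t \quad\Longrightarrow\quad H(t) = \frac{\dot a}{a} = \frac{1}{t} \quad\Longrightarrow\quad t_0 = \frac{1}{H_0}.$$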
- Why is this taken as an approximation, if we know for sure that part of the universe is not empty? Approximately,
$$\Omega_m = 0.05, \quad \Omega_{DM} = 0.25$$
$\qquad$ for ordinary matter and dark matter respectively. Is the remaining 70% enough to justify neglecting these densities?
I also found that for a universe with exactly the critical density, $\rho = \rho_c$, i.e. $\Omega = 1$ (a universe with just enough density to keep expanding and not collapse), the age of the universe would be
$$t_0 = \frac{2}{3H_0} \approx 9\,\mathrm{Gyr},$$
which is less than the accepted value ($\approx 14\,\mathrm{Gyr}$).
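The same check as above for the $\Omega = 1$ case (again with my assumed $H_0 \approx 70\ \mathrm{km/s/Mpc}$) gives roughly two thirds of the Hubble time:

```python
# Matter-dominated (Omega = 1) age: t0 = 2/(3*H0), assuming H0 = 70 km/s/Mpc.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

H0 = 70.0 / KM_PER_MPC                          # Hubble constant in 1/s
t0_matter_gyr = 2.0 / (3.0 * H0) / SEC_PER_GYR  # age in Gyr

print(f"Omega = 1 age: {t0_matter_gyr:.1f} Gyr")  # ~9.3 Gyr
```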
- How can this expression $\frac{2}{3H_0}$ be derived? And how is it related to the expansion rate $H$?
I don't know if I have mixed up some concepts, and I would be thankful for any help.