I wonder how absorption spectra of a sample (e.g. a gas with some $\mathrm{CO_2}$ absorbing strongly at $\approx 15\,\mu\mathrm{m}$) are measured by IR spectroscopy. I have the following arrangement in mind, consisting of a source, a detector and a sample of optical thickness $\tau$ at a given wavelength:
There is a general radiation law I learned (*). It says that (without scattering) the spectral intensity after passing through a medium is the original intensity attenuated by absorption, plus the radiation emitted by the sample itself:
$$ I_\nu (\tau) = I_\nu (0) \cdot e^{- \tau } + I_\nu^B\cdot \left(1-e^{-\tau} \right) \tag{1}$$
where $I_\nu^B$ is the Planck radiation intensity from the sample at a given temperature.
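For concreteness, here is a minimal numeric sketch of equation (1) in Python (the SI frequency units and the function names are my own choices, not from the book):

```python
import numpy as np

# Physical constants (SI units)
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m / s
kB = 1.381e-23  # Boltzmann constant, J / K

def planck_intensity(nu, T):
    """Planck spectral intensity B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def transmitted_intensity(I0, tau, I_B):
    """Equation (1): attenuated source term plus thermal emission of the sample."""
    return I0 * np.exp(-tau) + I_B * (1.0 - np.exp(-tau))
```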
Now, for weak absorption $\tau \ll 1$ this can be approximated as
$$ I_\nu (\tau) = I_\nu (0) \cdot (1- \tau ) + I_\nu^B\cdot \tau \tag{2}$$
so the drop in intensity I would measure at the detector is
$$ I_\nu (0) - I_\nu (\tau) = (I_\nu(0) - I_\nu^B) \cdot \tau \tag{3}$$
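As a sanity check on the linearization, here is a quick sketch comparing the exact drop from (1) with the approximate drop from (3) for a few small values of $\tau$ (the intensity values are arbitrary placeholders I picked, with $I_\nu^B \ll I_\nu(0)$):

```python
import numpy as np

# Compare the exact drop I0 - I(tau) from Eq. (1) with the
# linearized form (I0 - I_B) * tau from Eq. (3).
I0, I_B = 1.0, 0.05   # arbitrary illustrative intensities
for tau in [0.001, 0.01, 0.1]:
    exact = I0 - (I0 * np.exp(-tau) + I_B * (1.0 - np.exp(-tau)))
    linear = (I0 - I_B) * tau
    print(f"tau={tau:5.3f}  exact={exact:.6f}  linear={linear:.6f}")
```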
What I'm actually interested in, however, is
$$ I_\nu(0) \cdot \alpha \cdot \Delta x = I_\nu(0) \cdot \tau \tag{4}$$
because this gives my absorption coefficient $\alpha$.
It seems to me that the absorption is always underestimated a little because of the black-body part. So how do I get rid of it? Is it simply neglected? Or is there a flaw in my whole reasoning?
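To get a rough feeling for the size of that black-body term, here is a sketch that estimates the fractional underestimate $I_\nu^B / I_\nu(0)$ at the 15 μm band, assuming the source itself radiates roughly like a black body at some temperature (the 1200 K source and 300 K sample temperatures below are placeholders I picked, not measured values):

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_intensity(nu, T):
    """Planck spectral intensity B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

nu = c / 15e-6       # frequency of the 15 um CO2 band
T_source = 1200.0    # assumed black-body-like source temperature, K
T_sample = 300.0     # assumed gas temperature, K

# From Eq. (3): the measured drop is (I0 - I_B) * tau instead of I0 * tau,
# so alpha is underestimated by the fraction I_B / I0.
relative_error = planck_intensity(nu, T_sample) / planck_intensity(nu, T_source)
print(f"relative underestimate of alpha: {relative_error:.1%}")
```

With these assumed temperatures the ratio comes out at the few-percent level, which is what I mean by "underestimated a little".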
(*) K. N. Liou, An Introduction to Atmospheric Radiation, Chapter 1.4
