$I(x) = I_0 \cdot e^{-\mu x}$
where $\mu$ is the linear attenuation coefficient.
How does this relate to the following?
$N(x) = N_0 \cdot e^{-\mu x}$
where $N$ is the count rate of the beam, typically measured by a GM counter.
A gamma photon is either absorbed or carries on through matter unchanged. This implies that the probability per unit distance travelled, $K$, of a gamma being absorbed is independent of how far the gamma has already travelled. So we have $$-\frac{1}{N}\frac{dN}{dx}=K \qquad\text{that is}\qquad \frac{dN}{dx}=-KN.$$ Integration leads to the equation you have quoted, with $K$ playing the role of $\mu$.
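For completeness, the integration is a standard separation of variables, with $N_0$ the count rate at $x = 0$: $$\int_{N_0}^{N(x)}\frac{dN'}{N'}=-K\int_0^x dx' \quad\Longrightarrow\quad \ln\frac{N(x)}{N_0}=-Kx \quad\Longrightarrow\quad N(x)=N_0\,e^{-Kx}.$$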
Alphas and betas lose their kinetic energy bit by bit in a series of ionisation events, so the further they have travelled, the slower they get, and eventually they stop. This finite range is quite unlike the exponential fall-off of gamma intensity with distance, as the sketch below illustrates.
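Here is a minimal Python sketch of the contrast; the values of $\mu$, $E_0$ and $dE/dx$ are made up for illustration, not real material data:

```python
import numpy as np

# Gammas: exponential attenuation, so the surviving fraction never reaches zero.
# A beta losing energy at a roughly constant rate instead stops at a definite
# depth (its range). Numbers below are illustrative only.
mu = 0.5      # assumed gamma linear attenuation coefficient, cm^-1
E0 = 1.0      # assumed initial beta kinetic energy, MeV
dEdx = 0.25   # assumed constant energy loss per cm, MeV/cm

depths = np.linspace(0.0, 6.0, 7)  # cm

gamma_fraction = np.exp(-mu * depths)              # N(x)/N0, positive at every depth
beta_energy = np.maximum(E0 - dEdx * depths, 0.0)  # hits zero at x = E0/dEdx = 4 cm

for x, g, e in zip(depths, gamma_fraction, beta_energy):
    print(f"x = {x:4.1f} cm   gamma N/N0 = {g:.3f}   beta E = {e:.2f} MeV")
```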
> Why does the formula for decreasing intensity of radiation only work for gamma rays?
Incorrect: the Beer–Lambert law works for many radiation types, including photons (gamma rays, visible light, ...), neutrons, etc. Its general form is $$ A = \varepsilon \ell c, $$ where $A$ is the absorbance, $\varepsilon$ is the molar attenuation coefficient, $\ell$ is the radiation path length, and $c$ is the concentration of the attenuating species.
Absorbance can be defined as $$ A=\ln\left(\frac{I_0}{I}\right), $$ where $I_0$ is the incident intensity and $I$ the transmitted intensity. Substituting this into the equation above and rearranging gives $$ I=I_0\, e^{-\varepsilon \ell c}. $$
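The rearrangement is just exponentiation: $$ \ln\left(\frac{I_0}{I}\right) = \varepsilon \ell c \;\Longrightarrow\; \frac{I_0}{I} = e^{\varepsilon \ell c} \;\Longrightarrow\; I = I_0\, e^{-\varepsilon \ell c}. $$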
This holds when the attenuation coefficient is uniform along the path; otherwise one needs to integrate $\varepsilon(\ell)$ over the path.
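In that non-uniform case the exponent becomes a path integral (written here allowing the concentration to vary along the path as well): $$ I = I_0 \exp\!\left(-\int_0^{\ell} \varepsilon(\ell')\, c(\ell')\, d\ell'\right). $$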