I have an experimental physics/statistics dilemma, mainly related to determining the uncertainty of the plasma electron temperature when it is measured experimentally with a Langmuir probe.
So the electron current model is given by
$$I_{e}(V_{b})=I_{e}^{\textrm{sat}}\cdot\exp{\left(\frac{V_{b}-V_{s}}{T_{eV}}\right)}$$
where $I_{e}(V_{b})$ is the electron current measured when a bias potential $V_{b}$ is applied to the probe tip, $I_{e}^{\textrm{sat}}$ is the electron saturation current, $V_{s}$ is the plasma potential, and $T_{eV}$ is the electron temperature in electron-volts. Taking the logarithm linearizes the data, $\ln I_{e} = \ln I_{e}^{\textrm{sat}} + (V_{b}-V_{s})/T_{eV}$, so a linear least-squares fit of $\ln I_{e}$ versus $V_{b}$ gives a slope of $1/T_{eV}$.
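For concreteness, this is a minimal sketch of how I obtain a single $T_{eV}$ and its standard error from the linearized fit (Python, with synthetic data; the numerical values are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# Synthetic electron-current data (hypothetical values, for illustration only):
# assume T_eV = 3 eV, V_s = 10 V, I_e_sat = 1e-3 A, plus a bit of noise.
rng = np.random.default_rng(0)
T_eV_true, V_s, I_e_sat = 3.0, 10.0, 1e-3
V_b = np.linspace(-5.0, 8.0, 40)                      # bias sweep below V_s
I_e = I_e_sat * np.exp((V_b - V_s) / T_eV_true)
I_e *= rng.normal(1.0, 0.03, size=V_b.size)           # ~3 % multiplicative noise

# Linearization: ln(I_e) = ln(I_e_sat) + (V_b - V_s)/T_eV,
# so a straight-line fit of ln(I_e) vs V_b has slope 1/T_eV.
fit = stats.linregress(V_b, np.log(I_e))
slope, slope_err = fit.slope, fit.stderr

T_eV = 1.0 / slope
T_eV_err = slope_err / slope**2        # propagate: sigma_T = sigma_slope / slope^2

print(f"T_eV = {T_eV:.3f} +/- {T_eV_err:.3f} eV")
```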
Least-squares regression provides a standard error for the fitted slope $1/T_{eV}$, but how should these standard errors be taken into account when multiple values of $1/T_{eV}$ are determined (e.g., from repeated probe sweeps), so that the best estimate of the uncertainty in $T_{eV}$ is obtained?
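One option I have considered is an inverse-variance weighted mean of the individual slopes, but I am not sure this is the statistically correct way to report the final uncertainty. A sketch of that combination (with made-up numbers, for illustration only):

```python
import numpy as np

# Hypothetical example: slopes m_i = 1/T_eV,i and their standard errors
# from several independent sweeps (made-up numbers for illustration).
m     = np.array([0.331, 0.340, 0.325, 0.336])   # 1/T_eV in 1/eV
m_err = np.array([0.006, 0.008, 0.005, 0.007])

# Inverse-variance weighted mean of the slopes and its standard error.
w         = 1.0 / m_err**2
m_bar     = np.sum(w * m) / np.sum(w)
m_bar_err = np.sqrt(1.0 / np.sum(w))

# Propagate back to the temperature.
T_eV     = 1.0 / m_bar
T_eV_err = m_bar_err / m_bar**2

print(f"T_eV = {T_eV:.3f} +/- {T_eV_err:.3f} eV")
```

Is this the right approach, or should the scatter between the individual $1/T_{eV}$ values also enter the final uncertainty?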