
I recently took part in a work-group discussion where I think some concepts were misused. After searching the scientific literature on the subject, I found that part of the problem is one of definition (what exactly is a LOD and how should we define it?). But I cannot find a legitimate argument for a negative LOD when measuring a strictly positive quantity.

Let us define LOD in this simple way:

The smallest amount of analyte the system is able to detect with a reasonable confidence.

Or the way IUPAC does (even if it relies on a model):

The limit of detection, expressed as the concentration, $c_L$, or the quantity, $q_L$, is derived from the smallest measure, $x_L$, that can be detected with reasonable certainty for a given analytical procedure. The value of $x_L$ is given by the equation $x_L = \bar{x}_{bi} + k\cdot s_{bi}$ where $\bar{x}_{bi}$ is the mean of the blank measures, $s_{bi}$ is the standard deviation of the blank measures, and $k$ is a numerical factor chosen according to the confidence level desired.
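The IUPAC formula above is straightforward to apply. Here is a minimal sketch using hypothetical blank measurements (the values and the choice of k are illustrative only):

```python
# Hypothetical sketch of the IUPAC formula: x_L = mean(blanks) + k * s(blanks).
# The blank readings below are made up for illustration.
import statistics

blank_measures = [0.12, 0.15, 0.09, 0.11, 0.14, 0.10, 0.13, 0.12]  # raw blank signal
k = 3  # a commonly used factor for "reasonable certainty"

x_bar_bi = statistics.mean(blank_measures)   # mean of the blank measures
s_bi = statistics.stdev(blank_measures)      # sample standard deviation of the blanks

x_L = x_bar_bi + k * s_bi
print(f"mean blank = {x_bar_bi:.4f}, s = {s_bi:.4f}, x_L = {x_L:.4f}")
```

Note that x_L is expressed in units of the raw signal; it still has to be converted to a concentration through the calibration curve.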

We monitor air quality, so we essentially collect concentration time series (CO, NO, etc.). We also calibrate our analysers (multi-point calibration) with standard gas. And of course, we have to deal with data that reach a "limit of detection": when concentrations are very low (I am not saying null), analysers sometimes return small, null, or negative values.

In that work-group, some co-workers feel comfortable applying a negative limit of detection to concentrations, and they argue, without any proof, that it is statistically possible even if it has no physical consistency.

My position is the following: I will never use time series when assessing a LOD. We have no control over concentrations and interferences, and beyond that, the random variables we are measuring are neither stationary nor IID. Instead, I use calibration data: I measure lower and lower concentrations until the variability of the measurement buries the signal. I do not try to model the uncertainty; I assess it experimentally. I always work on the raw signal and convert it to concentration using the calibration curve. This lets us detect where linearity breaks down and whether variability is constant over the dynamic range.
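The conversion step described above can be sketched as follows. The standard concentrations and raw responses are hypothetical, and the fit is an ordinary least-squares line; a real calibration would also check linearity and whether the residual variability is constant over the range:

```python
# Minimal sketch (illustrative numbers): fit a linear calibration curve
# from multi-point standards, then invert it to convert raw signal to
# concentration.
import statistics

conc = [0.0, 10.0, 20.0, 40.0, 80.0]      # standard gas concentrations (e.g. ppb)
signal = [0.05, 1.02, 2.01, 3.98, 8.05]   # raw analyser response (illustrative)

mean_c = statistics.mean(conc)
mean_s = statistics.mean(signal)

# ordinary least-squares slope and intercept
slope = sum((c - mean_c) * (s - mean_s) for c, s in zip(conc, signal)) \
        / sum((c - mean_c) ** 2 for c in conc)
intercept = mean_s - slope * mean_c

def to_concentration(raw):
    """Invert the calibration curve: raw signal -> concentration."""
    return (raw - intercept) / slope

print(f"slope = {slope:.6f}, intercept = {intercept:.6f}")
print(f"raw signal 0.50 -> {to_concentration(0.50):.2f} ppb")
```

Because the inversion is affine, a raw signal below the fitted intercept maps to a negative concentration, which is exactly where the truncation question arises.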

Anyone who cares about data and the way they were acquired knows that a LOD is a means, not an end (its assumptions may not hold), and that an experimental LOD strongly depends on the analytical method used.

So, here are my questions:

  • When applying statistics to real-world problems, the statistics must agree with the physics and not the opposite. If there is an inconsistency, you must check your statistical hypotheses and assumptions and revise your model. Is that a correct way of thinking?
  • How can we accept a negative LOD for a concentration? Would a physicist accept a negative LOD for a temperature measured in kelvin?
  • If peer-reviewed scientific literature exists about negative LODs for concentrations, could you provide references?

I would appreciate your help.

jlandercy
  • There's nothing fundamentally wrong with calculating the confidence interval for some positive-definite random variable and discovering that the confidence interval dips into the negative. Usually this is just an artifact of applying a normal distribution to a not-quite-normal population. Dunno if that helps in your case. – Carl Witthoft Feb 10 '14 at 12:45
  • This is not about confidence intervals, such as whether a null concentration is possible at a given confidence threshold. It is about how we should truncate a data series, knowing that negative values are nonsense for a concentration and that the analyser cannot detect low concentrations outside its dynamic range, because the signal is buried in noise and the calibration curve is likely to drift in this region. Thanks anyway for answering. – jlandercy Feb 10 '14 at 14:59
  • I'm not convinced you're on the right track. Here's an example: you can't detect negative numbers of photons, but electronics noise following a solid-state sensor will lead to voltages which correspond to "negative" inputs. Rejecting those samples will bias your data in undesirable ways. – Carl Witthoft Feb 10 '14 at 15:48
  • Instruments are real, they are not perfect, and they are designed to work over a defined dynamic range (which, in my field, rarely includes 0): in that zone the signal is not totally dominated by noise, and the working hypotheses hold. What quality will your data have (e.g. your photon count) if your signal is so close to a "limit" that it is buried in noise ("negative photons" have a sufficient magnitude to totally destroy your signal)? Of course truncating biases the data, but using meaningless data biases them too. – jlandercy Feb 10 '14 at 16:27
  • Feldman and Cousins suggest a method to manage confidence intervals near a hard boundary: http://arxiv.org/abs/physics/9711021. This method is pretty popular in experimental particle physics. – dmckee --- ex-moderator kitten Feb 10 '14 at 16:31
  • @dmckee Thank you for the reference; I will print it and update my sources. – jlandercy Feb 10 '14 at 16:36
  • I'm not sure if you have tried this yet, but you might want to see what the people at cross-validated.se say. – Brian Moths Feb 10 '14 at 19:06
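The effect Carl Witthoft describes in the comments above can be simulated in a few lines: a strictly positive quantity measured with additive Gaussian noise yields negative readings, and a normal-approximation interval around a small mean dips below zero. All numbers here are illustrative:

```python
# Illustration: a small but strictly positive true concentration, measured
# with additive Gaussian noise, produces negative readings, and a crude
# normal-approximation interval extends below zero.
import random
import statistics

random.seed(0)
true_conc = 0.5                      # strictly positive, illustrative
noise_sd = 1.0                       # instrument noise, illustrative
readings = [true_conc + random.gauss(0, noise_sd) for _ in range(1000)]

negatives = sum(1 for r in readings if r < 0)
m = statistics.mean(readings)
s = statistics.stdev(readings)
lo, hi = m - 2 * s, m + 2 * s        # crude ~95% interval

print(f"{negatives} of {len(readings)} readings are negative")
print(f"approx 95% interval: [{lo:.2f}, {hi:.2f}]")  # lower bound is negative
```

This is the statistical sense in which "negative" values arise; whether to report, truncate, or treat them with a boundary-aware method such as Feldman-Cousins is exactly the point under discussion.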

0 Answers