
I have a magnetometer, a LIS3MDL to be precise, and I am taking readings from it every second. As expected, there is variation in the readings. For example, if I take five readings I get:

1164, 1190, 1270, 1186, 1260

Now maybe five samples is enough to average over, maybe it is not. How do I know this? Is there a way to use the FFT of a large number of samples to work out the optimum number of samples to take?

arb01234

2 Answers


When you take many samples and calculate an FFT, you will likely find that the noise is flat in the higher portion of the spectrum, while toward lower frequencies it eventually rises, typically as 1/√f in amplitude (i.e., 1/f in power, flicker noise) or even faster.
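For example, here is a minimal Python sketch of how to look for that corner, assuming the raw counts sit in a plain text file (the name `mag_readings.txt` is just a placeholder) and using SciPy's Welch estimator; on a log-log plot the flat section is the white-noise floor and the rise toward low frequencies is the flicker/drift region:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs = 1.0                                   # sample rate: one reading per second
readings = np.loadtxt("mag_readings.txt")  # placeholder file of raw counts

# Welch's method averages periodograms of overlapping segments,
# giving a smoother PSD estimate than a single long FFT.
f, psd = signal.welch(readings, fs=fs, nperseg=1024)

plt.loglog(f[1:], psd[1:])                 # skip the f = 0 bin
plt.xlabel("Frequency (Hz)")
plt.ylabel("PSD (counts^2/Hz)")
plt.show()
```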

The important bit is that averaging for longer than the inverse of the corner frequency does not improve the accuracy of the reading any further.

Therefore, a sweet spot for the number of samples to average is to accumulate samples for a time equal to the inverse of the corner frequency, which provides maximum precision.

If you need readings faster, you can average fewer samples at the expense of slightly higher noise.
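To put numbers on it: with the 1 Hz sample rate from the question and an assumed corner frequency (a value you would read off the PSD plot; 0.02 Hz below is purely illustrative), the sample count follows directly:

```python
fs = 1.0                          # sample rate from the question, Hz
f_c = 0.02                        # assumed corner frequency read off the PSD, Hz
t_avg = 1.0 / f_c                 # averaging time = inverse corner frequency = 50 s
n_opt = int(round(fs * t_avg))    # -> about 50 samples per averaged reading
print(f"average about {n_opt} samples per reading")
```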

tobalt
  • "averaging longer than the inverse corner frequency does not improve the accuracy" - can you point me to a reference that demonstrates this? I believe you; but I would like to understand why. – Ed Graham Dec 13 '23 at 19:12
  • @EdGraham This is pretty relevant, especially the link in its first sentence. For some context, you can also check this question, including the reference and the answer. – tobalt Dec 13 '23 at 20:16

The optimum number of samples to average over, for any series of data that may become non-stationary over longer time intervals, is readily determined using the "Allan Deviation" (ADEV) and "Allan Variance" (AVAR). ADEV and AVAR are typically used in the evaluation of oscillator stability, since that is an inherently non-stationary process, but they can also be used to evaluate the optimum FFT duration, as I have described in further detail here, specific to using this with an FFT. A plot of the Allan deviation shows the standard deviation of the difference between two adjacent blocks of samples, each averaged over $\tau$ seconds, and it will be at a minimum at the value of $\tau$ corresponding to the optimum averaging interval. Over this duration the waveform statistics are sufficiently stationary that averaging minimizes the variance of the estimate of the mean.
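For illustration, here is a minimal overlapping-ADEV sketch in plain NumPy (the data file name is a placeholder, and the `allantools` Python package provides a tested implementation if you prefer not to roll your own). The averaging time at which the ADEV curve bottoms out is the optimum averaging interval:

```python
import numpy as np

def adev(x, fs, taus):
    """Overlapping Allan deviation of samples x for averaging times taus (s)."""
    out = []
    for tau in taus:
        m = int(tau * fs)                 # samples per averaging block
        if m < 1 or 2 * m >= len(x):      # need at least two full blocks
            out.append(np.nan)
            continue
        csum = np.cumsum(np.insert(x, 0, 0.0))
        means = (csum[m:] - csum[:-m]) / m    # all overlapping block averages
        d = means[m:] - means[:-m]            # differences of blocks m apart
        out.append(np.sqrt(0.5 * np.mean(d ** 2)))
    return np.array(out)

fs = 1.0                                  # one reading per second
x = np.loadtxt("mag_readings.txt")        # placeholder file of raw counts
taus = np.logspace(0, 3, 30)              # candidate averaging times, 1 s .. 1000 s
sigma = adev(x, fs, taus)
tau_opt = taus[np.nanargmin(sigma)]       # ADEV minimum -> optimum averaging time
print(f"optimum averaging time ~ {tau_opt:.0f} s ({int(tau_opt * fs)} samples)")
```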

I go over this in more detail at the following posts:

Is Allan variance still relevant?

How to interpret Allan Deviation plot for gyroscope?

Allan Variance vs Autocorrelation - Advantages

How to interpret ADEV plot?

Dan Boschen