
From what I have read, when we measure the same quantity X repeatedly, N times, and the measurements follow a normal distribution, the uncertainty of the mean is $\sigma_{mean} = \frac{\sigma}{\sqrt{N}}$, where $\sigma$ is the standard deviation of the measurements.
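As a quick sanity check on that formula, here is a small simulation sketch (numpy; the true value and noise level below are arbitrary numbers picked only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0   # arbitrary "true" value of the quantity X
sigma = 0.5         # standard deviation of a single measurement
N = 100             # measurements per experiment
trials = 20000      # number of repeated experiments

# Each row is one experiment: N measurements of the same quantity
samples = rng.normal(true_value, sigma, size=(trials, N))
means = samples.mean(axis=1)

print("empirical spread of the means:", means.std(ddof=1))
print("sigma / sqrt(N):              ", sigma / np.sqrt(N))
```

The two printed numbers should agree closely, which is just the $\sigma_{mean} = \sigma/\sqrt{N}$ statement above.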

Now let's assume that we don't measure the same thing all the time, but instead have a set of measurements, without autocorrelation, of a parameter P that changes with time over a time period t. The question is how to calculate the uncertainty propagated to the mean value over the period t from the uncertainties of the individual measurements.

  1. If each measurement has the same uncertainty u (random error) and the dataset follows a normal distribution, would it be correct to use $u_{mean} = \frac{u}{\sqrt{N}}$?
  2. If each measurement has a different uncertainty $u_i$, would it be correct to use the same kind of formula, like this: $u_{mean} = \frac{\frac{1}{N}\sqrt{\sum_i u_i^2}}{\sqrt{N}}$? (A small numerical sketch of this case follows the list.)
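To make case 2 concrete, here is a small numpy sketch for checking the numbers; the drift of P and the individual $u_i$ below are made up purely for illustration, so this only shows what each expression evaluates to for one example dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50
t = np.linspace(0.0, 1.0, N)
P_true = 3.0 + 2.0 * t              # made-up parameter P(t) drifting over the period
u = rng.uniform(0.1, 0.5, size=N)   # made-up per-measurement uncertainties u_i

trials = 20000
# Each trial keeps the same true P(t) and draws fresh random errors with the given u_i
measurements = P_true + rng.normal(0.0, 1.0, size=(trials, N)) * u
means = measurements.mean(axis=1)

print("empirical spread of the mean:", means.std(ddof=1))
print("(1/N) * sqrt(sum u_i^2):     ", np.sqrt(np.sum(u**2)) / N)
print("formula from question 2:     ", np.sqrt(np.sum(u**2)) / N / np.sqrt(N))
```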
  • Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. – ZaellixA Mar 31 '22 at 15:04

1 Answer


If I understand your question, you have different measurements for a random variable and wish to combine these measurements for an overall estimate of the variable and its uncertainty. If this is what you want, see my answer to Uncertainty in repetitive measurements on this exchange.
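For reference, one common way to combine measurements with different uncertainties is the inverse-variance weighted mean; this is a generic sketch, not necessarily the exact treatment in the linked answer:

$$\bar{x} = \frac{\sum_i x_i / u_i^2}{\sum_i 1/u_i^2}, \qquad u_{\bar{x}} = \frac{1}{\sqrt{\sum_i 1/u_i^2}}.$$

For equal uncertainties $u_i = u$ this reduces to the ordinary mean with $u_{\bar{x}} = u/\sqrt{N}$.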

If you are after something different, please clarify and I will respond.

John Darby