Most often, when a numerical problem asks us to state the accuracy of the final result, we write the final result together with its error. So I want to know: if a measurement has a 10% error, can we convey the same information by saying that the measurement has 90% accuracy?
-
Then what do you say when your error is 200%? When you say +/-5% error, that is directly referenced to the value you are giving. When you say 95% accuracy, that 95% isn't actually referenced to the value you are giving. It's 95% of a number not given. It doesn't say what direction it is in either, unless you give two numbers, 95% and 105%. But then 105% accurate doesn't make sense, and neither does +/-95% accurate. – DKNguyen Feb 18 '22 at 05:11
-
I don't think you're using error or accuracy correctly here, as both compare a measurement to the true, correct value. You may have uncertainty and some range of precision around your final result, but it's entirely possible that your reported value is exactly correct with no error at all, and is 100% accurate. – Nuclear Hoagie Feb 18 '22 at 14:47
-
@NuclearHoagie The more common scenario, unfortunately, is a series of precise (a.k.a. repeatable) measurements which turn out to be inaccurate because of some "systematic error." For example, imagine a tailor who has used the same cloth measuring tape every day for many years, so that the cloth has stretched and the tape is longer than it used to be. Not a problem if the same measuring tape is used for all measurements. But if the tailor hires an apprentice who uses new equipment, a 30-inch waist from the apprentice might be more snug than a 30-inch waist from the master. – rob Feb 18 '22 at 16:09
-
@rob Agree, good example of precision vs. accuracy. But I'll note that the tailor alone can only estimate the precision and uncertainty in his measurements, he'll never realize he's inaccurate without the apprentice. The tailor alone can never measure his accuracy or error. Calculating a numerical result and finding the error means you already knew the answer to begin with, making the calculation an academic exercise in the first place. Systematic error cannot be computed from the measurements alone, it requires an oracle, in which case you don't need to measure at all. – Nuclear Hoagie Feb 18 '22 at 20:37
-
@NuclearHoagie That depends on the ingenuity of the tailor. For instance, he might fold the tape measure back on itself, and notice that the frequently-used marks at the low end are further apart than the seldom-used marks at the far end, and then write a chapter in his dissertation about detector linearity. (I just tried this with the cloth tape measure in my sewing kit, and discovered that mine has been cut incorrectly: the first inch is short.) – rob Feb 18 '22 at 21:22
-
You may be interested in Difference between forecasting accuracy and forecasting error? and What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? Both are about percentage errors in the context of forecasting, but there are of course parallels to the context of imprecise measurement. – Stephan Kolassa Feb 19 '22 at 12:00
2 Answers
Prefer “uncertainty” over “error.” When you say “error” you imply that Someone Out There has determined the Right Answer. This isn’t how it works outside of an introductory lab class.
When you say “I’ve measured $x$ with 5% uncertainty,” you are saying something very specific: your result $x=100$ means that another high-quality measurement of $x$ would probably also give a result in the interval $95 < x < 105$.
If you start saying things like “95% accurate,” you are going to confuse people who are listening for a confidence interval, which is another way to analyze uncertainties. Physicists tend to like “one-sigma” confidence intervals, which in your case would mean, roughly,
a repeat of my experiment would have a 68% chance of getting (again) a value in the interval $95 < x < 105$
In other fields, especially the social sciences, people like to report “two-sigma” confidence intervals, which would mean something like
a repeat of my experiment would have a 95% chance of getting (again) a value in the interval $90 < x < 110$
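As a rough check on those coverage numbers, the fraction of Gaussian draws that land within $k$ sigma of the mean follows from the error function. A minimal sketch in Python (the specific intervals above assume the made-up result $x = 100$ with 5% uncertainty):

```python
import math

def gaussian_coverage(k):
    """Probability that a Gaussian draw lies within k sigma of the mean."""
    return math.erf(k / math.sqrt(2))

print(f"1-sigma coverage: {gaussian_coverage(1):.1%}")  # 68.3%
print(f"2-sigma coverage: {gaussian_coverage(2):.1%}")  # 95.4%
```

These are where the "68%" and "95%" figures come from; they are properties of the Gaussian distribution, not of any particular experiment.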
Beware that this description of the confidence interval is specifically listed in the linked encyclopedia article as a misunderstanding (mea culpa). For Gaussian-distributed measurements which all have the same uncertainty, the probability that "your" measurement lies within "my" one-sigma confidence interval is just slightly better than fifty-fifty. The definition of the confidence interval is based on the "true value" of the measured parameter. However, whether that "true value" exists is both a philosophical and a practical question. The world is different from our models of it.
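That slightly-better-than-fifty-fifty figure is easy to check by simulation. A sketch with made-up numbers ($x = 100$, one-sigma uncertainty of 5), assuming both measurements are Gaussian with the same width:

```python
import random

random.seed(0)
mu, sigma = 100.0, 5.0   # hypothetical true value and one-sigma uncertainty
trials = 200_000
hits = 0
for _ in range(trials):
    mine = random.gauss(mu, sigma)    # "my" measurement
    yours = random.gauss(mu, sigma)   # "your" independent measurement
    if abs(yours - mine) < sigma:     # yours lands in my one-sigma interval
        hits += 1
print(f"coverage of one measurement by another's interval: {hits / trials:.1%}")
# about 52%, not 68%
```

The gap arises because the interval is centered on a noisy measurement, not on the true value: the difference of two independent Gaussian measurements is wider than either one by a factor of $\sqrt{2}$.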
As a commenter says: sometimes you do a measurement and end up with 200% uncertainty, in which case your experiment has not (yet) determined whether your quantity is positive or negative.
So, to your title question: no, don’t do that. If your measurement has 5% uncertainty, communicate this by saying “my measurement has 5% uncertainty.”
-
I would just like to add to this excellent answer that accuracy has a specific meaning in metrology (the science of measurement). Accuracy says something about the deviation from the "true" value, whereas precision is a measure of experimental reproducibility and control. – Paul Feb 18 '22 at 08:42
-
Nice answer! Just wanted to say, strictly speaking, a 1-sigma confidence interval doesn't mean there is a 68% chance that future measurements will be in the range $95<x<105$ if you measured $x=100$ on the first trial. It means that if you run the experiment lots of times, on average 68% of the confidence intervals will contain the "true" value. There's no guarantee that any one confidence interval (such as the first one) will be close to the right value. – Andrew Feb 18 '22 at 11:44
-
Ad 1: In some languages, the words "mistake", "error", and in this context also "uncertainty", are expressed by the same word. That is why we also meet Stack Overflow users sometimes using the word "mistake" for an error message from the program or compiler. – Vladimir F Героям слава Feb 18 '22 at 13:37
-
I have instruments that can sometimes tell you when they're not working. I ended up parsing OP's question as 10% of the measurements are faulted measurements. – Joshua Feb 18 '22 at 22:11
-
I'm going to have to second @Andrew's comment -- this is a very incorrect interpretation of a CI, even given the caveat of 'roughly'. – eps Feb 18 '22 at 23:32
-
to see how absurd such an interpretation could be, imagine you are interested in the US average height and randomly sample 100 people and form a 99% CI. In the (extremely unlikely but still technically possible) event all of your sample contains NBA players, the chance another sample is contained in that CI would be functionally 0%. – eps Feb 18 '22 at 23:46
-
Statistician here. I just joined specifically to upvote @Andrew's comment, which is spot-on. Also, I have to admit I have serious doubts about your point 2., which does not relate to any statistical concept of uncertainty I am familiar with. Then again, this may be a completely correct statement in the context of physics. – Stephan Kolassa Feb 19 '22 at 11:58
-
@StephanKolassa Point 2 is also not really right in analyzing physics data -- actually it is a warning sign in experimental data if all the error bars cover the expected value and if they are too consistent with each other. Basically it means your error bars are too large. – Andrew Feb 19 '22 at 12:36
-
At the level of the question being asked, my goal with this answer was to inform that the language the asker proposes is already occupied by another concept. I was being intentionally non-rigorous, but I crossed the line into "wrong," and I appreciate the comments pointing this out. Rather than loading the answer with enough caveats to make each statement correct, I've added (v3) a warning to read more. I'm still okay with the wording of Point 2, even though the "probably" there is 52% (as edited in); the probability that "your" and "my" error bars overlap is about 84%. – rob Feb 19 '22 at 17:16
You need to define what "error" means; typically it is an estimate of the standard deviation based on a series of measurements. If you take a series of measurements, you can estimate the standard deviation of the population. You can also estimate the mean and the standard deviation of the mean. When you report your result you should report $\mu \pm \sigma_{\mu}$, where $\mu$ is the estimate of the mean from your measurements and $\sigma_{\mu}$ is your estimate of the standard deviation of the mean (not the standard deviation of the population, which you can also estimate). See my answer to "Uncertainty in repetitive measurements" on this exchange for details. If you told me "my result is x with 10% error", without more information I would assume that, based on your measurements, x is the mean and 0.1x is the standard deviation of the mean.
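A minimal sketch of the distinction, using made-up repeated readings: the standard deviation of the mean is the sample standard deviation divided by $\sqrt{n}$.

```python
import math

def mean_and_sigma_mu(data):
    """Sample mean and standard deviation of the mean, s / sqrt(n)."""
    n = len(data)
    mu = sum(data) / n
    # sample standard deviation of the population (n - 1 in the denominator)
    s = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    return mu, s / math.sqrt(n)

readings = [9.8, 10.1, 9.9, 10.2, 10.0]  # made-up repeated measurements
mu, sigma_mu = mean_and_sigma_mu(readings)
print(f"result: {mu:.2f} +/- {sigma_mu:.2f}")  # 10.00 +/- 0.07
```

Note that taking more measurements shrinks $\sigma_{\mu}$ (roughly as $1/\sqrt{n}$) even though the population standard deviation stays the same.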
You can also establish a confidence interval based on the measurements, and some call that the accuracy. See discussions of confidence intervals online or in a statistics text such as Probability and Statistics for the Engineering, Computing, and Physical Sciences by Dougherty.
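For a large sample, a confidence interval can be sketched with the standard library using the normal approximation (for a small sample like the made-up one below, a Student-t multiplier would strictly be more appropriate and somewhat wider):

```python
import math
from statistics import NormalDist, mean, stdev

readings = [9.8, 10.1, 9.9, 10.2, 10.0]  # made-up repeated measurements
n = len(readings)
mu = mean(readings)
sigma_mu = stdev(readings) / math.sqrt(n)     # standard deviation of the mean
z = NormalDist().inv_cdf(0.975)               # ~1.96 for a 95% interval
lo, hi = mu - z * sigma_mu, mu + z * sigma_mu
print(f"95% CI (normal approx.): ({lo:.2f}, {hi:.2f})")
```

The interval half-width is just the critical value times $\sigma_{\mu}$, which is why "95% confidence" and "two-sigma" are used almost interchangeably in casual discussion.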