Accuracy, Error, Precision, and Uncertainty
Introduction
All measurements of physical quantities are subject to uncertainty. Variability in the results of repeated measurements arises because the variables that can affect a measurement result are impossible to hold completely constant. Even if the circumstances could be precisely controlled, the result would still have some error associatedated with it: the instrument's scale is manufactured to a certain level of quality, it is often difficult to read the scale perfectly, fractional estimates between scale markings may have to be made, and so on. Steps can be taken to limit the amount of uncertainty, but it is always present.
In order to interpret data correctly and draw valid conclusions, the uncertainty must be indicated and dealt with properly. For the result of a measurement to have clear meaning, the value cannot consist of the measured value alone; an indication of how precise and accurate the result is must also be included. Thus, the result of any physical measurement has two essential components: (1) a numerical value (in a specified system of units) giving the best estimate possible of the quantity measured, and (2) the degree of uncertainty associated with this estimated value.
Uncertainty is a parameter characterizing the range of values within which the value of the measurand can be said to lie with a specified level of confidence. For example, a measurement of the width of a table might yield a result such as 95.3 +/- 0.1 cm. This result communicates that the person making the measurement believes the value to be closest to 95.3 cm, but that it could plausibly be as low as 95.2 cm or as high as 95.4 cm. The uncertainty is a quantitative indication of the quality of the result. It answers the question, "How well does the result represent the value of the quantity being measured?"
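To make the table-width example concrete, the short Python sketch below simply unpacks a result quoted as value +/- uncertainty into the interval it implies and the corresponding relative uncertainty. The numbers are taken from the example above; the formatting choices are illustrative, not prescribed by any standard.

```python
# Unpack a reported measurement of the form value +/- uncertainty.
value = 95.3        # best estimate of the table width, in cm
uncertainty = 0.1   # quoted uncertainty, in cm

lower = value - uncertainty     # smallest value believed plausible
upper = value + uncertainty     # largest value believed plausible
relative = uncertainty / value  # fractional (relative) uncertainty

print(f"width = {value:.1f} +/- {uncertainty:.1f} cm")
print(f"believed to lie between {lower:.1f} cm and {upper:.1f} cm")
print(f"relative uncertainty = {relative:.1%}")
```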
The full formal process of determining the uncertainty of a measurement is extensive: it involves identifying all of the major process and environmental variables and evaluating their effect on the measurement. This process is beyond the scope of this material but is detailed in the ISO Guide to the Expression of Uncertainty in Measurement (GUM) and the corresponding American National Standard ANSI/NCSL Z540-2. However, there are measures for estimating uncertainty, such as the standard deviation, that are based entirely on the analysis of experimental data, provided all of the major sources of variability were sampled in the collection of the data set.
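For a data-based estimate of the kind just mentioned, the sketch below computes the mean, the sample standard deviation, and the standard deviation of the mean from a set of repeated readings. The readings themselves are hypothetical, and using the standard deviation of the mean as the quoted uncertainty is one common convention, not the full GUM procedure.

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity (cm).
readings = [95.2, 95.4, 95.3, 95.3, 95.5, 95.2]

n = len(readings)
mean = statistics.mean(readings)       # best estimate of the value
stdev = statistics.stdev(readings)     # sample standard deviation (scatter of single readings)
stdev_of_mean = stdev / math.sqrt(n)   # standard uncertainty of the average

print(f"best estimate     = {mean:.2f} cm")
print(f"std. deviation    = {stdev:.2f} cm (scatter of individual readings)")
print(f"std. dev. of mean = {stdev_of_mean:.2f} cm (uncertainty of the average)")
```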
The first step in communicating the results of a measurement or group of measurements is to understand the terminology related to measurement quality. This terminology can be confusing, partly because some of the terms have subtle differences in meaning and partly because the terms are often used incorrectly and inconsistently. For example, the term "accuracy" is often used when "trueness" should be used. Using the proper terminology is key to ensuring that results are properly communicated.
True Value
Since the true value cannot be absolutely determined, in practice an accepted reference value is used. The accepted reference value is usually established by repeatedly measuring some NIST or ISO traceable reference standard. This value is not the reference value found published in a reference book. Such published values are not "right" answers; they are measurements that have errors associated with them as well and may not be entirely representative of the specific sample being measured.
Accuracy and Error
Accuracy is the closeness of agreement between a measured value and the true value. Error is the difference between a measurement and the true value of the measurand (the quantity being measured). Error does not include mistakes. Values that result from reading the wrong value or making some other mistake should be explained and excluded from the data set. Error is what causes values to differ when a measurement is repeated and none of the results can be preferred over the others. Although it is not possible to completely eliminate error in a measurement, it can be controlled and characterized. Often, more effort goes into determining the error or uncertainty in a measurement than into performing the measurement itself.
The total error is usually a combination of systematic error and random error. Results are often quoted with two errors: the first is usually the random error, and the second is the systematic error. If only one error is quoted, it is the combined error.
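The sketch below illustrates quoting the two components separately and then forming a single combined figure. The values are hypothetical, and adding independent components in quadrature is one common convention; the text above does not prescribe a specific combination rule.

```python
import math

measured = 95.3        # reported value (cm), hypothetical
random_err = 0.1       # random (statistical) component (cm)
systematic_err = 0.2   # systematic component, e.g. from calibration limits (cm)

# Quoting the two components separately:
print(f"width = {measured} +/- {random_err} (random) +/- {systematic_err} (systematic) cm")

# One common way to form a single combined error is to add
# independent components in quadrature (root-sum-of-squares).
combined = math.sqrt(random_err**2 + systematic_err**2)
print(f"width = {measured} +/- {combined:.2f} cm (combined)")
```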
Systematic error tends to shift all measurements in a systematic way, so that over the course of a number of measurements the mean value is constantly displaced or varies in a predictable way. The causes may be known or unknown but should always be corrected for when present. For instance, no instrument can ever be calibrated perfectly, so when a group of measurements systematically differs from the value of a standard reference specimen, an adjustment in the values should be made. Systematic error can be corrected for only when the "true value" (such as the value assigned to a calibration or reference specimen) is known.
Random error is the component of the total error that, in the course of a number of measurements, varies in an unpredictable way. It is not possible to correct for random error. Random errors can occur for a variety of reasons, such as electrical noise in the instrument, small fluctuations in environmental conditions, and the operator's estimation when reading between scale markings.
Trueness and Bias
Trueness is the closeness of agreement between the average value obtained from a large series of test results and an accepted true value. The terminology is very similar to that used for accuracy, but trueness applies to the average value of a large number of measurements. Bias is the difference between the average value of the large series of measurements and the accepted true value. Bias is equivalent to the total systematic error in the measurement, and a correction to negate the systematic error can be made by adjusting for the bias.
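A bias correction of this kind takes only a few lines. In the sketch below, the reference value and the readings are hypothetical; the bias is estimated as the difference between the average of repeated measurements of the reference standard and its accepted value, and that bias is then subtracted from later results.

```python
import statistics

# Accepted reference value of a calibration specimen (hypothetical), in cm.
reference_value = 50.00

# Repeated measurements of that same specimen with the instrument under test.
calibration_readings = [50.12, 50.09, 50.11, 50.10, 50.13]

# Bias = average measured value - accepted reference value.
bias = statistics.mean(calibration_readings) - reference_value
print(f"estimated bias = {bias:+.2f} cm")

# Correct a later measurement of an unknown specimen by removing the bias.
raw_measurement = 73.46
corrected = raw_measurement - bias
print(f"raw = {raw_measurement:.2f} cm, bias-corrected = {corrected:.2f} cm")
```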
Precision, Repeatability, and Reproducibility
Precision is the closeness of agreement between independent measurements of a quantity under the same conditions. It is a measure of how well a measurement can be made without reference to a theoretical or true value. The number of divisions on the scale of the measuring device generally affects the consistency of repeated measurements and, therefore, the precision. Since precision is not based on a true value, there is no bias or systematic error in the value; it depends only on the distribution of random errors. The precision of a measurement is usually indicated by the uncertainty or the fractional (relative) uncertainty of a value.
Repeatability is simply the precision determined under conditions where the same methods and equipment are used by the same operator to make measurements on identical specimens. Reproducibility is simply the precision determined under conditions where the same methods but different equipment are used by different operators to make measurements on identical specimens.
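The distinction can be seen by grouping repeated results according to the conditions under which they were obtained. In the hypothetical sketch below, the scatter within a single operator/instrument combination reflects repeatability, while the scatter across different operators and instruments reflects reproducibility; pooling the results this way is an illustration, not a formal interlaboratory analysis.

```python
import statistics

# Hypothetical measurements of identical specimens (cm).
# Repeatability conditions: same operator, same equipment, same method.
same_operator = [12.31, 12.33, 12.32, 12.30, 12.32]

# Reproducibility conditions: same method, different operators and equipment.
lab_a = [12.31, 12.33, 12.32]
lab_b = [12.36, 12.38, 12.37]
lab_c = [12.28, 12.30, 12.29]

repeatability = statistics.stdev(same_operator)
reproducibility = statistics.stdev(lab_a + lab_b + lab_c)

print(f"repeatability (std. dev., one operator)       = {repeatability:.3f} cm")
print(f"reproducibility (std. dev., across operators) = {reproducibility:.3f} cm")
```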
Uncertainty
Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, is usually the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines. However, in many measurement situations the systematic error is not addressed and only random error is included in the uncertainty estimate. When only random error is included, the uncertainty estimate is a reflection of the precision of the measurement.
Summary
Error is the difference between the true value of the measurand and the measured value. The total error is a combination of both systematic error and random error. Trueness is the closeness of agreement between the average value obtained from a large series of test results and the accepted true value. Trueness is largely affected by systematic error. Precision is the closeness of agreement between independent measurements. Precision is largely affected by random error. Accuracy is an expression of the lack of error. Uncertainty characterizes the range of values within which the true value is asserted to lie with some level of confidence.