## 3. PROPAGATION OF ERRORS
Once error estimates have been assigned to each piece of data, we must then find out how these errors contribute to the error in the result. The error in a quantity may be thought of as a variation or "change" in the value of that quantity. Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say that "errors in the data propagate through the calculations to produce error in the result."
We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers that have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors. Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:
A + ΔA  and  B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB." The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written:

(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

R + ΔR = (A + B) + (ΔA + ΔB)
The error in R is: ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities; the worst case occurs when the errors have opposite signs, and again the maximum error in the result is the sum of the errors in the data.
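As a quick numerical illustration of the sum rule, here is a minimal sketch; the values are invented, not data from the text:

```python
# Maximum-error rule for a sum: the absolute errors add.
# The numbers here are illustrative, not from the text.
A, dA = 10.0, 0.2
B, dB = 4.0, 0.1

R = A + B        # 14.0
dR = dA + dB     # maximum error in the sum

print(f"R = {R} ± {dR:.1f}")
```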
Now consider multiplication: R = AB. With errors explicitly included: R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB)
or: ΔR = (ΔA)B + A(ΔB) + (ΔA)(ΔB). This doesn't look like a simple rule. However, when we express the errors in fractional (relative) form, by dividing through by R = AB, this does give us a very simple rule:

ΔR/R = ΔA/A + ΔB/B + (ΔA/A)(ΔB/B) ≈ ΔA/A + ΔB/B

The last term is the product of two small fractional errors and may be neglected. So the fractional error in a product is the sum of the fractional errors in its factors. A similar procedure is used for the quotient of two quantities, R = A/B.
The approximation made in the next to last step was to neglect ΔB in the denominator, which is valid when the relative errors are small. So the result is:

ΔR/R ≈ ΔA/A − ΔB/B

The fractional error in a quotient is the difference of the fractional errors, when their signs are retained.
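The size of the neglected cross term can be seen numerically. A minimal sketch with invented values:

```python
# Checking the fractional-error rule for a product against the exact change.
# The values are illustrative, not from the text.
A, dA = 50.0, 0.5
B, dB = 20.0, 0.4

fA = dA / A   # 0.01
fB = dB / B   # 0.02

exact = (A + dA) * (B + dB) / (A * B) - 1.0   # exact fractional change in AB
approx = fA + fB                              # product rule

print(exact, approx)   # they differ only by the neglected cross term fA*fB
```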
A consequence of the product rule is the power rule: when a quantity Q is raised to a power n, the product rule is applied n times, so the fractional error in the result is n times the fractional error in Q: Δ(Qⁿ)/Qⁿ = n(ΔQ/Q).
Indeterminate errors have unknown sign. If we assume that the measurements have a symmetric distribution about their mean, then the errors are unbiased with respect to sign. Also, if indeterminate errors in different measurements are independent of each other, their signs have a tendency to offset each other when the quantities are combined through mathematical operations. When we are only concerned with limits of error (maximum error), however, we must assume the least favorable combination of signs. In the operation of division, A/B, the worst case deviation of the result occurs when the errors in the numerator and denominator have opposite sign, either +ΔA and −ΔB or −ΔA and +ΔB. In either case, the maximum size of the relative error will be (ΔA/A + ΔB/B). The results for addition and multiplication are the same as before. In summary, the maximum absolute error in a sum or difference is the sum of the absolute errors, and the maximum fractional error in a product or quotient is the sum of the fractional errors.
It can be shown (but not here) that these rules also apply sufficiently well to errors expressed as average deviations. One drawback is that the error estimates made this way are still overconservative: they do not fully account for the tendency of independent error terms to offset each other. This, however, is a minor correction, of little importance in our work in this course. Error propagation rules may be derived for other mathematical operations as needed. For example, the rules for errors in trigonometric functions may be derived by use of the trigonometric identities, using the approximations sin θ ≈ θ and cos θ ≈ 1, valid when θ is small enough. Rules for exponentials may also be derived. When mathematical operations are combined, the rules may be successively applied to
each operation. In this way an equation may be algebraically derived that expresses the
error in the result in terms of errors in the data. Such an equation can always be cast into the standard form:

ΔR = (c1)ΔX + (c2)ΔY + (c3)ΔZ + ...

which may always be algebraically rearranged to the fractional form:

ΔR/R = {c1 X/R}(ΔX/X) + {c2 Y/R}(ΔY/Y) + {c3 Z/R}(ΔZ/Z) + ...

The coefficients {ci} may be functions of the data quantities, and both the coefficients and the errors may carry + or − signs. If this error equation is derived from the determinate-error rules, those signs are retained. The indeterminate error equation may be obtained directly from the determinate error equation by simply choosing the "worst case," i.e., by taking the absolute value of every term.
This forces all terms to be positive. This step should only be done after the error equation has been put into standard form. The error equation in standard form is one of the most useful tools for experimental design and analysis. It should be derived (in algebraic form) even before the experiment is begun, as a guide to experimental strategy. It can show which error sources dominate, and which are negligible, thereby saving time you might otherwise spend fussing with unimportant considerations. It can suggest how the effects of error sources may be minimized by appropriate choice of the sizes of variables. It can tell you how good a measuring instrument is needed to achieve a desired accuracy in the results. The student who neglects to derive and use this equation may spend an entire lab period using instruments, strategy, or values insufficient to the requirements of the experiment, and may have no idea that the results are inadequate until it is too late to remedy the situation.

A final comment for those who wish to use standard deviations as indeterminate error measures: since the standard deviation is obtained from the average of squared deviations, standard deviations combine by adding squares and taking the square root. For a product or quotient, for example:

(ΔR/R)² = (ΔA/A)² + (ΔB/B)²

This rule is given here without proof. This method of combining the error terms is called "summing in quadrature."
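Summing in quadrature always gives a smaller (or equal) estimate than the linear maximum-error sum. A minimal sketch with invented fractional errors:

```python
import math

# Comparing the maximum-error (linear) sum of fractional errors with the
# quadrature sum used for standard deviations. The values are illustrative.
fA, fB = 0.03, 0.04

linear = fA + fB                 # maximum-error rule
quad = math.sqrt(fA**2 + fB**2)  # quadrature; never exceeds the linear sum

print(linear, quad)
```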
The physical laws one encounters in elementary physics courses are expressed as equations, and these are combinations of the elementary operations of addition, subtraction, multiplication, division, raising to powers, etc. Laboratory experiments often take the form of verifying a physical law by measuring each quantity in the law. If the measurements agree within the limits of error, the law is said to have been verified by the experiment. For example, a body falling straight downward in the absence of frictional forces is said to obey the law:

s = v₀t + (1/2)at²
where s is the distance of fall, v₀ is the initial speed, t is the time of fall and a is the acceleration. In this case, a is the acceleration due to gravity, g, which has a value of about 980 cm/sec², varying slightly with latitude and altitude. More precise values of g are available, tabulated for any location on earth. There's a general formula for g near the earth, called Helmert's formula, which can be found in the Handbook of Chemistry and Physics. The student might design an experiment to verify this relation, and to determine the value of g, by measuring the time of fall of a body over a measured distance. One simplification may be made in advance, by measuring s and t from the position and
instant the body was at rest, just as it was released and began to fall. Then v₀ = 0, and the law reduces to s = (1/2)gt².
The student will, of course, repeat the experiment a number of times to obtain the average time of fall. The average values of s and t will be used to calculate g, using the rearranged equation:

g = 2s/t²    (Eq. 3-11)
Let fs and ft represent the fractional errors in s and t. Similarly, fg will represent the fractional error in g. The number "2" in the equation is not a measured quantity, so it is treated as error-free, or exact. So the fractional error in the numerator of Eq. 3-11 is, by the product rule, just fs, since the fractional error in the exact number 2 is zero. The fractional error in the denominator is, by the power rule, 2ft. By the quotient rule, the fractional error in the quotient is the sum of these, which, as we have indicated, is also the fractional error in g:

fg = fs + 2ft

The absolute error in g is:

Δg = g fg = g(fs + 2ft)    (Eq. 3-13)

Equations like 3-11 and 3-13 are called error equations, or error propagation equations. Some students prefer to express fractional errors in a quantity Q in the form ΔQ/Q. Using this style, our results are:

Δg/g = Δs/s + 2(Δt/t)
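As a sketch of how the error equation for g = 2s/t² guides the calculation, here is the computation with made-up measurements; s, t and their errors are assumptions for illustration, not data from the text:

```python
# g = 2s/t**2, with maximum fractional error fg = fs + 2*ft.
# The measurements below are invented for illustration.
s, ds = 490.0, 0.5    # distance of fall, cm
t, dt = 1.0, 0.01     # time of fall, sec

g = 2.0 * s / t**2    # 980.0 cm/sec^2
fs = ds / s           # fractional error in s
ft = dt / t           # fractional error in t
fg = fs + 2.0 * ft    # product rule on 2s, power rule on t**2, quotient rule
dg = g * fg           # absolute error in g

print(f"g = {g:.0f} ± {dg:.0f} cm/sec²")
```

Note how the time error is doubled by the power rule, so it dominates here: a good reason to measure t as carefully as possible.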
In this experiment we can recognize possible sources of determinate error: reaction time in using a stopwatch, stretch of the string used to measure the distance of fall. But, if you recognize a determinate error, you should take steps to eliminate it before you take the final set of data. Indeterminate errors show up as scatter in the independent measurements, particularly in the time measurement. The experimenter must examine these measurements and choose an appropriate estimate of the amount of this scatter, to assign a value to the indeterminate errors. Then, these estimates are used in the indeterminate error equation to predict the expected error in the value of g.
(1) Two data quantities, X and Y, are used to calculate a result, R = XY. X = 38.2 ± 0.3 and Y = 12.1 ± 0.2. What is the error in R? Solution: First calculate R without regard for errors: R = (38.2)(12.1) = 462.22 The product rule requires fractional error measure. The fractional error in X is 0.3/38.2 = 0.008 approximately, and the fractional error in Y is 0.017 approximately. Adding these gives the fractional error in R: 0.025. Multiplying this result by R gives 11.56 as the absolute error in R, so we write the result as R = 462 ± 12. Note that once we know the error, its size tells us how far to round off the result (retaining the first uncertain digit.) Note also that we round off the error itself to one, or at most two, digits. This is why we could safely make approximations during the calculations of the errors. This result is the same whether the errors are determinate or indeterminate, since no negative terms appeared in the determinate error equation. (2) A quantity Q is calculated from the law: Q = (G+H)/Z, and the data is:
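Worked example (1) can be reproduced directly, rounding the fractional errors to three places as the text does:

```python
# Reproducing worked example (1): R = XY with the product rule.
X, dX = 38.2, 0.3
Y, dY = 12.1, 0.2

R = X * Y                   # 462.22
fX = round(dX / X, 3)       # 0.008, rounded as in the text
fY = round(dY / Y, 3)       # 0.017
fR = fX + fY                # 0.025; fractional errors add for a product
dR = R * fR                 # about 11.6, rounded up to 12 in the text

print(round(R), round(dR))  # 462 12 -> report R = 462 ± 12
```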
G = 20 ± 0.5, H = 16 ± 0.5, and Z = 106 ± 1.0. The calculation of Q requires both addition and division, and gives Q = 0.340. The error calculation therefore requires both the rule for addition and the rule for division, applied in the same order as the operations were done in calculating Q. First, the addition rule says that the absolute errors in G and H add, so the error in the numerator (G+H) is 0.5 + 0.5 = 1.0. Therefore the fractional error in the numerator is 1.0/36 = 0.028. The fractional error in the denominator is 1.0/106 = 0.0094. The fractional determinate error in Q is 0.028 − 0.0094 = 0.0186, which is 1.86%. The absolute determinate error is (0.0186)Q = (0.0186)(0.340) = 0.006324. We quote the result in standard form: Q = 0.340 ± 0.006. If we knew the errors were indeterminate, we would instead add the fractional errors: 0.028 + 0.0094 = 0.0374, giving Q = 0.340 ± 0.013.
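Worked example (2) can also be checked numerically; the values of H and Z below are inferred from the numerator (36) and denominator (106) quoted in the example:

```python
# Reproducing worked example (2): Q = (G + H) / Z.
# H and Z are inferred from the sums quoted in the text.
G, dG = 20.0, 0.5
H, dH = 16.0, 0.5
Z, dZ = 106.0, 1.0

num = G + H                 # 36.0; absolute errors add, giving 1.0
Q = num / Z                 # about 0.340
f_num = (dG + dH) / num     # about 0.028
f_den = dZ / Z              # about 0.0094

det = Q * (f_num - f_den)   # determinate: signed terms partly cancel
ind = Q * (f_num + f_den)   # indeterminate: worst case, terms add

print(round(Q, 3), round(det, 3), round(ind, 3))
```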
(3.1) Devise a non-calculus proof of the product rules. (3.2) Devise a non-calculus proof of the quotient rules. Do this for the indeterminate error rule and the determinate error rule. Hint: Take the quotient of (A + ΔA) and (B - ΔB) to find the fractional error in A/B. Try all other combinations of the plus and minus signs. (3.3) The mathematical operation of taking a difference of two data quantities will often give very much larger fractional error in the result than in the data. Why can this happen? Does it follow from the above rules? Under what conditions does this generate very large errors in the results? (3.4) Show by use of the rules that the maximum error in the average of several quantities is the same as the maximum error of each of the individual quantities. This reveals one of the inadequacies of these rules for maximum error; there seems to be no advantage to taking an average. But more will be said of this later.
Rules have been given for addition, subtraction, multiplication, and division. Raising to a power was a special case of multiplication. You will sometimes encounter calculations with trig functions, logarithms, square roots, and other operations, for which these rules are not sufficient. The calculus treatment described in chapter 6 works for any mathematical operation, but many cases can be handled without calculus. The trick lies in the application of the general principle implicit in all of the previous discussion, and specifically used earlier in this chapter to establish the rules for addition and multiplication. This principle may be stated: write each data quantity with its error explicitly included, carry out the indicated mathematical operations, and retain only terms of first order in the errors.
Experimental investigations usually require measurement of a number of quantities, each subject to its own indeterminate errors.
When errors are independent, the mathematical operations leading to the result tend to average out the effects of the errors. This makes it less likely that the errors in results will be as large as predicted by the maximum-error rules. A simple modification of these rules gives more realistic predictions of the size of the errors in results. These modified rules are presented here without proof. They are, in fact, somewhat arbitrary, but do give realistic estimates that are easy to calculate. The previous rules are modified by replacing "sum of" with "square root of the sum of the squares of." Instead of summing, we "sum in quadrature." This modification is used only when dealing with indeterminate errors, so we restate the modified indeterminate error rules: for sums and differences, ΔR = √[(ΔA)² + (ΔB)²]; for products and quotients, ΔR/R = √[(ΔA/A)² + (ΔB/B)²].
Raising a number to a power might seem to be simply a case of multiplication: A² = (A)(A). But the errors in the two "factors" are not independent; they are identical, so they cannot offset each other, and the quadrature rule does not apply. The power rule is unchanged: the fractional indeterminate error in A² is 2fA.
(3.11) What is the fractional indeterminate error in A²? (3.12) What is the fractional indeterminate error in 3A? (The number 3 is error-free.)
As an example of these rules, let's reconsider the case of averaging several quantities.
We previously stated that the process of averaging did not reduce the size of the error. Now that we recognize that repeated measurements are independent, we can do better. Suppose n measurements are made of a quantity, Q. The fractional error may be assumed to be nearly the same for all of these measurements. Call it f. Then our data table is:

Q1 ± fQ1
Q2 ± fQ2
....
Qn ± fQn

The first step in taking the average is to add the Qs. The error in the sum is given by the modified sum rule:

Error in sum = √[(fQ1)² + (fQ2)² + ... + (fQn)²]
But each of the Qs is nearly equal to their average, <Q>, so the error in the sum is:
Error in sum = (f√n)<Q> . The next step in taking the average is to divide the sum by n. There is no error in n (counting is one of the few measurements we can do perfectly.) So the fractional error in the quotient is the same size as the fractional error in the numerator.
Therefore, the fractional error in an average of n measurements is f/√n, and the absolute error in the average is (f/√n)<Q>.
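The 1/√n reduction can be checked with a simple Monte-Carlo sketch; all numbers here are invented for illustration:

```python
import random
import statistics

# Monte-Carlo check that the scatter of an average of n independent
# measurements is about 1/sqrt(n) of the scatter of a single measurement.
random.seed(1)
Q_true, sigma, n = 100.0, 2.0, 25

averages = []
for _ in range(2000):
    trials = [random.gauss(Q_true, sigma) for _ in range(n)]
    averages.append(sum(trials) / n)

spread = statistics.stdev(averages)   # observed scatter of the averages
predicted = sigma / n ** 0.5          # the 1/sqrt(n) prediction

print(spread, predicted)              # the two agree closely
```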
(3.13) Derive an expression for the fractional and absolute error in an average of n
measurements of a quantity Q when each measurement has the same fractional error f.

© 1996, 2004 by Donald E. Simanek.