The material of this chapter is intended for the student who has familiarity with calculus concepts and certain other mathematical techniques. In particular, we will assume familiarity with:

  1. Functions of several variables.
  2. Evaluation of partial derivatives, and the chain rules of differentiation.
  3. Manipulation of summations in algebraic contexts.

At this mathematical level our presentation can be briefer. We can dispense with the tedious explanations and elaborations of previous chapters.


If a result R = R(x,y,z) is calculated from a number of data quantities, x, y and z, then the relation:


    dR = (∂R/∂x) dx + (∂R/∂y) dy + (∂R/∂z) dz        (6.1)

holds. This is one of the "chain rules" of calculus. This equation has as many terms as there are variables.

Then, if the fractional errors are small, the differentials dR, dx, dy and dz may be replaced by the absolute errors ΔR, Δx, Δy, and Δz, and written:


    ΔR ≈ (∂R/∂x) Δx + (∂R/∂y) Δy + (∂R/∂z) Δz        (6.2)

Strictly, this is no longer an equality, but an approximation to ΔR, since the higher-order terms in the Taylor expansion have been neglected. So long as the errors are of the order of a few percent or less, this will not matter. This equation is now an error propagation equation.
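
The propagation equation can be checked numerically. The sketch below (not from the text) estimates each partial derivative by a central difference; the function R = xy/z and the error values are illustrative assumptions.

```python
# Sketch: evaluate ΔR ≈ (∂R/∂x)Δx + (∂R/∂y)Δy + (∂R/∂z)Δz numerically,
# estimating each partial derivative by a central difference.
# The function and data values are illustrative, not from the text.

def propagate(R, values, errors, h=1e-6):
    """Return the determinate (signed) error ΔR to first order."""
    dR = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        plus, minus = list(values), list(values)
        plus[i], minus[i] = v + h, v - h
        partial = (R(*plus) - R(*minus)) / (2 * h)  # ∂R/∂v_i
        dR += partial * dv
    return dR

R = lambda x, y, z: x * y / z
print(propagate(R, [2.0, 3.0, 4.0], [0.01, -0.02, 0.005]))
```

Note that the signed errors may partly cancel, as the determinate rule requires.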


Finally, divide equation (6.2) by R:

    ΔR/R = (x/R)(∂R/∂x)(Δx/x) + (y/R)(∂R/∂y)(Δy/y) + (z/R)(∂R/∂z)(Δz/z)        (6.3)

The factors of the form Δx/x, Δy/y, etc., are relative (fractional) errors. This equation shows how the errors in the result depend on the errors in the data. Eqs. 6.2 and 6.3 are called the standard form error equations. They are also called determinate error equations, because they are strictly valid for determinate errors (not indeterminate errors). [We'll get to indeterminate errors soon.]

The coefficients of the fractional errors in Eq. 6.3 are of the form (x/R)(∂R/∂x). These play the very important role of "weighting" factors in the various error terms. At this point numeric values of the relative errors could be substituted into this equation, along with the other measured quantities, x, y, z, to calculate ΔR.
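
The weighting factors can be evaluated directly. For the illustrative form R = xy/z (an assumption, not from the text), each weight works out to ±1, so every fractional error enters with unit weight:

```python
# Sketch: the weighting factors (x/R)(∂R/∂x) etc. for R = x*y/z,
# evaluated analytically. Values are illustrative.
x, y, z = 2.0, 3.0, 4.0
R = x * y / z
wx = (x / R) * (y / z)            # ∂R/∂x = y/z
wy = (y / R) * (x / z)            # ∂R/∂y = x/z
wz = (z / R) * (-x * y / z**2)    # ∂R/∂z = -x*y/z²
print(wx, wy, wz)                 # each ≈ ±1
```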

Notice the character of the standard form error equation. It has one term for each error source, and that error value appears only in that one term. The relative error due to a variable, say x, is Δx/x, and the size of the term it appears in represents the size of that error's contribution to the error in the result, R. The relative sizes of the error terms represent the relative importance of each variable's contribution to the error in the result.

This equation clearly shows which error sources are predominant, and which are negligible. This can aid in experiment design, to help the experimenter choose measuring instruments and values of the measured quantities to minimize the overall error in the result.

The determinate error equation may be developed even in the early planning stages of the experiment, before collecting any data, and then tested with trial values of data.

The coefficients in each term may have + or - signs, and so may the errors themselves.

The standard form error equations also allow one to perform "after-the-fact" correction for the effect of a consistent measurement error (as might happen with a miscalibrated measuring device).

Example 1: If R = X^(1/2), how does dR relate to dX?

    dR = (1/2) X^(-1/2) dX,  which is  dR = dX/(2√X)

Divide both sides by R = √X:

    dR/√X = dR/R = (1/2)(dX/X)

which shows that the fractional error in the square root of X is half of the fractional error in X.
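
A quick numerical check of this result, with illustrative values not taken from the text:

```python
import math

# Check of Example 1: for R = X^(1/2), the fractional change in R
# is about half the fractional change in X (for small errors).
X, dX = 100.0, 1.0               # a 1% error in X (illustrative)
R = math.sqrt(X)
dR = math.sqrt(X + dX) - R       # actual change in R
print(dR / R, 0.5 * dX / X)      # both ≈ 0.005
```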

Example 2: If R = XY, how does dR relate to dX and dY?

    ∂R/∂X = Y  and  ∂R/∂Y = X,  so  dR = Y dX + X dY

then, dividing by XY gives

    dR/(XY) = dR/R = dX/X + dY/Y

which agrees with the product rule previously given.
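
The product rule, too, can be verified numerically (values are illustrative, not from the text):

```python
# Check of Example 2: for R = X*Y, the fractional errors add
# to first order.
X, dX = 50.0, 0.5    # 1% error (illustrative)
Y, dY = 20.0, 0.1    # 0.5% error (illustrative)
R = X * Y
dR = (X + dX) * (Y + dY) - R
print(dR / R, dX / X + dY / Y)   # ≈ 0.01505 vs 0.015
```

The small discrepancy is the neglected second-order term (dX/X)(dY/Y).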

In many error calculations, especially those involving powers, products or quotients, it is convenient to take the logarithm of the expression before taking the differentials.

Example 3: Do the last example using the logarithm method.

log R = log X + log Y

Take differentials.

    dR/R = dX/X + dY/Y

This saves a few steps.

Example 4: R = x²y³.

log R = 2 log(x) + 3 log(y)

    dR/R = 2(dx/x) + 3(dy/y)
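
A numerical check of this power-law result, with illustrative values:

```python
# Check of Example 4: for R = x²y³, the fractional error is
# ≈ 2(dx/x) + 3(dy/y). Values are illustrative, not from the text.
x, dx = 3.0, 0.003   # 0.1% error
y, dy = 2.0, 0.004   # 0.2% error
R = x**2 * y**3
dR = (x + dx)**2 * (y + dy)**3 - R
print(dR / R, 2 * dx / x + 3 * dy / y)   # ≈ 0.00803 vs 0.008
```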

Example 5: R = sin(θ)

dR = cos(θ)dθ

Or, if the relative error is wanted,

    dR/R = cos(θ)dθ/sin(θ) = dθ/tan(θ) = [θ/tan(θ)](dθ/θ)
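
A numerical check of the sine example, with an illustrative angle and error:

```python
import math

# Check of Example 5: for R = sin(θ), the fractional error
# is ≈ dθ/tan(θ). Values are illustrative, not from the text.
theta, dtheta = 0.5, 0.001       # radians
R = math.sin(theta)
dR = math.sin(theta + dtheta) - R
print(dR / R, dtheta / math.tan(theta))
```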


(3.7) The phase velocity of sound in a string is given in terms of the tension, T, and mass per unit length, m, by

    U = (T/m)^(1/2)

Find the error in U in terms of the errors in T and m.

(3.8) The index of refraction of a prism is given by:

    n = sin[(A+D)/2]/sin[A/2]

where A is the prism angle and D is the angle of deviation of a light ray passing through the prism. Find an expression for the absolute error in n.

(3.9) The focal length, f, of a lens is given by:

    1/f = 1/p + 1/q

where p and q are the object and image distances. Write an expression for the fractional error in f. When is this error largest? When is it least?


The use of the chain rule described in section 6.2 correctly preserves relative signs of all quantities, including the signs of the errors. It is therefore appropriate for determinate (signed) errors.

Indeterminate errors have indeterminate sign, and their signs are as likely to be positive as negative. The equations resulting from the chain rule must be modified to deal with this situation:

(1) The signs of each term of the error equation are made positive, giving a "worst case" result. This modification gives an error equation appropriate for errors expressed as maximum error, or limits of error.

(2) The terms of the error equation are added in quadrature, to take account of the tendency of independent errors' signs to offset each other. This modification gives an error equation appropriate for error estimates expressed as average deviations or standard deviations.

The "worst case" is rather unlikely, especially if many data quantities enter into the calculations. The variations in independently measured quantities have a tendency to offset each other, and the best estimate of error in the result is smaller than the "worst-case" limits of error. Statistical theory provides ways to account for this tendency of "random" data. These methods build upon the "least squares" principle and are strictly applicable to cases where the errors have a nearly-Gaussian distribution.
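
The offsetting tendency is easy to demonstrate by simulation. The sketch below (an illustration, not from the text) draws independent Gaussian errors of equal size for two quantities and shows that their typical combined effect matches the quadrature rule, not the worst-case sum:

```python
import math, random

# Sketch: simulate R = x + y where x and y carry independent Gaussian
# errors of equal size. The typical (rms) combined error matches the
# quadrature rule, well below the worst case. Parameters illustrative.
random.seed(1)
dx = dy = 1.0
errors = [random.gauss(0, dx) + random.gauss(0, dy) for _ in range(50000)]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(rms, math.sqrt(dx**2 + dy**2), dx + dy)   # rms ≈ √2, well below 2
```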

    Legendre's principle of least squares asserts that the curve of "best fit" to scattered data is the curve drawn so that the sum of the squares of the data points' deviations from the curve is smallest. See Sec. 8.2 (3).

In such cases, the appropriate error measure is the standard deviation. The equation for propagation of standard deviations is easily obtained by rewriting the determinate error equation. Just square each error term; then add them. The result is the square of the error in R:

    (ΔR)² = [(∂R/∂x) Δx]² + [(∂R/∂y) Δy]² + [(∂R/∂z) Δz]²        (6.4)

    This procedure is not a mathematical derivation, but merely an easy way to remember the correct formula for standard deviations by relating it in a simple manner to the previously derived formula for maximum errors. Also, the reader should understand that all of these equations are approximate, appropriate only to the case where the relative error sizes are small.


The error measures, Δx/x, etc., are now interpreted as standard deviations, s; therefore the error equation for standard deviations is:

    (s_R/R)² = [(x/R)(∂R/∂x)(s_x/x)]² + [(y/R)(∂R/∂y)(s_y/y)]² + [(z/R)(∂R/∂z)(s_z/z)]²        (6.5)


This method of combining the error terms is called "summing in quadrature."
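
Summing in quadrature is a one-line computation. The helper below is a sketch for the common case where each fractional error enters with unit weight; the function name and inputs are illustrative:

```python
import math

# Sketch: combine fractional standard deviations in quadrature
# (the unit-weight case). Helper name and inputs are illustrative.

def quadrature(*fractional_errors):
    return math.sqrt(sum(f * f for f in fractional_errors))

print(quadrature(0.02, 0.03, 0.06))   # ≈ 0.07
```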


(6.6) What is the fractional error in A^(-n) in terms of the fractional error in A?

(6.7) What is the fractional error in A^A (A raised to the power A)?

(6.8) What is the fractional error in 3^A? (The number 3 has no error.)

(6.9) Derive an expression for the fractional and absolute error in an average of n measurements of a quantity Q when each measurement of Q has a different fractional error. The result is most simply expressed using summation notation, designating each measurement by Qi and its fractional error by fi.


When the calculated result depends on a number of independently measured quantities, with a number of independent trials for each measurement, the propagation rules of section 6.4 are appropriate.

Often some errors dominate others. Consider the multiplication of two quantities, one having an error of 10%, the other having an error of 1%. The error in the product of these two quantities is then:

    √(10² + 1²) = √(100 + 1) = √101 ≈ 10.05

If two errors are a factor of 10 or more different in size, and combine by quadrature, the smaller error has negligible effect on the error in the result. In such instances it is a waste of time to carry out that part of the error calculation.
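
The same point in fractional form:

```python
import math

# The 10%-and-1% example in fractional form: in quadrature, the
# smaller error barely changes the result.
combined = math.sqrt(0.10**2 + 0.01**2)
print(combined)   # ≈ 0.1005, essentially the larger error alone
```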

In such cases the experimenter should consider whether experiment redesign, or a different method, or better procedure, might improve the results. Especially if the error in one quantity dominates all of the others, steps should be taken to improve the measurement of that quantity. Conversely, it is usually a waste of time to try to improve measurements of quantities whose errors are already negligible compared to others.


We said that the process of averaging should reduce the size of the error of the mean. That is, the more data you average, the better is the mean. We are now in a position to demonstrate under what conditions that is true.

    We are using the word "average" as a verb to describe a process. The result of the process of averaging is a number, called the "mean" of the data set. The term "average deviation" is a number that is the measure of the dispersion of the data set. Sometimes "average deviation" is used as the technical term to express the dispersion of the parent distribution.

THEOREM 1: The error in a mean is not reduced when the error estimates are average deviations.


The mean of n values of x is:

    <x> = (x1 + x2 + ... + xn)/n

The average deviation of the mean is obtained from the propagation rule appropriate to average deviations, in which the error terms add with positive signs. If each measurement xi has the same average deviation, a, then:

    a<x> = (a1 + a2 + ... + an)/n = (n a)/n = a

so the average deviation of the mean is just as large as the average deviation of a single measurement.

THEOREM 2: The error in the mean is reduced by the factor 1/√n when the error estimates are standard deviations, obtained by taking the square root of the sum of the squares of the deviations.


The mean of n values of x is:

    <x> = (x1 + x2 + ... + xn)/n

Let the error estimate be the standard deviation. Such errors propagate by equation 6.5. If each measurement xi has the same standard deviation, s, then:

    s<x> = √(s1² + s2² + ... + sn²)/n = √(n s²)/n = s/√n

Clearly any constant factor placed before all of the standard deviations "goes along for the ride" in this derivation. Therefore the result is valid for any error measure which is proportional to the standard deviation.
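
The 1/√n reduction can be seen in a simulation. The sketch below (illustrative parameters, not from the text) computes many means of n Gaussian values and compares their spread with s/√n:

```python
import math, random, statistics

# Sketch of Theorem 2: the standard deviation of a mean of n values
# falls like 1/√n. Parameters are illustrative.
random.seed(2)
n, sigma, trials = 25, 1.0, 5000
means = [statistics.fmean(random.gauss(0, sigma) for _ in range(n))
         for _ in range(trials)]
print(statistics.stdev(means), sigma / math.sqrt(n))   # both ≈ 0.2
```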

© 1996, 2004 by Donald E. Simanek.