## III. Theory

Given the pervasiveness of random error, one of the tasks of the experimentalist is to estimate the probability that someone who performs an apparently identical experiment will obtain a different result. (Measuring the width of the tabletop mentioned above would be a simple example because, among other things, "width" cannot be defined precisely.) Such estimates are based upon a mathematical model which fits the vast majority of cases: the Gaussian or "normal" distribution, which looks as follows:

*Figure: frequency of occurrence of a given value of x.*

This curve is a plot of the function

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x - m)^2 / 2\sigma^2} $$
The function is characterized by two parameters: the mean $m$, which tells us where the peak of the curve falls along the x axis, and the standard deviation $\sigma$, which tells us how wide the curve is. We imagine that this curve is a good approximation to the one that would be obtained if we plotted the frequency of occurrence of a given value of x upon repeating a given measurement a large number of times.
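To make this picture concrete, the following sketch simulates repeating a measurement many times. The values $m = 10.0$ cm and $\sigma = 0.5$ cm are purely illustrative assumptions, not values from the example below.

```python
import random
import statistics

# Illustrative (assumed) true mean and standard deviation.
m, sigma = 10.0, 0.5

random.seed(0)  # fixed seed so the simulation is repeatable

# Simulate a large number of repeated measurements drawn from
# a Gaussian distribution with parameters m and sigma.
samples = [random.gauss(m, sigma) for _ in range(100_000)]

# With many repetitions, the sample mean and spread land close
# to the parameters m and sigma of the underlying curve.
print(round(statistics.mean(samples), 2))
print(round(statistics.stdev(samples), 2))
```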

Given a set of numbers which have been obtained by performing a small number N of identical measurements, we would like to calculate $m$ and $\sigma$. Strictly speaking, this is not possible, for we would have to repeat the experiment an infinite number of times. However, we can obtain an estimate of the mean $m$ by calculating the mean of the numbers we have obtained from some small number of measurements. This estimate, designated M, is given by a familiar procedure:

$$ M = \frac{1}{N}\sum_{i=1}^{N} x_i $$

where the $x_i$ represent the experimental values.
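As a quick sketch, the estimate M is just the ordinary average. The values here are the meter-stick readings from the example later in this section:

```python
# Estimate M of the mean: M = (1/N) * sum of the x_i.
measurements = [10.13, 10.24, 10.09, 10.41, 10.26]  # cm
M = sum(measurements) / len(measurements)
print(round(M, 2))  # 10.23
```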

Similarly, we can only estimate the standard deviation, and this estimate, designated S, is given by

$$ S = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - M)^2} $$
(Notice that it is not possible to get an estimate of the standard deviation by making only a single measurement.) This last computation looks complicated but, in practice, is simple to perform. For example, suppose we use a meter stick to measure the length of some object. A table of our results might look as follows:

| measurement | result (cm) | $x_i - M$ | $(x_i - M)^2$ |
|-------------|-------------|-----------|----------------|
| $x_1$       | 10.13       | $-0.10$   | 0.0100         |
| $x_2$       | 10.24       | $+0.01$   | 0.0001         |
| $x_3$       | 10.09       | $-0.14$   | 0.0196         |
| $x_4$       | 10.41       | $+0.18$   | 0.0324         |
| $x_5$       | 10.26       | $+0.03$   | 0.0009         |
| SUM         | 51.13       |           | 0.0630         |

Thus,

$$ M = \frac{51.13}{5} \approx 10.23\ \text{cm}, \qquad S = \sqrt{\frac{0.0630}{4}} \approx 0.13\ \text{cm}. $$
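The whole computation of the table above can be sketched in a few lines. (The table rounds M to 10.23 cm before squaring the deviations, which is why its sum, 0.0630, differs in the fourth decimal from the unrounded value; the estimate S is the same to two decimals either way.)

```python
import math

measurements = [10.13, 10.24, 10.09, 10.41, 10.26]  # cm
N = len(measurements)

# Estimate of the mean: M = (1/N) * sum of the x_i.
M = sum(measurements) / N

# Estimate of the standard deviation:
# S = sqrt( sum of (x_i - M)^2 / (N - 1) ).
sum_sq = sum((x - M) ** 2 for x in measurements)
S = math.sqrt(sum_sq / (N - 1))

print(round(M, 2))  # 10.23
print(round(S, 2))  # 0.13
```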

Now we ask the question: "What is the probability that my estimate of the mean, M, based upon a small number of measurements, will fall close to the true mean, m, which is based on a large (ideally, infinite) number of measurements?"  The answer to this question lies in the following observation: if different people performed the same measurement, each would probably obtain a different estimate of the mean and a different estimate of the standard deviation.  However (and this is the crucial point), there would be a narrower spread in the values of these computed means than there was in each of the individual measurements.  In other words, the standard deviation of the computed means is smaller than the standard deviations of the individual measurements.  It is this value - the standard deviation of the means - which gives us the probability we sought in the above question.
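The crucial observation above can be checked by simulation. Here many hypothetical experimenters each take N = 5 measurements; the "true" values m = 10.23 cm and σ = 0.13 cm are assumed for illustration. The spread of the computed means comes out noticeably narrower than the spread of the individual measurements.

```python
import random
import statistics

random.seed(1)
m, sigma, N = 10.23, 0.13, 5  # assumed "true" values, N readings per experimenter

means = []        # each experimenter's estimate of the mean
individuals = []  # every raw measurement, pooled
for _ in range(2000):
    xs = [random.gauss(m, sigma) for _ in range(N)]
    individuals.extend(xs)
    means.append(statistics.mean(xs))

spread_individual = statistics.stdev(individuals)
spread_means = statistics.stdev(means)

# The computed means cluster more tightly than the raw measurements
# (by roughly a factor of sqrt(N), as the next paragraph states).
print(spread_means < spread_individual)  # True
```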

Theory tells us that a good estimate of the standard deviation in sample means is $S/\sqrt{N}$. This number is called the standard error (SE) in our measurements. (In the example above, the standard error is $SE = 0.13/\sqrt{5} \approx 0.06$ cm.) Notice that although our estimate of the mean and standard deviation might not change appreciably as the number of measurements is increased, the standard error tends to diminish. This estimate of the standard deviation of the means is an approximation that is valid only for small N; it does not mean that the standard error goes to zero as N gets large. You should also realize that the ultimate limit on the smallness of the standard error is determined by instrumental sensitivity.
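Applying $SE = S/\sqrt{N}$ to the worked example:

```python
import math

S, N = 0.13, 5  # standard deviation estimate (cm) and number of measurements

# Standard error: the estimated standard deviation of the sample means.
SE = S / math.sqrt(N)
print(round(SE, 2))  # 0.06
```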

This standard error can then be used to express our confidence in our estimate of the mean as follows:

If we, or others, repeatedly perform a small number of similar measurements, then the interval M ± SE will overlap the true mean approximately 6 times out of 10. The interval determined by ±SE is called a 60% confidence interval. In our example, we are 60% confident that the interval 10.23 ± 0.06 cm will overlap the true mean.

If we, or others, repeatedly perform a small number of similar measurements, then the interval M ± 2SE will overlap the true mean approximately 9 times out of 10. The interval ±2SE is called a 90% confidence interval. In our example, we are 90% confident that the interval 10.2 ± 0.1 cm will overlap the true mean. Here 2SE = 0.12 cm, which is then rounded to one significant digit.
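The "6 times out of 10" and "9 times out of 10" claims can be checked directly by simulating many repetitions of the whole experiment, again assuming illustrative true values m = 10.23 cm and σ = 0.13 cm:

```python
import math
import random
import statistics

random.seed(2)
m, sigma, N = 10.23, 0.13, 5  # assumed true values; N measurements per trial

covered_1se = covered_2se = 0
trials = 5000
for _ in range(trials):
    xs = [random.gauss(m, sigma) for _ in range(N)]
    M = statistics.mean(xs)
    SE = statistics.stdev(xs) / math.sqrt(N)
    # Does the interval M +/- SE (or M +/- 2 SE) overlap the true mean m?
    if abs(M - m) <= SE:
        covered_1se += 1
    if abs(M - m) <= 2 * SE:
        covered_2se += 1

print(round(covered_1se / trials, 2))  # roughly 0.6
print(round(covered_2se / trials, 2))  # roughly 0.9
```

For N as small as 5 the coverage runs slightly below 68% and 95% (the Gaussian values), which is consistent with the approximate "6 in 10" and "9 in 10" figures quoted above.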

Note that we are not comparing our measurements to each other; we are always comparing to the true mean which would be found only by doing an infinite number of measurements. If your 90% confidence interval for a measured quantity does not agree with the accepted value, then you should investigate the systematic errors that may have been present in your experiment.

YOU ARE NOW READY TO COMPLETE THE PRE-LAB EXERCISES ON THE WEB. THIS SHOULD BE TURNED IN BEFORE YOU ENTER THE LAB.