II. Introduction to Terms

In the laboratory, neither the measuring instrument nor the measuring procedure is ever perfect; consequently, every experiment is subject to experimental error. A reported result that does not include an estimate of the experimental error is seriously incomplete.

Experimental errors are generally classified under two broad categories: systematic errors and random errors.

Systematic errors include errors due to the calibration of instruments and errors due to faulty procedures. When reporting results in scientific journals, one must often take great pains to ensure that one's meter sticks and clocks, for example, have been accurately calibrated against international standards of length and time. However, even a properly calibrated instrument can still be used in a way that leads to systematically wrong results.

Random errors include errors of judgment in reading a meter or a scale and errors due to fluctuating experimental conditions. In addition to environmental fluctuations (e.g., the temperature of the laboratory or the value of the line voltage), there will be fluctuations caused by the fact that many experimental parameters are not exactly defined. For example, the width of a table top might be said to be 1 meter, but close examination would show that opposite edges are not precisely parallel and a microscopic examination would reveal that the edges are quite rough. How, then, can we even define what we mean by the width of the table?

When the systematic errors in an experiment are small, the experiment is said to be accurate.

When the random errors in an experiment are small, the experiment is said to be precise.
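As a rough numerical illustration of the two terms, the sketch below simulates two sets of repeated length measurements: one offset from the true value by a fixed calibration error (precise but not accurate), the other centered on the true value but widely scattered (accurate but not precise). The true width, the offset, and the sizes of the scatter are invented for the illustration and do not refer to any particular instrument.

```python
import random

random.seed(0)
TRUE_WIDTH = 1.0000  # "true" table width in meters (assumed for this illustration)

# Precise but not accurate: a miscalibrated scale adds a fixed 5 mm offset,
# while the random scatter of individual readings is small (about 0.5 mm).
precise_not_accurate = [TRUE_WIDTH + 0.005 + random.gauss(0, 0.0005) for _ in range(10)]

# Accurate but not precise: no systematic offset, but each reading
# scatters by about 5 mm around the true value.
accurate_not_precise = [TRUE_WIDTH + random.gauss(0, 0.005) for _ in range(10)]

def mean(xs):
    return sum(xs) / len(xs)

print("precise but not accurate: mean = %.4f m" % mean(precise_not_accurate))
print("accurate but not precise: mean = %.4f m" % mean(accurate_not_precise))
```

The first set of readings agrees closely with itself but its mean is shifted away from the true value; the second set averages close to the true value even though individual readings disagree noticeably.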

Since random errors are unavoidable, we should never use a measuring instrument so crude that these random errors go undetected. We should always use a measuring instrument which is sufficiently sensitive so that duplicate measurements do not yield duplicate results. Correct procedure requires that you report your readings of scales and meters by estimating to one tenth of the smallest scale division (excluding, of course, those cases in which the instrument is designed to give digital readout or when the measuring instrument is so crude that this estimation is meaningless). The sensitivity of your measuring instrument determines the ultimate precision of any measurement.
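To see why a sufficiently sensitive instrument is needed, the sketch below compares readings of the same fluctuating length recorded two ways: rounded to the nearest whole scale division (a crude reading) and estimated to one tenth of a division. The length, the division size, and the size of the random fluctuation are made up for the illustration.

```python
import random

random.seed(1)
TRUE_LENGTH = 31.42   # length in millimeters (assumed for this illustration)
FLUCTUATION = 0.08    # typical random fluctuation of a single reading, in mm

readings = [TRUE_LENGTH + random.gauss(0, FLUCTUATION) for _ in range(5)]

# Recording only whole millimeter divisions: every reading comes out
# the same, and the random error goes undetected.
crude = [round(r) for r in readings]

# Estimating to one tenth of the smallest division (0.1 mm): the duplicate
# measurements no longer agree exactly, so the random error is visible.
estimated = [round(r, 1) for r in readings]

print("crude readings (1 mm divisions):   ", crude)
print("estimated readings (to 0.1 mm):    ", estimated)
```

With the crude readings, the instrument is effectively too insensitive to reveal the fluctuations; the estimated readings make the random error apparent, which is exactly what the rule about estimating to one tenth of the smallest division is meant to achieve.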