# Errors, Theory of


the branch of mathematical statistics concerned with the analysis of measurement errors and with the refinement of the numerical values of approximately measured quantities. Repeated measurements of a constant quantity generally yield different results, since each measurement contains some error. There are three basic kinds of errors: systematic errors, blunders, and random errors. Systematic errors consistently either exaggerate or understate the results of measurements; they have specific causes (improper setup of the measuring instruments, the influence of an environmental factor) that systematically affect the measurements. Such errors are estimated by methods that lie beyond the purview of mathematical statistics. Blunders arise from causes such as errors in counting and the incorrect reading of instruments. The results of measurements involving blunders differ markedly from the other measurement results and are therefore often easy to notice. Random errors stem from various random factors that act in an unforeseen manner during each individual measurement, sometimes decreasing and sometimes increasing the results.

The theory of errors deals only with blunders and random errors. Its basic problems are the ascertainment of the laws of distribution of random errors; establishment, based on measurement results, of estimates of the unknown quantities being measured; determination of the errors in the estimates; and elimination of blunders.

Let the values *x*_{1}, *x*_{2}, …, *x*_{n} be obtained from *n* independent, equally accurate measurements of the quantity *a*. The differences

*δ*_{1} = *x*_{1} – *a*, …, *δ*_{n} = *x*_{n} – *a*

are called the true errors. In terms of the probabilistic theory of errors, the *δ*_{i} are interpreted as random quantities; the independence of the measurements is taken to mean the mutual independence of the random quantities *δ*_{1}, …, *δ*_{n}. The equal accuracy of the measurements is interpreted in the broad sense as identical distribution; that is, the true errors of equally accurate measurements are identically distributed random quantities. The mathematical expectation of the random errors

*b* = E*δ*_{1} = … = E*δ*_{n}

is called the systematic error, and the differences

*δ*_{1} – *b*, …, *δ*_{n} – *b*

are called random errors. Thus, the absence of a systematic error means that *b* = 0, in which case *δ*_{1}, …, *δ*_{n} are random errors. The quantity 1/(*σ*√2), where *σ* is the root-mean-square deviation, is called the index of precision. When a systematic error is present, the index of precision is expressed by the ratio 1/√(2(*b*^{2} + *σ*^{2})). In the narrow sense, equal accuracy of measurements means that the index of precision is identical for all measurement results. The presence of blunders indicates that equal accuracy in both the narrow and the broad senses has been violated for some individual measurements. The arithmetic mean of the results of the measurements
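These notions can be illustrated by a small simulation. The sketch below is hypothetical: the values of `a`, `b`, `sigma`, and the sample size are chosen for illustration, and the index of precision is taken in its usual form *h* = 1/(*σ*√2).

```python
import math
import random

random.seed(1)

# Hypothetical setup: true value a, systematic error b, and normally
# distributed random errors with root-mean-square deviation sigma.
a, b, sigma, n = 10.0, 0.3, 0.5, 100_000

# Each measurement: x_i = a + b + (random error)
x = [a + b + random.gauss(0.0, sigma) for _ in range(n)]

# True errors delta_i = x_i - a; their mean estimates the systematic error b
delta = [xi - a for xi in x]
b_est = sum(delta) / n

# Index of precision h = 1/(sigma * sqrt(2)) in the absence of systematic error
h = 1.0 / (sigma * math.sqrt(2.0))

print(round(b_est, 2))   # close to b = 0.3
print(round(h, 3))
```

The mean of the true errors converges to the systematic error *b* as the number of measurements grows, which is why systematic error cannot be detected from the scatter of a single series of measurements alone.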

*x̄* = (*x*_{1} + … + *x*_{n})/*n*

is usually selected as an estimate of the unknown quantity *a*, and the differences

*Δ*_{1} = *x*_{1} – *x̄*, …, *Δ*_{n} = *x*_{n} – *x̄*

are called the apparent errors. The selection of *x̄* as an estimate for *a* is based on the law of large numbers: when the number *n* of equally accurate measurements lacking a systematic error is sufficiently large, the estimate *x̄* differs arbitrarily little from the unknown quantity *a* with a probability arbitrarily close to unity. The estimate *x̄* lacks systematic error; estimates with this property are said to be unbiased. The variance of the estimate is

D*x̄* = E(*x̄* – *a*)^{2} = *σ*^{2}/*n*
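The relation D*x̄* = *σ*^{2}/*n* can be checked empirically. The following sketch, with hypothetical values of *a*, *σ*, and *n*, repeats the whole experiment many times and compares the empirical variance of *x̄* with *σ*^{2}/*n*.

```python
import random
from statistics import mean

random.seed(2)

# Hypothetical values: true quantity a, rms deviation sigma,
# measurements per experiment n, number of repeated experiments.
a, sigma, n, trials = 5.0, 2.0, 25, 20_000

means = []
for _ in range(trials):
    x = [a + random.gauss(0.0, sigma) for _ in range(n)]
    means.append(mean(x))            # x_bar for one experiment

# Empirical variance of x_bar across experiments vs. sigma^2 / n
emp_var = sum((m - a) ** 2 for m in means) / trials
theory = sigma ** 2 / n              # = 4 / 25 = 0.16

print(round(emp_var, 3), theory)
```

Averaging *n* measurements thus reduces the variance of the estimate by a factor of *n* compared with a single measurement.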

Experience shows that the random errors *δ*_{i} often conform to distributions close to the normal distribution; the reasons for this are given by the limit theorems of probability theory. In this case, the quantity *x̄* has a distribution that differs little from a normal distribution with mathematical expectation *a* and variance *σ*^{2}/*n*. If the distributions of the *δ*_{i} are precisely normal, then the variance of any other unbiased estimate for *a*, such as the median, is at least D*x̄*. But this property does not hold if the distribution of the *δ*_{i} is non-normal.

If the variance *σ*^{2} of the individual measurements is not known in advance, it can be estimated by the quantity

*s*^{2} = [(*x*_{1} – *x̄*)^{2} + … + (*x*_{n} – *x̄*)^{2}]/(*n* – 1)

Here E*s*^{2} = *σ*^{2}; that is, *s*^{2} is an unbiased estimate for *σ*^{2}. If the random errors *δ*_{i} have a normal distribution, the ratio

*t* = √*n*(*x̄* – *a*)/*s*

obeys Student’s distribution with *n* – 1 degrees of freedom. This fact can be used to estimate the error of the approximate equality *a* ≈ *x̄*.
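A brief sketch of these estimates, with hypothetical values for *a*, *σ*, and *n*: it forms *s*^{2} from one set of measurements together with the Student ratio √*n*(*x̄* – *a*)/*s*, and then averages *s*^{2} over many repetitions to illustrate that E*s*^{2} = *σ*^{2}.

```python
import math
import random
from statistics import mean

random.seed(3)

# Hypothetical values: true quantity a, rms deviation sigma, sample size n.
a, sigma, n = 3.0, 0.8, 12

# One set of measurements: unbiased estimate s^2 and the Student ratio.
x = [a + random.gauss(0.0, sigma) for _ in range(n)]
x_bar = mean(x)
s2 = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)
t = math.sqrt(n) * (x_bar - a) / math.sqrt(s2)   # Student's t, n-1 d.o.f.

# Averaging s^2 over many repetitions illustrates that E s^2 = sigma^2.
trials = 20_000
avg_s2 = 0.0
for _ in range(trials):
    y = [a + random.gauss(0.0, sigma) for _ in range(n)]
    y_bar = mean(y)
    avg_s2 += sum((yi - y_bar) ** 2 for yi in y) / (n - 1)
avg_s2 /= trials

print(round(t, 3), round(avg_s2, 3))   # avg_s2 is close to sigma^2 = 0.64
```

Note the divisor *n* – 1 rather than *n*: dividing by *n* would give a biased (systematically understated) estimate of *σ*^{2}.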

On the same assumptions, the quantity (*n* – 1)*s*^{2}/*σ*^{2} has the chi-squared distribution with *n* – 1 degrees of freedom. This fact permits estimation of the error of the approximate equality *σ* ≈ *s*. It can be shown that the relative error ǀ*s* – *σ*ǀ/*s* will not exceed the number *q* with the probability

*ω* = *F*(*z*_{2}, *n* – 1) – *F*(*z*_{1}, *n* – 1)

where *F*(*z*, *n* – 1) is the distribution function of the chi-squared distribution with *n* – 1 degrees of freedom and

*z*_{1} = (*n* – 1)/(1 + *q*)^{2}, *z*_{2} = (*n* – 1)/(1 – *q*)^{2}
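The formula for *ω* rests on the equivalence of the events ǀ*s* – *σ*ǀ/*s* ≤ *q* and *z*_{1} ≤ (*n* – 1)*s*^{2}/*σ*^{2} ≤ *z*_{2}. The sketch below, with hypothetical values of *σ*, *n*, and *q*, verifies this equivalence sample by sample.

```python
import math
import random

random.seed(4)

# Hypothetical values: rms deviation sigma, sample size n, relative error q.
sigma, n, q = 1.5, 10, 0.2
z1 = (n - 1) / (1 + q) ** 2   # lower chi-squared bound
z2 = (n - 1) / (1 - q) ** 2   # upper chi-squared bound

agree = True
for _ in range(5_000):
    x = [random.gauss(0.0, sigma) for _ in range(n)]
    x_bar = sum(x) / n
    s2 = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)
    s = math.sqrt(s2)
    # |s - sigma|/s <= q  should hold exactly when
    # z1 <= (n-1) s^2 / sigma^2 <= z2
    left = abs(s - sigma) / s <= q
    right = z1 <= (n - 1) * s2 / sigma ** 2 <= z2
    agree = agree and (left == right)

print(agree)
```

Since the two events coincide, the probability *ω* can be read directly from the chi-squared distribution function, as in the formula above.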

### REFERENCES

Linnik, Iu. V. *Metod naimen’shikh kvadratov i osnovy matematiko-statisticheskoi teorii obrabotki nabliudenii*, 2nd ed. Moscow, 1962.

Bol’shev, L. N., and N. V. Smirnov. *Tablitsy matematicheskoi statistiki*, 2nd ed. Moscow, 1968.

L. N. BOL’SHEV