# Chi-Square Distribution


## chi-square distribution

[′kī ¦skwer dis·trə′byü·shən]

The probability distribution of the sum

χ^{2} = *X*_{1}^{2} + . . . + *X _{f}*^{2}

of the squares of the normally distributed random variables *X*_{1}, . . ., *X _{f}*, with zero mathematical expectation and unit variance, is known as a chi-square distribution with *f* degrees of freedom. The distribution function for a chi-square random variable is *F _{f}*(*x*) = 0 for *x* ≤ 0 and, for *x* > 0,

*F _{f}*(*x*) = [2^{f/2}Γ(*f*/2)]^{−1} ∫_{0}^{x} *t*^{f/2−1}*e*^{−t/2} *dt*
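A quick numerical sketch of this definition, assuming NumPy and SciPy are available: sums of squares of *f* independent standard normal variables should follow the chi-square distribution with *f* degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f = 5                                 # degrees of freedom
n = 200_000                          # number of simulated sums

# Each row is f independent N(0, 1) draws; summing the squares
# along each row gives one chi-square sample with f degrees of freedom.
chi2_samples = (rng.standard_normal((n, f)) ** 2).sum(axis=1)

# Compare the empirical CDF with F_f(x) at a few points.
for x in (1.0, 5.0, 10.0):
    empirical = (chi2_samples <= x).mean()
    exact = stats.chi2.cdf(x, df=f)
    assert abs(empirical - exact) < 0.01
```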

The first three moments (the mathematical expectation, the variance, and the third central moment) of χ^{2} are *f*, 2*f*, and 8*f*, respectively. The sum of two independent chi-square random variables with *f*_{1} and *f*_{2} degrees of freedom has a chi-square distribution with *f*_{1} + *f*_{2} degrees of freedom.
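Both the stated moments and the additivity property can be checked with SciPy (assumed available); the third central moment is recovered from the skewness, since skewness equals the third central moment divided by the variance to the power 3/2.

```python
import numpy as np
from scipy import stats

f = 7
mean, var, skew = stats.chi2.stats(f, moments="mvs")
assert mean == f            # first moment is f
assert var == 2 * f         # variance is 2f
third_central = skew * var ** 1.5
assert abs(third_central - 8 * f) < 1e-8   # third central moment is 8f

# Additivity: chi2(f1) + chi2(f2) behaves like chi2(f1 + f2);
# here we only check the mean of the simulated sum.
rng = np.random.default_rng(1)
f1, f2 = 3, 4
s = rng.chisquare(f1, 100_000) + rng.chisquare(f2, 100_000)
assert abs(s.mean() - (f1 + f2)) < 0.1
```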

Examples of chi-square distributions are the distributions of the squares of random variables that obey the Rayleigh and Maxwellian distributions. The Poisson distribution can be expressed in terms of a chi-square distribution with an even number of degrees of freedom: if *X* has a Poisson distribution with parameter λ, then

**P**{*X* ≤ *n*} = **P**{χ^{2} > 2λ}

where χ^{2} has *f* = 2(*n* + 1) degrees of freedom.
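This Poisson–chi-square identity, in the form **P**{*X* ≤ *n*} = **P**{χ^{2}_{2(n+1)} > 2λ}, can be verified numerically with SciPy (assumed available):

```python
from scipy import stats

lam, n = 4.5, 6

# Poisson CDF at n with parameter lam.
poisson_cdf = stats.poisson.cdf(n, lam)

# Upper tail of the chi-square distribution with 2(n+1) degrees
# of freedom, evaluated at 2*lam.
chi2_tail = stats.chi2.sf(2 * lam, df=2 * (n + 1))

assert abs(poisson_cdf - chi2_tail) < 1e-12
```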

If the number *f* of terms of the sum χ^{2} increases without bound, then, according to the central limit theorem, the distribution function of the standardized ratio (χ^{2} − *f*)/(2*f*)^{1/2} converges to the standard normal distribution:

**P**{(χ^{2} − *f*)/(2*f*)^{1/2} ≤ *x*} → Φ(*x*)

where

Φ(*x*) = (2π)^{−1/2} ∫_{−∞}^{x} *e*^{−t²/2} *dt*

A consequence of this fact is another limit relation, which is convenient for calculating *F _{f}*(*x*) when *f* has large values:

*F _{f}*(*x*) ≈ Φ((*x* − *f*)/(2*f*)^{1/2})
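The quality of the normal approximation for large *f* can be inspected directly with SciPy (assumed available); for *f* = 1000 the exact value *F _{f}*(*x*) and the approximation Φ((*x* − *f*)/(2*f*)^{1/2}) agree to about two decimal places.

```python
import math
from scipy import stats

f = 1000
for x in (900.0, 1000.0, 1100.0):
    exact = stats.chi2.cdf(x, df=f)
    # Normal approximation from the central limit theorem.
    approx = stats.norm.cdf((x - f) / math.sqrt(2 * f))
    assert abs(exact - approx) < 0.02
```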

In mathematical statistics, the chi-square distribution is used to construct interval estimates and statistical tests. Let *Y*_{1}, . . ., *Y _{n}* be random variables representing independent measurements of an unknown constant *a*. Suppose the measurement errors *Y _{i}* − *a* are independent and are distributed identically normally. We have

**E**(*Y _{i}* − *a*) = 0  **E**(*Y _{i}* − *a*)^{2} = σ^{2}

The statistical estimate of the unknown variance σ^{2} is then expressed by the equation

*s*^{2} = *S*^{2}/(*n* − 1)

where

*S*^{2} = Σ_{i=1}^{n} (*Y _{i}* − *Ȳ*)^{2}  *Ȳ* = (*Y*_{1} + . . . + *Y _{n}*)/*n*

The ratio *S*^{2}/σ^{2} obeys a chi-square distribution with *f* = *n* − 1 degrees of freedom. Let *x*_{1} and *x*_{2} be positive numbers that are solutions of the equations *F _{f}*(*x*_{1}) = α/2 and *F _{f}*(*x*_{2}) = 1 − α/2, where α is a specified number in the interval (0, 1/2). In this case

**P**{*x*_{1} < *S*^{2}/σ^{2} < *x*_{2}} = **P**{*S*^{2}/*x*_{2} < σ^{2} < *S*^{2}/*x*_{1}} = 1 – α

The interval (*S*^{2}/*x*_{2}, *S*^{2}/*x*_{1}) is called the confidence interval for σ^{2} with confidence coefficient 1 − α.
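The construction above can be sketched numerically, assuming NumPy and SciPy are available; the sample here and the true value of σ are invented for illustration. The quantiles *x*_{1} and *x*_{2} come from the inverse of the chi-square distribution function with *f* = *n* − 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma = 2.0                                      # true (unknown) sigma
n = 30
y = rng.normal(loc=5.0, scale=sigma, size=n)     # measurements Y_i

S2 = ((y - y.mean()) ** 2).sum()                 # S^2 = sum (Y_i - Ybar)^2
alpha = 0.05
f = n - 1
x1 = stats.chi2.ppf(alpha / 2, df=f)             # F_f(x1) = alpha/2
x2 = stats.chi2.ppf(1 - alpha / 2, df=f)         # F_f(x2) = 1 - alpha/2

# Confidence interval for sigma^2 with confidence coefficient 1 - alpha.
lower, upper = S2 / x2, S2 / x1
assert 0 < x1 < x2
assert lower < upper
```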

This method of constructing an interval estimate for σ^{2} is often used to test the hypothesis that σ^{2} = σ_{0}^{2}, where σ_{0}^{2} is a given number. Thus, if σ_{0}^{2} belongs to the confidence interval indicated, then one concludes that the measurements do not contradict the hypothesis σ^{2} = σ_{0}^{2}. If, however, σ_{0}^{2} ≤ *S*^{2}/*x*_{2} or σ_{0}^{2} ≥ *S*^{2}/*x*_{1}, then it must be assumed that σ^{2} > σ_{0}^{2} or σ^{2} < σ_{0}^{2}, respectively. This test corresponds to a significance level equal to α.

### REFERENCE

Cramér, H. *Matematicheskie metody statistiki*, 2nd ed. Moscow, 1975. (Translated from English.)

L. N. BOL’SHEV