regression analysis


regression analysis

[ri′gresh·ən ə‚nal·ə·səs]
(statistics)
The description of the nature of the relationship between two or more variables; it is concerned with the problem of describing or estimating the value of the dependent variable on the basis of one or more independent variables.

Regression Analysis

 

the branch of mathematical statistics that encompasses practical methods of studying a regression relation between variables on the basis of statistical data. The purposes of regression analysis include the determination of the general form of a regression equation, the construction of estimates of unknown parameters occurring in a regression equation, and the testing of statistical regression hypotheses.

When the relationship between two variables is studied on the basis of the observed values (x₁, y₁), …, (xₙ, yₙ) in accordance with regression theory, it is assumed that one of the variables, Y, has a certain probability distribution when the value of the other variable is fixed as x. This probability distribution is such that

E(Y|x) = g(x, β)

D(Y|x) = σ²h²(x)

where β denotes the set of unknown parameters that determine the function g(x), and h(x) is a known function of x—for example, it may have the constant value one. The choice of a regression model is determined by the assumptions regarding the form of the dependence of g(x, β) on x and β. The most natural model, from the standpoint of a unified method of estimating the unknown parameters β, is the regression model that is linear in β:

g(x, β) = β₀g₀(x) + … + βₖgₖ(x)
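As a minimal illustration in Python (assuming NumPy; the basis functions g₀(x) = 1, g₁(x) = x, g₂(x) = sin x and the data are invented for the sketch), a model linear in β can be fitted by least squares:

import numpy as np

def design_matrix(x, basis):
    # Stack the basis functions g_j(x) as columns of a matrix.
    return np.column_stack([g(x) for g in basis])

# Illustrative basis: g0(x) = 1, g1(x) = x, g2(x) = sin x.
basis = [lambda x: np.ones_like(x), lambda x: x, np.sin]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 50)
y = 1.0 + 0.5 * x + 2.0 * np.sin(x) + rng.normal(scale=0.3, size=x.size)

G = design_matrix(x, basis)
beta_hat, *_ = np.linalg.lstsq(G, y, rcond=None)  # least-squares estimate of β
print(beta_hat)  # approximately [1.0, 0.5, 2.0]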

Different assumptions may be made regarding the values of the variable x, depending on the nature of the observations and the aims of the analysis. In order to establish the relationship between the variables in an experiment, a model is used that is based on simplified but plausible assumptions. These assumptions are that the variable x is a controllable variable, whose value is assigned during the design of the experiment, and that the observed values of y are expressed in the form

yᵢ = g(xᵢ, β) + εᵢ,  i = 1, …, n

where the quantities εᵢ describe the errors. The errors are assumed to be independent across measurements and to be identically distributed with zero mean and constant variance σ². The case where x is an uncontrollable variable differs in that the observed values (x₁, y₁), …, (xₙ, yₙ) then constitute a sample from a bivariate population. In either case the regression analysis is carried out in the same way, but the interpretation of the results differs substantially. If both variables are random, the relationship between them is studied by the methods of correlation analysis.
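A short simulation sketch of the controllable-x model (the design points, the true β, and the error variance are illustrative choices): under the stated assumptions on the errors εᵢ, the least-squares estimates scatter around the true parameters.

import numpy as np

rng = np.random.default_rng(4)
x = np.repeat([1.0, 2.0, 3.0, 4.0], 5)        # fixed design, assigned in advance
X = np.column_stack([np.ones_like(x), x])
beta_true = np.array([2.0, 0.5])

estimates = []
for _ in range(1000):
    eps = rng.normal(scale=0.3, size=x.size)  # iid errors, zero mean, variance σ²
    y = X @ beta_true + eps                   # y_i = g(x_i, β) + ε_i
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    estimates.append(b)
print(np.mean(estimates, axis=0))             # averages to roughly [2.0, 0.5]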

A preliminary idea of the nature of the relation between g(x) and x can be obtained by plotting the points (xᵢ, ȳ(xᵢ)) in a scatter diagram, which is also called a correlation field when both variables are random. Here the ȳ(xᵢ) are the arithmetic means of the values of y that correspond to a fixed value xᵢ. For example, if the points fall near a straight line, a linear regression can be used as the approximation.
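A small sketch of this preliminary step (the data are invented): for each fixed xᵢ with repeated measurements, compute ȳ(xᵢ) and inspect the points (xᵢ, ȳ(xᵢ)).

import numpy as np

x = np.array([1, 1, 1, 2, 2, 3, 3, 3, 4, 4])
y = np.array([2.1, 1.9, 2.3, 3.8, 4.1, 6.2, 5.9, 6.1, 8.0, 7.9])

for xi in np.unique(x):
    print(xi, y[x == xi].mean())   # the points (x_i, ȳ(x_i)) of the scatter summary
# The means grow roughly linearly in x, suggesting a linear regression.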

The standard method of estimating the regression line is based on the polynomial model (m ≥ 1):

y(x, β) = β₀ + β₁x + … + βₘxᵐ

One reason for the choice of this model is that every function continuous over some interval can be approximated by a polynomial to any desired degree of accuracy. The unknown regression coefficients β₀, …, βₘ and the unknown variance σ² are estimated by the method of least squares. The estimates β̂₀, …, β̂ₘ of the parameters β₀, …, βₘ obtained by this method are called the sample regression coefficients, and the equation

ŷ(x) = β̂₀ + β̂₁x + … + β̂ₘxᵐ

defines what is called the sample regression line. If the observed values are assumed to be normally distributed, this method leads to estimates of β₀, …, βₘ and of σ² that coincide with the estimates obtained by the maximum likelihood method. The estimates obtained by the least squares method are in some sense best estimates even when the distribution is not normal. Thus, if a linear regression g(x) = β₀ + β₁x is to be fitted, the method of least squares gives

β̂₁ = Σ(xᵢ − x̄)(yᵢ − ȳ)/Σ(xᵢ − x̄)²,  β̂₀ = ȳ − β̂₁x̄

where x̄ and ȳ are the arithmetic means of the xᵢ and yᵢ. The estimate ĝ(x) = β̂₀ + β̂₁x is an unbiased estimate of g(x); its variance is less than the variance of any other linear estimate. Under the assumption that the yᵢ have a normal distribution, the most effective methods are available for checking the accuracy of the constructed sample regression equation and for testing hypotheses on the parameters of the regression model. In this case, the construction of the confidence intervals for the true regression coefficients β₀, …, βₘ and the testing of the hypothesis that no regression relationship exists (βᵢ = 0, i = 1, …, m) are carried out by means of Student's distribution.
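A sketch of the simple linear case (assuming NumPy and SciPy; the data are simulated): the least-squares estimates above, followed by a Student-t confidence interval for β₁ and a test of the no-regression hypothesis β₁ = 0.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
x = np.linspace(0.0, 10.0, n)
y = 1.0 + 0.8 * x + rng.normal(scale=1.0, size=n)

xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)
b1 = np.sum((x - xbar) * (y - ybar)) / sxx   # β̂1 = Σ(xᵢ−x̄)(yᵢ−ȳ)/Σ(xᵢ−x̄)²
b0 = ybar - b1 * xbar                        # β̂0 = ȳ − β̂1 x̄

resid = y - (b0 + b1 * x)
s2 = np.sum(resid ** 2) / (n - 2)            # unbiased estimate of σ²
se_b1 = np.sqrt(s2 / sxx)                    # standard error of β̂1

tcrit = stats.t.ppf(0.975, df=n - 2)
ci = (b1 - tcrit * se_b1, b1 + tcrit * se_b1)   # 95% confidence interval for β1
t_stat = b1 / se_b1                              # tests the hypothesis β1 = 0
p_val = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(b0, b1, ci, p_val)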

In a more general situation, the observed values y1,…,yn are regarded as values of independent random variables with identical variances and the mathematical expectations

Eyᵢ = β₁x₁ᵢ + … + βₖxₖᵢ,  i = 1, …, n

where the values of the xⱼᵢ, j = 1, …, k, are assumed known. This form of linear regression model is general in the sense that higher-order models in the variables x₁, …, xₖ reduce to it. Moreover, certain models that are nonlinear in β can also be reduced to this linear form by a suitable transformation.
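The following sketch (illustrative data and parameters) shows both reductions: a polynomial model written in the general linear form through its design matrix, and a model nonlinear in β, y = a·exp(bt), made linear by taking logarithms.

import numpy as np

rng = np.random.default_rng(3)
n = 50
t = np.linspace(0.1, 5.0, n)

# (1) A higher-order (polynomial) model reduces to the linear form by taking
#     x_1 = 1, x_2 = t, x_3 = t² as the known columns of the design matrix.
X_poly = np.column_stack([np.ones(n), t, t ** 2])

# (2) A model nonlinear in β, y = a·exp(b·t), reduces to the linear form
#     log y = log a + b·t after a logarithmic transformation.
y = 2.0 * np.exp(0.7 * t) * np.exp(rng.normal(scale=0.05, size=n))
X_log = np.column_stack([np.ones(n), t])
beta_hat = np.linalg.solve(X_log.T @ X_log, X_log.T @ np.log(y))  # (XᵀX)⁻¹Xᵀ·log y
print(np.exp(beta_hat[0]), beta_hat[1])   # approximately 2.0 and 0.7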

Regression analysis is one of the most widespread methods of processing the results of observations made during the study of relationships in such fields as physics, biology, economics, and engineering. Such branches of mathematical statistics as analysis of variance and the design of experiments are also based on regression analysis. Regression analysis models are widely used in multivariate statistical analysis.


A. V. PROKHOROV

regression analysis

In statistics, a mathematical method of modeling the relationship between two or more variables. It is used to predict the value of one variable given the values of the others. For example, a model might estimate sales based on age and gender. A regression analysis yields an equation that expresses the relationship. See correlation.
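A minimal sketch of such a model (the figures and the 0/1 coding of gender are invented for illustration):

import numpy as np

age    = np.array([25, 32, 47, 51, 23, 38, 44, 29])
gender = np.array([0, 1, 0, 1, 1, 0, 1, 0])        # hypothetical 0/1 coding
sales  = np.array([210, 340, 460, 520, 190, 380, 450, 260])

# Columns: intercept, age, gender.
X = np.column_stack([np.ones(age.size), age, gender])
beta_hat, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Predict sales for a 40-year-old with gender code 1:
print(np.array([1.0, 40.0, 1.0]) @ beta_hat)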