Estimator



estimator

[′es·tə‚mād·ər]
(statistics)
A random variable or a function of it used to estimate population parameters.

Estimator

 

in statistics, a function of the results of observations that is used to estimate an unknown parameter of the probability distribution of random variables that are under study. In English, a distinction is sometimes, but not always, made between the terms “estimator” and “estimate”: an estimate is the numerical value of the estimator for a particular sample.

Suppose, for example, that X1, . . . , Xn are independent random variables having the same normal distribution with the unknown mean a. Possible point estimators of a are the arithmetic mean of the observation results

X̄ = (X1 + . . . + Xn)/n

and the sample median μ = μ(X1, . . . , Xn).
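As an illustrative sketch (not part of the original article), both point estimators can be computed from a simulated sample; the parameter values below are assumed for illustration only:

```python
import random
import statistics

# Draw n observations from a normal distribution whose mean a is
# "unknown" to the analyst, then compute the two point estimates
# discussed above. All numeric values here are assumed.
random.seed(0)
a, sigma, n = 5.0, 2.0, 1000
sample = [random.gauss(a, sigma) for _ in range(n)]

x_bar = sum(sample) / n              # arithmetic mean X-bar
mu_hat = statistics.median(sample)   # sample median mu

print(x_bar, mu_hat)                 # both lie close to a = 5.0
```

Both estimates converge on the true parameter, but, as discussed below, not at the same asymptotic rate.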

In choosing an estimator of a parameter θ, it is natural to select a function θ*(X1, . . . , Xn) of the observation results X1, . . . , Xn that is in some sense close to the true value of the parameter. By adopting some measure of the closeness of an estimator to the parameter being estimated, different estimators can be compared with respect to quality. A commonly used measure of closeness is the magnitude of the mean squared error

Eθ(θ* − θ)² = Dθθ* + (θ − Eθθ*)²

which is expressed here in terms of the mathematical expectation Eθθ* and variance Dθθ* of the estimator.
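The decomposition of the mean squared error into variance plus squared bias can be checked numerically. The sketch below (an illustration, not from the article) uses a deliberately biased estimator θ* = X̄ + 0.3; all numeric values are assumed:

```python
import random

# Monte Carlo check of E(theta* - theta)^2 = D(theta*) + (theta - E theta*)^2
# for the (hypothetical) biased estimator theta* = X-bar + 0.3.
random.seed(1)
theta, sigma, n, reps = 2.0, 1.0, 25, 20000

estimates = []
for _ in range(reps):
    sample = [random.gauss(theta, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n + 0.3)   # biased estimator

mean_est = sum(estimates) / reps
mse = sum((t - theta) ** 2 for t in estimates) / reps
var = sum((t - mean_est) ** 2 for t in estimates) / reps
bias_sq = (mean_est - theta) ** 2

print(mse, var + bias_sq)   # the two sides of the identity agree
```

The identity holds exactly for the empirical moments, so the two printed numbers coincide up to floating-point rounding.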

The estimator θ* is said to be unbiased if Eθθ* = θ. In the class of all unbiased estimators, the best estimators from the standpoint of mean squared error are those that have for a given n the minimum possible variance for all θ. The estimator X̄ defined above for the parameter a of a normal distribution is the best unbiased estimator, since the variance of any other unbiased estimator a* of a satisfies the inequality Daa* ≥ DaX̄ = σ²/n, where σ² is the variance of the normal distribution. If a minimum-variance unbiased estimator exists, an unbiased best estimator can also be found in the class of functions that depend only on a sufficient statistic.

In constructing estimators for large n, it is natural to require that as n → ∞, the probability of deviations of θ* from the true value of θ exceeding any given number approach zero. Estimators with this property are said to be consistent. Unbiased estimators whose variance approaches zero as n → ∞ are consistent. Because the rate at which the limit is approached plays an important role here, an asymptotic comparison of two estimators is made by considering the ratio of their asymptotic variances. In the example given above, the arithmetic mean X̄ is the best, and consequently the asymptotically best, estimator for the parameter a, whereas the sample median μ, although an unbiased estimator, is not asymptotically best, since

Daμ/DaX̄ → π/2 > 1 as n → ∞

Nonetheless, the use of μ sometimes has advantages. If, for example, the true distribution is not exactly normal, the variance of X̄ may increase sharply while the variance of μ remains almost the same—that is, μ has the property known as robustness.
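This robustness can be seen in a small simulation. The contamination model below is an assumed illustration (not from the article): most observations are standard normal, but a small fraction come from a much wider distribution:

```python
import random
import statistics

# Assumed contamination model: 95% of observations are N(0, 1),
# 5% are N(0, 10). Compare the sampling variance of the mean and
# of the median over repeated samples.
random.seed(2)
n, reps = 200, 2000

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, 10.0 if random.random() < 0.05 else 1.0)
              for _ in range(n)]
    means.append(sum(sample) / n)
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(var_mean, var_median)   # the mean's variance inflates far more
```

Under exact normality the mean would have the smaller variance; a few percent of outliers is enough to reverse the ranking, which is the robustness property described above.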

A widely used general method of obtaining estimators is the method of moments. In this technique, a certain number of sample moments are equated to the corresponding moments of the theoretical distribution, which are functions of the unknown parameters, and the equations obtained are solved for these parameters. The method of moments is convenient to use, but the estimators produced by it are not in general asymptotically best estimators. From the theoretical point of view, the maximum likelihood method is more important. It yields estimators that, under certain general conditions, are asymptotically best. The method of least squares is a special case of the maximum likelihood method.
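The method of moments can be contrasted with maximum likelihood on a standard textbook example (an illustration assumed here, not taken from the article): the uniform distribution on (0, θ). The first theoretical moment is E X = θ/2, so equating it to the sample mean gives the moment estimator θ̂ = 2X̄, while the maximum likelihood estimator is the sample maximum:

```python
import random

# Assumed example: estimating theta for the uniform distribution on (0, theta).
random.seed(3)
theta, n = 4.0, 500
sample = [random.uniform(0.0, theta) for _ in range(n)]

theta_mom = 2 * sum(sample) / n   # method of moments: solve E X = theta/2 = X-bar
theta_mle = max(sample)           # maximum likelihood: the largest observation

print(theta_mom, theta_mle)       # both lie close to theta = 4.0
```

Both estimators are consistent, but the maximum likelihood estimator converges at a faster rate in this example, illustrating why moment estimators are not in general asymptotically best.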

An important supplement to the use of estimators is provided by the estimation of confidence intervals.
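As a sketch of this supplement (an illustration with assumed values, not from the article), an approximate 95 percent confidence interval for the mean a can be formed as X̄ ± 1.96 s/√n, using the normal approximation:

```python
import random
import statistics

# Assumed values; the interval X-bar +/- 1.96 * s / sqrt(n) is the usual
# large-sample normal-approximation confidence interval for the mean.
random.seed(4)
a, sigma, n = 5.0, 2.0, 400
sample = [random.gauss(a, sigma) for _ in range(n)]

x_bar = sum(sample) / n
s = statistics.stdev(sample)
half_width = 1.96 * s / n ** 0.5
print(x_bar - half_width, x_bar + half_width)
```

Unlike a point estimator, the interval conveys the uncertainty of the estimate: in repeated sampling, intervals constructed this way cover the true parameter about 95 percent of the time.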

REFERENCES

Kendall, M. G., and A. Stuart. The Advanced Theory of Statistics, vol. 2: Inference and Relationship. (Russian translation: Statisticheskie vyvody i sviazi. Moscow, 1973.)
Cramér, H. Mathematical Methods of Statistics. (Russian translation: Matematicheskie metody statistiki, 2nd ed. Moscow, 1975.)

A. V. PROKHOROV

estimator

A person who, by experience and training, is capable of estimating the probable cost of a building or portion thereof.