Chebyshev's inequality




[′cheb·ə·shəfs ‚in·i′kwäl·əd·ē]
(statistics)
Given a nonnegative random variable ƒ(x) and a constant k > 0, the probability that ƒ(x) ≥ k is less than or equal to the expected value of ƒ divided by k.
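This general (Markov-type) form can be checked empirically. The sketch below is illustrative, not part of the entry: the random variable ƒ is chosen arbitrarily as the square of a uniform draw on [0, 2], whose true mean is 4/3.

```python
import random

random.seed(0)

# Empirical check of the stated form: for a nonnegative random
# variable f and any k > 0, P(f >= k) <= E[f] / k.
# The choice of f is illustrative: the square of a uniform draw
# on [0, 2], whose true mean is 4/3.
samples = [random.uniform(0.0, 2.0) ** 2 for _ in range(100_000)]

k = 1.5
mean_f = sum(samples) / len(samples)
tail_prob = sum(s >= k for s in samples) / len(samples)
bound = mean_f / k

print(f"P(f >= {k}) ~ {tail_prob:.4f} <= E[f]/k ~ {bound:.4f}")
```

The empirical tail probability stays below the bound, as the inequality guarantees for any nonnegative random variable.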
References in periodicals archive
To mitigate this limitation to some extent, a method to estimate the upper bound of the cumulative probability is proposed by using Chebyshev's inequality. With this estimated upper bound, we can conclude that although the cumulative probability estimated by the extrapolation method carries some uncertainty, it will not exceed the estimated upper bound.
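A minimal sketch of this idea (the function name and numbers are illustrative, not from the cited paper): knowing only the mean and variance of X, the two-sided Chebyshev inequality bounds the probability of exceeding a threshold above the mean.

```python
def chebyshev_tail_bound(mu: float, sigma: float, t: float) -> float:
    """Distribution-free upper bound on P(X >= t) for a threshold t > mu.

    From the two-sided Chebyshev inequality P(|X - mu| >= c*sigma) <= 1/c**2,
    we get P(X >= t) <= sigma**2 / (t - mu)**2, capped at 1.
    """
    if t <= mu:
        raise ValueError("bound is only informative for thresholds above the mean")
    c = (t - mu) / sigma
    return min(1.0, 1.0 / c**2)

# Illustrative numbers: mean 10, standard deviation 2; the threshold 16
# lies 3 standard deviations above the mean, so the bound is 1/9.
print(chebyshev_tail_bound(10.0, 2.0, 16.0))
```

Whatever the true distribution, the exceedance probability cannot be larger than this value, which is the sense in which the extrapolated estimate "will not exceed the estimated upper bound."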
According to Chebyshev's inequality, for a random variable with finite mean value μ and finite variance σ², the following inequality can be established:
Chebyshev's inequality indicates that no matter what the distribution function is, as long as the mean and the variance are known, an upper bound for the cumulative probability can be given by Equation 9.
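The distribution-free character of the bound can be illustrated numerically (a sketch, not Equation 9 from the cited work): samples from three different distributions, each standardized to the same mean and variance, all respect the same Chebyshev bound.

```python
import math
import random

random.seed(1)
n = 200_000
mu, sigma, k = 0.0, 1.0, 2.0

# Distribution-free check: three different distributions, each
# standardized to mean 0 and variance 1, all satisfy
# P(|X - mu| >= k*sigma) <= 1/k**2 = 0.25 for k = 2.
samplers = {
    "normal":      lambda: random.gauss(0.0, 1.0),
    "uniform":     lambda: random.uniform(-math.sqrt(3), math.sqrt(3)),
    "exponential": lambda: random.expovariate(1.0) - 1.0,
}

tail_probs = {}
for name, draw in samplers.items():
    xs = [draw() for _ in range(n)]
    tail_probs[name] = sum(abs(x - mu) >= k * sigma for x in xs) / n
    print(f"{name:11s} P(|X - mu| >= 2*sigma) ~ {tail_probs[name]:.4f} (bound 0.25)")
```

The actual tail probabilities differ widely between the three distributions, but none comes close to violating the bound; Chebyshev trades sharpness for complete generality.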
Chebyshev's inequality: Let random variable X have a distribution function with
To prove the law of large numbers, use Chebyshev's inequality. First, rewrite (4) as follows: (5a) [Mathematical Expression Omitted] where μ = Np is the mean of the binomial distribution of Y.
The proof is straightforward using Chebyshev's inequality. This version of the law of large numbers is often used in defining insurer's risk (see below).
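The Chebyshev step in this proof can be sketched numerically (the helper below is hypothetical; p and ε are illustrative): for Y ~ Binomial(N, p), the mean is μ = Np and the variance is Np(1 − p), so Chebyshev's inequality bounds the deviation of the sample frequency Y/N from p, and the bound vanishes as N grows.

```python
# Sketch of the Chebyshev step in the law-of-large-numbers proof:
# for Y ~ Binomial(N, p), the mean is mu = N*p and the variance is
# N*p*(1 - p), so Chebyshev's inequality gives
#   P(|Y/N - p| >= eps) <= p*(1 - p) / (N * eps**2),
# which tends to 0 as N grows.
def lln_bound(N: int, p: float, eps: float) -> float:
    return min(1.0, p * (1.0 - p) / (N * eps**2))

for N in (100, 10_000, 1_000_000):
    print(N, lln_bound(N, p=0.5, eps=0.01))
```

This shrinking bound is exactly what makes pooling many independent policies attractive when defining insurer's risk: the relative frequency of claims concentrates around its expectation.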
Berry's improvement on Peddada's sufficient condition was derived using Chebyshev's inequality. Our improvement on Berry's result has been obtained via a tighter probability inequality.
Under the assumption of unimodality, Chebyshev's inequality may be sharpened (though not uniformly).
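One standard sharpening of this kind, not named in the excerpt, is the Vysochanskij–Petunin inequality: for unimodal X and c > √(8/3) ≈ 1.633, P(|X − μ| ≥ cσ) ≤ 4/(9c²), a factor-4/9 improvement over Chebyshev's 1/c² that holds only above that threshold on c, which is why the sharpening is not uniform. A small comparison:

```python
# Comparing Chebyshev's bound 1/c**2 with the unimodal
# (Vysochanskij-Petunin) bound 4/(9*c**2), valid for c > sqrt(8/3).
bounds = {c: (1.0 / c**2, 4.0 / (9.0 * c**2)) for c in (2.0, 3.0, 4.0)}
for c, (chebyshev, unimodal) in bounds.items():
    print(f"c = {c}: Chebyshev {chebyshev:.4f}, unimodal {unimodal:.4f}")
```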