reliability
reliability: the dependability of data collected, or of the test or measurement used to collect it. A reliable measure is one that gives the same results if the same individuals are measured on more than one occasion.
Reliability describes consistency and this is commonly calculated by a CORRELATION COEFFICIENT. This may be done when social survey data is collected from two samples taken from the same population at the same time, or when the same test is completed by the same people on two different occasions (test-retest reliability), or when two different forms of a test are used (alternate form reliability), or when the similarity between the two halves of a test is calculated (split-half reliability). Compare VALIDITY.
the capacity of a product to maintain the values of predetermined functional parameters within definite limits that correspond to prescribed modes and conditions of use, service, storage, and shipment. Reliability is an integrated characteristic that may include dependability, service life, maintainability, and storage life separately or in particular combinations of characteristics, both for the product as a whole and for its parts, depending on the function and operating conditions.
The main concept used in reliability theory is that of failure, or loss of serviceability, which takes place either abruptly or gradually. A product is serviceable when it meets all the requirements imposed on its basic parameters, among which are speed of response, load characteristics, stability, and accuracy of performance of production operations. Together with other factors, such as weight, size, and convenience of use, the requirements constitute a set of quality indicators for the product. The indicators may vary with time. A change in the indicators that exceeds permissible limits brings about failure (partial or complete failure of the product). The reliability indicators cannot be contrasted with other quality indicators: unless reliability is taken into consideration, all the other quality indicators of a product lose their significance, just as reliability indicators become valuable as quality indicators only in combination with other characteristics of a product.
The concept of “product reliability” has long been used in engineering practice. Any technical equipment—machines, instruments, or attachments—has always been manufactured to last for a certain period of use that is adequate for practical purposes. However, for many years reliability was not measured quantitatively, so that its objective evaluation was very difficult. Such concepts as high reliability, low reliability, and other qualitative definitions were used to estimate it. The establishment of quantitative reliability factors and methods of measuring and calculating them marked the beginning of the scientific methods for the study of reliability. In the first stages of development of reliability theory, attention was concentrated on the collection and processing of statistical data on product failures; evaluations of reliability were then chiefly statistical statements about the degree of reliability based on these data. The development of reliability theory was accompanied by an improvement in the probabilistic methods of study: determination of the laws of distribution of mean cycles between failures and development of methods for calculating and testing products, taking into account the random nature of failure. At the same time, research was begun in new directions: a search for completely new methods of improving reliability, the prediction of failure and reliability, the analysis of physicochemical processes that affect reliability, and the establishment of quantitative relationships among the characteristics of the processes and reliability indicators. Research was also begun on the improvement of methods of calculating reliability for products of increasingly complex structure, taking into account the increasingly large number of factors acting on them (the authenticity of the original data, inspection and preventive treatment, operating conditions and maintenance, and so on).
Reliability testing was refined chiefly in the direction of accelerated and nondestructive testing procedures. In addition to the improvement of full-scale tests, the use of mathematical simulation and a combination of the two became widespread. As a result, by the 1950’s the principles of a general theory of reliability and of its special branches for individual types of technology had been formulated.
The main factors that have determined the most important trends in the study of reliability are the increasing complexity of technical devices, the greater criticality of the functions they perform, the more stringent requirements for the quality and operating conditions of products, and the increased role of automation (thereby reducing the feasibility of continuous monitoring of the condition of the apparatus). Technical equipment and its operating conditions are becoming increasingly complicated. Some types of apparatus may have hundreds of thousands of components. If special measures are not taken to ensure reliability, then any modern complex device will be virtually unusable. For example, in a modern electronic digital computer of average productivity, about 5 million changes of state take place per second as a result of switching its binary elements, which number in the tens of thousands. After the five hours of continuous operation that are required to solve a typical problem, more than 10¹²–10¹⁴ changes of state occur in the machine. In this case the probability of at least one failure becomes fairly large, and therefore special measures are necessary to ensure the computer’s serviceability.
Increasingly critical functions are assigned to technical apparatus in production and control. A failure can frequently have catastrophic consequences. Reliability has become a very important problem in the era of the scientific and technical revolution.
Quantitative reliability indicators. Product reliability is determined by a set of factors; recommendations on the selection of reliability indicators exist for every type of product. Among the factors that are used to evaluate the reliability of products that can be in two possible states (serviceable or failed) are the average operating time T_av to failure (or mean cycles to failure), the mean time T between failures, the failure density λ(t), the failure rate ω(t), the mean time T_r for restoration to a serviceable state, the probability P(t) of failure-free operation during a time t, and the operational readiness T_o.
The distribution law of the mean cycles between failures determines the quantitative reliability factors of nonrecoverable products. The distribution law is written in differential form, as the probability density f(t), or in integral form F(t). The following relationships exist between the reliability indicators and the distribution law:

P(t) = 1 − F(t),   λ(t) = f(t)/P(t),   T_av = ∫₀^∞ P(t) dt
For recoverable products the probability of n failures during a time t in the case of the simplest flow of failures is governed by the Poisson distribution:

P_n(t) = (λt)ⁿ e^(−λt) / n!
From this it follows that the probability of no failures during time t is P(t) = e^(−λt) (the exponential law of reliability).
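The Poisson failure flow and the exponential law above can be sketched numerically. This is a minimal illustration in Python (the function names are my own, not from the source); note that setting n = 0 in the Poisson formula recovers the exponential law.

```python
import math

def poisson_failure_prob(n: int, rate: float, t: float) -> float:
    """Probability of exactly n failures in time t for the simplest
    (Poisson) flow of failures with constant failure rate `rate` (lambda):
    P_n(t) = (lambda*t)^n * e^(-lambda*t) / n!"""
    lt = rate * t
    return lt**n * math.exp(-lt) / math.factorial(n)

def reliability_exponential(rate: float, t: float) -> float:
    """Exponential law of reliability: probability of failure-free
    operation during time t, P(t) = e^(-lambda*t)."""
    return math.exp(-rate * t)

# Example: failure rate 0.001 per hour, mission time 100 hours
lam, t = 0.001, 100.0
print(reliability_exponential(lam, t))   # e^(-0.1) ≈ 0.9048
print(poisson_failure_prob(0, lam, t))   # same value: the n = 0 case
```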
In reliability theory, technical systems composed of structurally independent subassemblies that are capable of structural reorganization to maintain serviceability in case of failure of individual parts are usually called complex technical systems (in contrast to complex cybernetic systems, which are also called major systems). The number of serviceable states of such systems is two or more. Each serviceable state is characterized by its operating efficiency, which can be measured by the productivity, the probability of accomplishing a given task, and so on. The total probability of serviceability of a system (the sum of the probabilities of all the serviceable states of the system) can be used as the reliability indicator for a complex system.
Methods of determining quantitative reliability indicators. Reliability indicators are found from calculations, by making tests and processing the results (statistical data) of products in service, by simulation on an electronic computer, and by analysis of the physicochemical processes that determine product reliability. Reliability calculations are based on the fact that, for a given product design and distribution law for mean cycles between failures, there is a fully defined relationship between the reliability indicators of the individual components and the reliability of the product as a whole. The steps taken to establish such relationships include solution of the equation composed on the basis of a reliability block diagram (using series-parallel structures) or on the basis of the logical connections among the states of a product (using the algebra of logic), solution of the differential equations describing the process of transition of the product from one state to another (using state diagrams), and compilation of the functions that describe the state of the complex product. Reliability calculations are usually made at the design stage to predict the reliability for a given version of a product. This makes it possible to choose the most suitable version of a design and methods of ensuring reliability by revealing the weak points and by prescribing well-founded operating conditions and the mode and sequence of product maintenance.
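The reliability block diagrams mentioned above reduce, for series-parallel structures, to two elementary rules: a series structure works only if every block works, and a parallel (redundant) structure fails only if every block fails. A short sketch in Python (the helper names are illustrative, not from the source):

```python
def series(*probs: float) -> float:
    """Series structure: the system works only if all blocks work,
    so P_system is the product of the block reliabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def parallel(*probs: float) -> float:
    """Parallel (redundant) structure: the system fails only if all
    blocks fail, so P_system = 1 - product of the failure probabilities."""
    q = 1.0
    for x in probs:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical example: two redundant pumps (reliability 0.9 each)
# in series with a controller (reliability 0.99)
system = series(parallel(0.9, 0.9), 0.99)
print(round(system, 4))  # 0.9801
```

Such calculations are exactly what the design-stage prediction described above performs: they reveal which blocks are the weak points and show how much redundancy improves the system figure.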
Reliability tests are made at the stages of development of an experimental model and of serial production of an article. A distinction is made among (1) definitive reliability tests, from which the reliability indicators are determined; (2) monitoring tests, for checking the quality of a production process, which ensure (with a certain risk) a reliability not lower than the specified reliability; (3) accelerated tests, in which factors that accelerate the onset of failures are used; and (4) nondestructive tests, which use methods of flaw detection and introscopy, as well as indirect criteria, such as noise and thermal radiation, that accompany the onset of failure.
Simulation on an electronic computer is the most efficient means of analyzing the reliability of complex systems. Two simulation algorithms are widespread: the first is based on a simulation of the physical processes taking place in the object under study (in this case the evaluation of reliability depends on the number of times that the parameters of the object exceed permissible limits), and the second is based on the solution of systems of equations that describe the states of the object under study.
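The first simulation algorithm (drawing random realizations of the physical process and counting how often the object stays within limits) can be sketched as a Monte Carlo estimate. The scenario below is an assumed one (a series system of components with constant failure rates); for that case the estimate can be checked against the closed-form exponential law.

```python
import math
import random

def simulate_mission(rates, mission_time, trials=100_000, seed=1):
    """Monte Carlo estimate of series-system reliability: draw an
    exponential lifetime for each component and count the trials in
    which every component outlives the mission."""
    rng = random.Random(seed)
    ok = sum(
        all(rng.expovariate(r) > mission_time for r in rates)
        for _ in range(trials)
    )
    return ok / trials

# Three hypothetical components, failure rates per hour, 100-hour mission
rates = [0.001, 0.002, 0.0005]
estimate = simulate_mission(rates, 100.0)
exact = math.exp(-sum(rates) * 100.0)  # closed form for the exponential case
print(estimate, exact)
```

For complex structures with no closed form, the same loop still works; only the serviceability test inside it changes, which is why simulation scales to systems that defeat analytic calculation.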
The reliability of a product can also be evaluated by analyzing physicochemical processes, because it is often possible to establish the dependence of reliability on the condition and nature of the progress of the physicochemical processes (the correlation of the strength and load factors, wear resistance, the presence of impurities in materials, the variation in electrical and magnetic characteristics, noise effects, and so on). The analysis of physicochemical processes is most often used to evaluate the reliability of components of electronic apparatus.
Methods of improving reliability. DEVELOPMENTAL STAGE. Steps taken to improve the reliability of products include the use of new materials and components, which have better physicochemical properties and higher reliability, respectively, compared to those used previously; fundamentally new design treatments, such as the replacement of electron tubes by semiconductor devices and then by integrated circuits; apparatus (component), time, and information redundancy; development of anti-interference programs and anti-interference coding of information; choice of the optimum modes of operation and the most effective protection from adverse internal and external influences; and use of effective inspection, which makes it possible not only to ascertain the technical condition of a product (simple inspection) and to establish the causes for failure (diagnostic inspection) but also to predict the future condition of the product in order to prevent failure (predictive inspection).
DURING PRODUCTION. To increase reliability during the production process, use is made of advanced technology for processing materials and methods of assembling parts. Effective methods (including automatic and statistical methods) are used for monitoring the quality of the technological operations and of the product, and efficient procedures are developed for aging products to reveal latent production flaws. Reliability tests are performed that preclude the acceptance of unreliable products.
DURING USE. During use, predetermined conditions and modes of operation are provided; preventive work is performed, and spare parts, subassemblies, and components, as well as tools and materials, are provided. Diagnostic inspections are also made to prevent failure.
As technology develops, the problem of ensuring reliability takes on new aspects. For example, the introduction of large integrated circuits requires fundamentally new methods of calculating their reliability, and the use of automatic inspection systems leads to the necessity of taking into account the effect of automatic inspection on the reliability indicator. The science of reliability has emerged at the junction of a number of scientific disciplines—the theory of probability and random processes, mathematical logic, thermodynamics, and technical diagnostics —whose development is interrelated and is reflected in the development of reliability theory.
The main trend in the development of the science of reliability is governed by the overall trend of technical development in various sectors of the national economy and by the problems of the economic plans of the country. Among the most urgent problems of reliability theory are the evaluation and assurance of reliability in complex cybernetic systems. The problem of reliability is an “eternal” problem, since it appears in a new formulation at every new stage of development in technology.
REFERENCES
Shor, Ia. B. Statisticheskie metody analiza i kontrolia kachestva i nadezhnosti. Moscow, 1962.
Berg, A. I. Kibernetika i nadezhnost’. Moscow, 1964.
Gnedenko, B. V., Iu. K. Beliaev, and A. D. Solov’ev. Matematicheskie metody v teorii nadezhnosti. Moscow, 1965.
Sotskov, B. S. Osnovy teorii i rascheta nadezhnosti elementov i ustroistv avtomatiki i vychislitel’noi tekhniki. Moscow, 1970.
Bruevich, N. G. “Kolichestvennye otsenki nadezhnosti izdelii.” In the collection Osnovnye voprosy teorii i praktiki nadezhnosti. Moscow, 1971.
Lloyd, D., and M. Lipow. Nadezhnost’. Moscow, 1964. (Translated from English.)
Bazovsky, I. Nadezhnost’: Teoriia i praktika. Moscow, 1965. (Translated from English.)
Barlow, R., and F. Proschan. Matematicheskaia teoriia nadezhnosti. Moscow, 1969. (Translated from English.)
N. G. BRUEVICH and T. A. GOLINKEVICH
reliability: the degree to which a system can be trusted to do what it is expected or designed to do. Reliability metrics include the following averages:
POFOD (probability of failure on demand)
The likelihood that the system will fail when a user requests service. A biometric authentication device that fails to correctly identify or reject users an average of once out of a hundred times has a POFOD of 1%.
ROCOF (rate of occurrence of failure)
The number of unexpected events over a particular time of operation. A firewall that crashes an average of five times every 1,000 hours has a ROCOF of 5 per 1,000 hours.
MTTF (mean time to failure)
The average time between unexpected events. If an IDS fails on average every 300 hours, its MTTF is 300 hours.
AVAIL (availability or uptime)
The percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of 100 hours of operation, its AVAIL is 96%.
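The four metrics above are simple ratios and averages, which a short Python sketch makes concrete (the function names are my own; the inputs reproduce the worked examples given in the definitions):

```python
def pofod(failures: int, demands: int) -> float:
    """Probability of failure on demand: failures per service request."""
    return failures / demands

def rocof(failures: int, hours: float) -> float:
    """Rate of occurrence of failure: unexpected events per hour."""
    return failures / hours

def mttf(uptimes_hours: list[float]) -> float:
    """Mean time to failure: average observed time between failures."""
    return sum(uptimes_hours) / len(uptimes_hours)

def avail(downtime_hours: float, total_hours: float) -> float:
    """Availability: percentage of total time the system is usable."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

print(pofod(1, 100))          # 0.01, i.e. 1%
print(rocof(5, 1000))         # 0.005 failures/hour = 5 per 1,000 hours
print(mttf([250, 300, 350]))  # 300.0 hours
print(avail(4, 100))          # 96.0 (%)
```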