dual control
dual control [¦dü·əl kən′trōl]
a form of control in which the control action serves simultaneously to study the controlled object and to bring it into the optimal state. Dual control is used when the equations of motion of the object are unknown and the initial information is insufficient for calculating the optimal control law. Features of dual control can be found in systems of various types.
In automatic control systems, information about the controlled object comprises information on how the output value depends on the control action, on the state of the object, on the disturbances and interference affecting it, and on the reference inputs and the aim of control. In systems with complete information, full a priori information is available before operation begins, and current information reaches the control device through the feedback circuit while the system runs. In systems with incomplete information, the effect of the control action is not known a priori; only the statistical characteristics of the random inputs are available, and operation rests on accumulating the missing information during the functioning process itself. Such systems are called optimal systems with independent accumulation of information, since the accumulation process does not depend on the algorithm of the control device. A dual control system, by contrast, provides for active study of random changes in the characteristics of the controlled object: the object is subjected to probing, or “studying,” actions, and its reaction is analyzed by the control device. The control action is thus used not only to control the object but simultaneously to study it.
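The mechanism described above, a single input that both regulates the plant and excites it enough to identify an unknown characteristic, can be sketched in code. This is a minimal illustrative sketch, not Feldbaum's algorithm: the scalar plant, the recursive least-squares estimator, and the uncertainty-proportional probing term are all simplifying assumptions chosen for clarity.

```python
import random

# Illustrative sketch of the dual-control idea (assumptions, not Feldbaum's
# exact algorithm): a scalar plant x[k+1] = a*x[k] + b*u[k] + w[k] whose
# gain b is unknown to the controller.  The input u plays two roles at once:
# a certainty-equivalence term that regulates x toward 0 using the current
# estimate of b, and a probing ("studying") term whose amplitude grows with
# the remaining uncertainty, so the same signal also identifies b.

random.seed(0)

a, b_true = 0.8, 2.0   # plant parameters; b_true is hidden from the controller
b_hat, p = 1.0, 5.0    # estimate of b and its variance (uncertainty measure)
noise_var = 0.01       # variance of the disturbance w[k]
x = 2.0                # initial state

for k in range(40):
    u_ce = -a * x / b_hat                          # regulate via the estimate
    u_probe = 0.3 * p * (1 if k % 2 == 0 else -1)  # excite while uncertain
    u = u_ce + u_probe

    # Reaction of the object (simulation stands in for the real plant).
    x_next = a * x + b_true * u + random.gauss(0.0, noise_var ** 0.5)

    # Recursive least squares: update b_hat from the observed reaction.
    innovation = (x_next - a * x) - b_hat * u
    gain = p * u / (noise_var + p * u * u)
    b_hat += gain * innovation
    p *= 1.0 - gain * u

    x = x_next

print(b_hat)  # estimate should end up close to b_true
```

As the uncertainty `p` shrinks, the probing term vanishes and the controller degenerates into ordinary certainty-equivalence control; weighing the immediate cost of probing against the value of the information it yields is precisely the problem dual control theory addresses.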
The theory of dual control was developed by the Soviet scientist A. A. Feldbaum in the late 1950s. It has been applied for the most part to discrete systems. Here the working algorithm of the control device is derived from the theory of statistical decisions, and the quality index is the mathematical expectation of a generalized loss function, called the average risk.
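In standard notation (a common formulation of this criterion, not a quotation from Feldbaum), the average risk over a horizon of $N$ steps is the expected accumulated loss:

```latex
R \;=\; \mathbb{E}\!\left[\sum_{k=0}^{N} W_k(x_k, u_k)\right]
```

where $W_k$ is the loss function at step $k$, and the expectation is taken over the random disturbances and the unknown characteristics of the object. The optimal control law minimizes $R$, thereby balancing the immediate loss incurred by probing against the future gain from the information it provides.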
REFERENCES
Feldbaum, A. A. Osnovy teorii optimal’nykh avtomaticheskikh sistem, 2nd ed. Moscow, 1966.
Tsypkin, Ia. Z. Adaptatsia i obuchenie v avtomaticheskikh sistemakh. Moscow, 1968.
A. L. GORELIK