an automatic control system in which the control actions are adjusted automatically by a search technique in order to achieve control of an object that is optimal in some sense; the characteristics of the object or the external disturbances may change in ways not known in advance. The principle of automatic search underlies the operation of adaptive systems.
A searching system differs fundamentally from servomechanisms, stabilization systems without search functions, and programmed regulation systems. In those systems, a discrepancy between the given values of the regulated parameters and their current or average values is reduced to permissible limits by an action on the control variables x(t) that depends on this discrepancy. For such an operation, it is necessary that the ratio between the output parameters y(t) of the controlled object and the input parameters x(t) not change sign:

sign [ȳ(t)/x̄(t)] = const. (1)
Many objects and technological or other processes, however, have static and dynamic characteristics that can change randomly; examples are the flight of an airplane, combustion processes, and many chemical reactions. In these cases, in addition to violation of condition (1), there is often a static dependence of the extremal type between the target functions, which describe the control goal, and the input actions. In such systems the amount of initial information about the object is inadequate to achieve the control goal, and the natural way to supply the missing information is to obtain it during the operation of the system.
Figure 1 shows a block diagram of a searching system. The state of the controlled object is determined by control actions x̄(t) = [x1(t), …, xm(t)], external disturbances f̄(t) = [f1(t), …, fk(t)], and output parameters ȳ(t) = [y1(t), …, yn(t)]. A searching system includes a unit to formulate the control goal, a unit to organize the search, and the control elements. The goal-generating unit consists of measuring and computing devices; depending on the state of the object, it generates the control-goal index R̄[x(t)]. The functional R̄[x(t)] may change and readjust depending on the variables v̄(t) = [v1(t), …, vl(t)]. The search-organization unit includes logic control devices that respond to changes in R̄[x(t)]; it generates the command signals q̄(t) required to bring the system close to the assigned value of the control-goal index.
The search is initiated when test actions are fed to the input of the object; there follows an evaluation of the object’s response to these actions, which is manifested in the form of a change in the value of the target function R̄(t). Next, the search-organization unit determines the actions that will change the goal index in the necessary direction. The corresponding signals are then generated and fed to the input of the object, that is, the working actions are applied. After this, searching actions are again applied to the controlled object and the cycle is repeated.
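The test/working cycle described above can be sketched in code. The following Python fragment is only an illustration: the plant model, step sizes, and gains are hypothetical, and a single scalar input stands in for the vector x̄(t).

```python
def plant_response(x):
    """Hypothetical controlled object: target function with a maximum at x = 2."""
    return -(x - 2.0) ** 2 + 5.0

def search_cycle(x, delta=0.1, gain=0.5):
    """One search cycle: apply test actions, evaluate the response, apply a working action."""
    r_plus = plant_response(x + delta)        # test action in one direction
    r_minus = plant_response(x - delta)       # test action in the other direction
    trend = (r_plus - r_minus) / (2 * delta)  # estimated direction of change of R
    return x + gain * trend                   # working action toward the extremum

x = 0.0
for _ in range(50):
    x = search_cycle(x)
# x settles at the extremum of the (hypothetical) target function
```

Each pass corresponds to one cycle of the text: trial shifts probe the object, the change in the target function is evaluated, and the working action moves the input in the improving direction.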
The most common search methods are the Gauss-Seidel method, in which the extremum of the output is sought successively along the 1st, 2nd, …, mth coordinates of the input action; the gradient method, in which a new input action is obtained from the preceding one by moving the system along the gradient of the output functional; the random-search method, in which trial shifts in random directions are used; and the stochastic-approximation method, which involves sequential approximation to the extremum based on the results of preceding search steps, with a gradual decrease in the size of the steps.
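Of the methods listed, random search is the simplest to sketch: a trial shift in a random direction is kept only if it improves the target function. The target function and step size below are hypothetical illustrations, not part of the source.

```python
import random

def target(x):
    """Hypothetical target function with a minimum at (1, -1)."""
    return (x[0] - 1.0) ** 2 + (x[1] + 1.0) ** 2

def random_search(x, steps=2000, step_size=0.1, seed=0):
    """Random search: trial shifts in random directions; keep only improving shifts."""
    rng = random.Random(seed)
    best = target(x)
    for _ in range(steps):
        trial = [xi + rng.uniform(-step_size, step_size) for xi in x]
        r = target(trial)
        if r < best:          # a successful trial becomes the working action
            x, best = trial, r
    return x

x = random_search([5.0, 5.0])
```

The gradient and Gauss-Seidel methods differ only in how the trial shifts are chosen (along the estimated gradient, or along one coordinate at a time); stochastic approximation additionally shrinks step_size as the search proceeds.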
In the first searching systems, it was necessary to search out and maintain control actions that ensured maximum or minimum (extremal) values of the target function, for example, the maximum flight range for an airplane, the maximum efficiency for a device, the maximum temperature in a furnace, or the minimum cost of a production process. These searching systems were called optimizing control systems or extremal systems. In 1944, V. V. Kazakevich (USSR) was the first to propose the idea of optimizing control as a new direction in the development of automatic control systems. The main advantage of extremal systems is that they do not require significant initial information about the controlled object. Nor do they require high-precision measuring equipment that gives current information on the object; this equipment need only be sensitive enough to indicate the trend (direction) of change in the controlled parameters.
Searching systems are often used jointly with models of the object (see MODELING). In this case, the optimal values of the object’s parameters are selected by searching not on the object itself but rather on a model of it. Searching systems are applied, for example, in the automatic control of an airplane (an autopilot). They are also used to stabilize a controlled parameter. This is essential where condition (1) is violated. In this case, the target function may have the form R̄ = |ȳ − ȳ₀| or R̄ = (ȳ − ȳ₀)², where ȳ₀ is the assigned value of the output parameter, and the searching system must find the minimum of R̄(t).
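Stabilization by search can be illustrated as follows: the controller minimizes R = (y − y₀)² by trial actions alone, so it works even when the sign of the object's gain is unknown, i.e., when condition (1) is violated. The plant model, gains, and set point here are hypothetical.

```python
def plant(x):
    """Hypothetical object; its input-output gain is negative, unknown to the controller."""
    return -2.0 * x + 3.0

def stabilize(x, y_set, steps=100, delta=0.01, gain=0.1):
    """Search-based stabilization: minimize R = (y - y_set)^2 without knowing the gain sign."""
    for _ in range(steps):
        r0 = (plant(x) - y_set) ** 2
        r1 = (plant(x + delta) - y_set) ** 2  # test action
        if r1 < r0:
            x += gain                         # working action in the improving direction
        else:
            x -= gain
    return x

x = stabilize(0.0, y_set=-5.0)
# plant(x) approaches the assigned output value
```

Because only the direction of improvement of R is used, no measurement of the gain's sign or magnitude is needed, which is the advantage of extremal systems noted above.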
REFERENCES
Kazakevich, V. V. “Ob ekstremal’nom regulirovanii.” In the collection Avtomaticheskoe upravlenie i vychislitel’naia tekhnika, issue 6. Moscow, 1964.
Fel’dbaum, A. A. Vychislitel’nye ustroistva v avtomaticheskikh sistemakh. Moscow, 1959.
Krasovskii, A. A. Dinamika nepreryvnykh samonastraivaiushchikhsia sistem. Moscow, 1963.
Pervozvanskii, A. A. Poisk. Moscow, 1970.
Rastrigin, L. A. Sistemy ekstremal’nogo upravleniia. Moscow, 1974.
V. V. KAZAKEVICH