

cybernetics [Gr.,=steersman], term coined by American mathematician Norbert Wiener to refer to the general analysis of control systems and communication systems in living organisms and machines. In cybernetics, analogies are drawn between the functioning of the brain and nervous system and the computer and other electronic systems. The science overlaps the fields of neurophysiology, information theory, computing machinery, and automation. See servomechanism.


See N. Wiener, Cybernetics (rev. ed. 1961) and The Human Use of Human Beings (1967); F. H. George, The Brain as a Computer (1973).

The Columbia Electronic Encyclopedia™ Copyright © 2022, Columbia University Press. Licensed from Columbia University Press. All rights reserved.


‘the science of control and communication in the animal and the machine’. As coined by Norbert Wiener in the 1940s (see Wiener, 1949), and stimulated by the advent of modern computing, the term was intended to draw attention to common processes at work in systems of
Collins Dictionary of Sociology, 3rd ed. © HarperCollins Publishers 2000
The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.



the science of control, communications, and data processing.

Subject. The principal objects of cybernetic research are “cybernetic systems.” In general or theoretical cybernetics such systems are considered in the abstract, without reference to their real physical nature. The high level of abstraction enables cybernetics to find general methods for approaching the study of qualitatively different systems—for example, technological, biological, and even social systems.

The abstract cybernetic system is a set of interrelated objects, called the elements of the system, that are capable of receiving, storing, and processing data, as well as exchanging them. Examples of cybernetic systems are various kinds of automatic control devices in engineering (for example, an automatic pilot or a controller that maintains a constant temperature in a room), electronic computers, the human brain, biological populations, and human society.

The elements of an abstract cybernetic system are objects of any nature whose state can be fully described by the values of a certain set of parameters. For the large majority of concrete applications of cybernetics, consideration of parameters of two types is sufficient. Parameters of the first type, called continuous parameters, can assume any real value in a certain interval (for example, the interval from −1 to 2 or from −∞ to +∞). Parameters of the second type, called discrete parameters, assume finite sets of values—for example, a value equal to any decimal digit or the values “yes” or “no.”

Any whole or rational number can be represented by a sequence of discrete parameters. At the same time, discrete parameters may be used in working with qualitative attributes that are not ordinarily expressed in numbers. To do this it is sufficient to list and designate (for example, using a five-point scale) all distinguishable states of an attribute. In this way it is possible to characterize and introduce into consideration such factors as temperament, mood, and the attitude of one person toward another. By the same token, the area of application of cybernetic systems and cybernetics as a whole extends far beyond the bounds of the strictly “mathematicized” fields of knowledge.
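The five-point encoding described above can be sketched in a short program. The “mood” attribute and its scale values are illustrative assumptions, not part of the article:

```python
# Sketch: encoding a qualitative attribute as a discrete parameter.
# The five-point "mood" scale is a hypothetical illustration.
MOOD_SCALE = ["very bad", "bad", "neutral", "good", "very good"]

def encode(state: str) -> int:
    """Map a distinguishable state of the attribute to a discrete value."""
    return MOOD_SCALE.index(state)

def decode(value: int) -> str:
    """Recover the named state from its discrete value."""
    return MOOD_SCALE[value]

print(encode("good"))  # 3
print(decode(0))       # very bad
```

Once the states are listed and numbered in this way, a qualitative attribute can enter a cybernetic description on the same footing as any numeric parameter.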

The state of an element of a cybernetic system may change either randomly or under the influence of certain input signals that it receives either from the outside (outside the system under consideration) or from other elements of the system. In turn, each element of the system may form output signals, which usually depend on the state of the element and the input signals it receives at the moment in question. The signals are either transmitted to other elements of the system (acting as input signals for them) or form part of the output signals of the entire system that are transmitted to the outside.

The organization of relationships among elements of a cybernetic system is called the structure of the system. A distinction is made between systems with constant and variable structures. Changes in structure are usually given as functions of the states of all the constituent elements of the system and of the input signals of the system as a whole.

Thus, a description of the rules of the system’s functioning is given by three families of functions: those that determine changes in the states of all elements of the system, those that determine the elements’ output signals, and those that cause changes in the structure of the system. A system is called deterministic if all these functions are ordinary (single-valued) functions. However, if the functions—or at least some of them—are random functions, the system is called probabilistic, or stochastic. A full description of a cybernetic system results if a description of the system’s initial state—that is, the initial structure of the system and the initial states of all its elements—is added to the description of the rules of its functioning.
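The difference between single-valued and random rules of functioning can be illustrated for a single element with two states; the transition rules below are hypothetical:

```python
import random

# A single element with states 0 and 1; transition rules are hypothetical.

def deterministic_step(state: int, signal: int) -> int:
    """Ordinary (single-valued) rule: the next state is fully determined."""
    return (state + signal) % 2

def stochastic_step(state: int, signal: int, p: float = 0.9) -> int:
    """Random rule: the same transition occurs only with probability p."""
    if random.random() < p:
        return (state + signal) % 2
    return state  # with probability 1 - p the element keeps its state

state = 0
for signal in [1, 0, 1]:
    state = deterministic_step(state, signal)
print(state)  # 0
```

With p = 1 the stochastic rule coincides with the deterministic one; for p < 1 repeated runs from the same initial state may end in different states, which is what distinguishes a probabilistic system.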

Classification of cybernetic systems. Cybernetic systems are distinguished by the nature of their internal signals. If all the signals, like the states of all elements of the system, are given in continuous parameters, the system is called continuous. Where all the magnitudes are discrete, one speaks of a discrete system. In mixed, or hybrid, systems it is necessary to deal with both types of quantities.

The breakdown of cybernetic systems into continuous and discrete is to some extent arbitrary. It is determined by the depth of understanding achieved and by the precision required in studying the object, and sometimes by the convenience of using a particular mathematical technique in studying the system. For example, it is commonly known that light has a discrete, quantum nature; nonetheless, parameters such as the magnitude of a light flux and the level of illumination are customarily characterized by continuous values, since they change smoothly enough. Another example is the ordinary slide-wire rheostat. Although the magnitude of its resistance changes in jumps, it is possible and convenient to consider the change as continuous when the jumps are small enough.

Inverse examples are even more numerous. The excretory function of the kidney on the conventional (nonquantum) level is a continuous quantity. In many cases, however, a five-point scale is considered sufficient for characterizing this function; thus, it is viewed as a discrete quantity. In addition, in any actual computation of the values of continuous parameters one must be limited to a certain level of accuracy, which means that the corresponding quantity is regarded as discrete.

The last example shows that the discrete representation is a universal method since, bearing in mind that absolute accuracy of measurement is unattainable, any continuous quantity is finally reduced to its discrete representation. Inverse reduction for discrete quantities that assume a small number of different values cannot give satisfactory results (from the point of view of precision of representation) and therefore is not used in practice. Thus, in a certain sense the discrete method of representation is more general than the continuous method.
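The reduction of a continuous quantity to a discrete representation at a fixed level of accuracy can be sketched as simple quantization; the step sizes below are illustrative:

```python
# Sketch: discrete representation of a continuous quantity at a chosen
# accuracy. The step sizes are illustrative.
def quantize(x: float, step: float) -> float:
    """Round x to the nearest multiple of `step`."""
    return round(x / step) * step

print(quantize(3.14159, 0.25))  # 3.25
print(quantize(3.14159, 0.5))   # 3.0
```

The coarser the step, the fewer distinguishable values remain, which is why the inverse passage from a small set of discrete values back to a continuous quantity cannot restore precision.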

The division of cybernetic systems into continuous and discrete types is very important from the point of view of the mathematical technique used. For continuous systems this is usually the theory of systems of ordinary differential equations, and for discrete systems it is the theory of algorithms and the theory of automata. One other basic mathematical theory that is used in the cases of both discrete and continuous systems (and develops accordingly in two aspects) is information theory.

The complexity of cybernetic systems is determined by two factors: the first is the “dimensionality of the system”—that is, the total number of parameters that characterize the states of all its elements; the second is the complexity of the system’s structure, which is determined by the variety and total number of links among its elements. A simple collection of a large number of unconnected elements, like a set of uniform elements with simple links that repeat from element to element, is not yet a complex system. Complex (major) cybernetic systems are systems whose descriptions cannot be reduced to a description of one element and an indication of the total number of such (uniform) elements.

When complex cybernetic systems are studied, the ordinary breakdown of the system into its elements is supplemented by a consolidated representation of the system as a set of individual units, each of which is itself a separate system. A hierarchy of such unit descriptions is used in studying complex systems. At the top of the hierarchy the entire system is considered as a single unit, and at the lowest level the individual elements of the system appear as the units.

It must be stressed that the very concept of a system element is to some degree arbitrary and depends on the goals set in studying the system and the depth of penetration into the subject. Thus, in the phenomenological approach to the study of the brain, when the object of study is not the structure of the brain but the functions it performs, the brain may be regarded as a single element, even though it is characterized by a large number of parameters. The standard approach is to consider individual neurons as the elements that make up the brain. In passing to the cellular or molecular level, each neuron may, in turn, be viewed as a complex cybernetic system.

If the exchange of signals among elements of the system is entirely enclosed within its boundaries, the system is called isolated or closed. When viewed as a single element, such a system has neither input nor output signals. In the general case, open systems have both input and output channels, along which signals are exchanged with the environment. Any open cybernetic system is assumed to be equipped with receptors (sensing devices), which receive signals from outside and transmit them into the system. Where a human being is considered as a cybernetic system, the sense organs (organs of sight, hearing, touch, and so on) are the receptors. The output signals are transmitted to the outside by means of effectors, which in this case are the organs of speech and facial expression, the hands, and so on.

Since every system of signals carries certain information, regardless of whether the system is formed by intelligent beings or the objects and processes of inanimate nature, any open cybernetic system, just as the elements of any system, open or closed, may be regarded as data processors. In this case the concept of data or information is viewed in a very broad sense, close to the physical concept of entropy.

Cybernetic approach to the study of various kinds of objects. Consideration of various animate and inanimate objects as data processors or systems made up of elementary data processors is the essence of the “cybernetic approach” to the study of such objects. This approach, like the approaches based on other fundamental sciences, such as mechanics and chemistry, demands a certain level of abstraction. Thus, in the cybernetic approach to the study of the brain as a system of neurons, their dimensions, shape, and chemical structure are usually disregarded. The states of the neurons (excited or unexcited), the signals they produce, the connections among them, and the rules of changes in their states become the objects of study.

The simplest data processors can process information of only one type. For example, a functioning doorbell always responds to pressure on the button (the receptor) with the same action: the bell rings. However, complex cybernetic systems usually are able to accumulate data in some form and accordingly to vary the actions they perform (data processing). By analogy with the human brain, this property of cybernetic systems is sometimes called memory.

There are two principal ways in which information can be “memorized” in cybernetic systems: by a change in the states of the system’s elements or by a change in the structure of the system (of course, a mixed variant is also possible). There is essentially no fundamental difference between the two types of “memory.” In most cases the difference depends only on the approach used in describing the system. For example, one current theory explains long-term human memory by changes in the conductivity of the synapses (the connections among the separate neurons that make up the brain). If only the neurons are considered as the elements that make up the brain, then change in the synapses should be regarded as change in the structure of the brain, but if all the synapses (regardless of the level of their conductivity) are included, along with the neurons, then the phenomenon under consideration is reduced to a change in the states of the elements, with an unchanged structure of the system.

Computers as data processors. Among the complex technical data processors the most important for cybernetics is the electronic computer. In the simpler computing machines—electromechanical digital and analog types—adjustment for various tasks is done by changing the system of links among the elements on a special switching console. In modern general-purpose computers such changes are made by machine “memorization” of particular working programs in a special unit that accumulates information.

Unlike analog machines, which work with continuous information, the modern computer handles discrete information. Any sequences of decimal numbers, letters, punctuation signs, and other symbols may appear as information at the input and output of the computer. Inside the machine this information is usually represented (or coded) in a sequence of signals that assume only two values.
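The coding of alphanumeric information into signals that assume only two values can be sketched as follows; representing each character by its 8-bit Unicode code point is an illustrative choice, not the scheme of any particular machine:

```python
# Sketch: representing alphanumeric information as two-valued signals.
# Coding via 8-bit Unicode code points is an illustrative choice.
def to_bits(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    return "".join(chr(int(b, 2)) for b in bits.split())

coded = to_bits("OK")
print(coded)             # 01001111 01001011
print(from_bits(coded))  # OK
```

Any sequence of letters, digits, and punctuation can thus be carried inside the machine as a sequence of two-valued signals and recovered without loss.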

Although the capabilities of analog machines (like any other artificially created units) are limited to the conversion of strictly defined types of information, the modern computer is versatile. This means that any conversion of alphanumeric information that can be defined by an arbitrary finite system of rules of any kind (arithmetic, grammatical, and so on) can be performed by the computer after it is fed a properly written program. Digital computers achieve this capability through the versatility of their set of instructions, that is, of the elementary data-processing operations built into the structure of the computer. In the same way that all kinds of buildings can be assembled from the same parts, all kinds of alphanumeric information conversions, of any complexity, can be composed of elementary conversions. The computer program is just such a sequence of elementary conversions.

The computer’s property of versatility is not confined to alphanumeric information. As shown by the theory of coding, any discrete information—as well as any continuous information (to any given degree of precision)—can be represented in alphanumeric (and even purely numeric) form. Thus, modern computers can be considered universal data processors. The human brain, although based on entirely different principles, is another well-known example of a universal data processor.

The modern computer’s property of versatility makes possible its use to simulate any other conversions of information, including any thinking processes. This puts computers in a special position: from the moment of their appearance they have been the main technical equipment and research device of cybernetics.

Control in cybernetic systems. In the cases considered thus far, changes in the behavior of the digital computer have been determined by the human being who changes the program of its operation. However, it is possible to write a program that changes the working program of the computer and organizes its communication with the environment through an appropriate system of receptors and effectors. In this way various forms of change in behavior and development that are observed in complex biological and social systems can be simulated. Change in the behavior of complex cybernetic systems is a result of the accumulation of appropriately processed information received by the systems in the past.

Two main types of change in system behavior are distinguished, depending on the form of “memorization” of the information: self-adjustment and self-organization. In self-adjusting systems the accumulation of experience is expressed in a change in the values of particular parameters, and in self-organizing systems it occurs as change in the structure of the system. As was mentioned earlier, this difference is to some degree arbitrary and depends on the way in which the system is broken down into elements. In practice, self-adjustment is usually related to changes in a comparatively small number of continuous parameters. Profound changes in the structure of the computer’s working programs, which can be interpreted as changes in the states of a large number of discrete memory elements, are more naturally viewed as examples of self-organization.

Purposeful change in the behavior of cybernetic systems occurs through control. The purposes of control vary greatly depending on the types of systems and the degree of their complexity. In the simplest case the purpose may be to maintain a particular parameter at a constant value. For more complex systems the goals may be adaptation to a changing environment or even learning the rules of the changes.

The presence of control in a cybernetic system means that the system may be represented in the form of two interacting units: the object of control and the control system. The control system transmits control information along direct-link channels through the corresponding set of effectors to the controlled object. Information on the state of the controlled object is received by means of receptors and transmitted back to the control system along feedback channels.

Like any cybernetic system, the system with control described here can also have channels for communication (with appropriate systems of receptors and effectors) with the environment. In the simplest cases the external environment may appear as a source of various types of noise and distortion in the system (most frequently in the feedback channel). In this case the task of the control system includes noise filtering. This task becomes especially important in remote control, where signals are transmitted over lengthy communications channels.

The principal task of the control system is to convert information coming into the system and shape control signals in such a way as to ensure the best possible achievement of the goals of control. The main types of control are distinguished on the basis of the types of such goals and the nature of the control system’s functioning.

One of the simplest kinds of control is program control. The goal of such control is to feed a particular, strictly defined sequence of control signals to the controlled object. Such control has no feedback. The simplest example of such program control is the automatic traffic light whose changes occur at set moments. More complex control of the traffic light, with counters of approaching vehicles, can include a very simple “threshold” feedback signal; the light changes every time the number of waiting vehicles exceeds a given quantity.
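The two traffic-light schemes just described can be sketched side by side; the period and queue threshold below are illustrative values:

```python
# Sketch of the two schemes above; timing and threshold are illustrative.

def program_control(step: int, period: int = 30) -> str:
    """Pure program control: the light depends only on elapsed time."""
    return "green" if (step // period) % 2 == 0 else "red"

def threshold_control(light: str, waiting: int, threshold: int = 5) -> str:
    """Threshold feedback: switch whenever the queue exceeds the threshold."""
    if waiting > threshold:
        return "red" if light == "green" else "green"
    return light

print(program_control(0))                     # green
print(program_control(30))                    # red
print(threshold_control("green", waiting=6))  # red
print(threshold_control("green", waiting=3))  # green
```

The first function has no input from the controlled object at all; the second uses a single feedback signal (the vehicle count) to decide when to change state.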

Classical automatic control, whose purpose is to maintain a particular parameter (or several independent parameters) at a constant value, is also a very simple kind of control. A system for automatic control of the air temperature in a room may serve as an example. A special thermometer-transmitter measures the air temperature T, and the control system compares this temperature with the given quantity T0 and sends the control signal −k(T − T0) to a gate, which regulates the flow of warm water into the central heating units. The minus sign in front of the coefficient k signifies negative-feedback control—that is, when the temperature T rises above the given value T0, the flow of heat decreases, and when it drops below that value the flow increases. Negative feedback is essential to provide stability in the control process. Stability means that when the system deviates in either direction from the equilibrium position (where T = T0) it automatically tends to restore the equilibrium. With the very simple assumption that there is a linear relationship between the control signal and the rate of heat flow into the room, the operation of such a regulator is described by the differential equation dT/dt = −k(T − T0), whose solution is the function T = T0 + δ·e^(−kt) (where δ is the deviation of the temperature T from the assigned value T0 at the initial moment). Since this system is described by a linear differential equation of the first order, it is called a linear system of the first order. Linear systems of the second and higher orders, and particularly nonlinear systems, have more complex behavior.
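The behavior of this first-order regulator can be checked numerically by Euler integration of dT/dt = −k(T − T0); the values of k, T0, the initial deviation, and the time step are illustrative:

```python
import math

# Euler simulation of dT/dt = -k(T - T0); k, T0, and the initial
# deviation delta are illustrative values.
k, T0 = 0.5, 20.0
T, dt = 25.0, 0.001            # initial temperature and time step

for _ in range(int(10 / dt)):  # simulate 10 time units
    T += -k * (T - T0) * dt

# Exact solution: T0 + delta * exp(-k * t), with delta = 5 at t = 0
exact = T0 + 5.0 * math.exp(-k * 10)
print(abs(T - exact) < 1e-2)   # True
```

The simulated temperature decays toward T0 from above without overshooting, in agreement with the exponential solution; a positive sign on the feedback term would instead drive T away from T0, illustrating why the feedback must be negative.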

Systems are possible in which the principle of program control is combined with the task of regulation in the sense of maintaining a constant value for some particular quantity. For example, a program device that changes the value of the parameter being regulated can be built into the room temperature regulator described above. The functions of such a device may be maintenance of the temperature at +20°C during the day and reduction to +16°C at night. In this case the function of simple regulation grows into the function of tracking the value of the parameter being changed by the program.

In more complex servomechanisms the task is to maintain as exactly as possible some fixed functional relationship between a set of randomly changing parameters and a given set of parameters being regulated. An example is the system that continuously follows a randomly maneuvering airplane with a searchlight beam.

In optimum control systems the basic purpose is to maintain a maximum or minimum value of some function of two groups of parameters; the function is called the criterion of optimality. The parameters of the first group (external conditions) change independently of the system, and the parameters of the second group are regulated—that is, their values can change under the influence of control signals from the system.

The simplest example of optimal control is again the task of regulating room air temperature, with the added condition of considering changes in its humidity. The air temperature that gives the feeling of greatest comfort depends on the humidity of the air. If humidity is always changing but the system can only control change in temperature, the goal of control will naturally be to maintain the temperature that gives the feeling of greatest comfort. This is the task of optimum control. Optimum control systems are very important in control of the economy.

In the simplest case optimal control can be reduced to the task of maintaining the maximum or minimum possible value of the regulated parameter under the given conditions. In this case one speaks of extremal control systems.

If the unregulated parameters in an optimal control system change over a particular time interval, the function of the system reduces to maintaining the constant values of the regulated parameters that ensure maximization (or minimization) of the chosen criterion of optimality. Here too, as in the case of classical control, the problem of stability of control arises. For relatively uncomplicated systems such stability is achieved by appropriate selection of parameters at the design stage. In more complex cases, where the number of disturbing influences and the dimensionality of the system are very large, it is sometimes convenient to use self-adjustment and self-organization to achieve stability. In this case some of the parameters that determine the nature of the links existing in the system are not preset and can be changed by the system during its operation. The system has a special unit that records the nature of transitional processes in the system when it is put out of balance. When a transitional process is found to be unstable, the system changes the values of the parameters of the links until stability is achieved. Systems of this type are usually called ultrastable.
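The ultrastable adjustment loop can be sketched as a random search over a single link parameter; the stability test (|a| < 1 for the map x → a·x) is a deliberately simplified assumption:

```python
import random

# When the transitional process is unstable, re-randomize a link
# parameter until it becomes stable. The stability test (|a| < 1 for
# the map x -> a * x) is a deliberately simplified assumption.
random.seed(1)

def is_stable(a: float, steps: int = 50) -> bool:
    x = 1.0  # initial disturbance
    for _ in range(steps):
        x = a * x
    return abs(x) < 1.0  # the disturbance dies out

a = 2.0  # initially unstable link parameter
while not is_stable(a):
    a = random.uniform(-3, 3)  # random search over the link parameter

print(is_stable(a))  # True
```

With many link parameters such a blind search becomes too slow, which is the motivation for the grouped (multistable) search described below.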

With large numbers of varying parameters of the links, a random search for stable modes may be too time-consuming. In this case various methods of restricting the random search are used—for example, breaking the parameters down into groups and searching within just one group (determined by particular signs). Systems of this type are called multistable. Biology offers a great variety of ultrastable and multistable systems, such as the system for regulation of blood temperature in humans and warm-blooded animals.

The task of grouping external influences, which is essential for successful selection of the method of self-adjustment in multistable systems, is one of the tasks of recognition (pattern recognition). Visual and auditory images are particularly important in determining the type of behavior (method of control) in human beings. The possibility of recognizing patterns and joining them in particular classes enables the human being to create abstract concepts, which are an essential condition for conscious awareness of activity and the beginning of abstract thinking. Abstract thinking makes possible the creation in the control system—in this case, the human brain—of models of various processes, the use of these models for extrapolation, and the determination of one’s actions on the basis of such extrapolation.

Thus, at the highest levels of the hierarchy of control systems the tasks of control are closely interwoven with the tasks of recognizing surrounding reality. In pure form these tasks manifest themselves in abstract cognitive systems, which are also one of the classes of cybernetic systems.

The theory of reliability of cybernetic systems has an important place in cybernetics. Its task is the development of methods of constructing systems that ensure correct functioning of the systems when some of their elements malfunction, particular links are broken, or other possible accidental trouble occurs.

Methods of cybernetics. With the study of cybernetic systems as its primary object, cybernetics uses three fundamentally different methods of investigation. Two of these, mathematical analysis and the experimental method, are widely used in other sciences. The essence of mathematical analysis is the description of the object of study within the framework of a particular mathematical approach (for example, in the form of a system of equations) and then the study of the various consequences of the description using mathematical deduction (for example, by solving the system of equations). In the experimental method, various experiments are conducted, either with the object itself or with a real physical model of it. If the object under study is unique and there is no possibility of a substantial influence on it (as is the case, for example, with the solar system or the process of biological evolution), the active experiment becomes passive observation.

One of the most important achievements of cybernetics is the development and broad use of a new method of research, which has come to be called mathematical (machine) experimentation or mathematical simulation. The essence of the method is that experiments are done not with a real physical model of the object under study but rather with a description of the object. The description of the object and programs that produce changes in the characteristics of the object in accordance with its description are entered in the memory of a computer; various experiments may then be conducted with the object, such as recording its behavior under certain conditions and changing individual elements of the description. The great speed of modern computers often makes possible the simulation of many processes at a rate much faster than normal.

The first stage of mathematical simulation is the breakdown of the system being studied into separate units and elements and the establishment of the links among them. This function is performed by systems analysis. The depth and method of the breakdown may vary depending on the purposes of the investigation. In this sense systems analysis is more of an art than an exact science, since parts and links that are insignificant from the point of view of the assigned goal must be discarded a priori in the analysis of truly complex systems.

After the system is broken down into parts and the parts have been described with particular sets of quantitative or qualitative parameters, representatives of the various sciences are usually brought in to establish the links among them. Thus, during systems analysis of the human organism typical links have the following form: “When organ A passes from state k1 to state k2 and organ B remains in state M, organ C will, with probability p, pass from state n1 to state n2 in N months.” The statement may be made by an endocrinologist, cardiologist, internist, or other specialist, depending on the type of organs to which it refers. The result of their combined work is a composite description of the organism, which is the mathematical model being sought. Systems programmers translate this model into machine notation, at the same time programming the means necessary for experimenting with it. The conduct of the actual experiments and the drawing of various conclusions from them is the work of operations research. Where possible, however, operations researchers can use deductive mathematical constructions and even physical models of the entire system or of its separate parts. The job of constructing physical models and the task of planning and making various artificial cybernetic systems are part of systems engineering.
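A single probabilistic link of the kind quoted above can be sketched as a transition rule; all state names and the probability p are illustrative placeholders:

```python
import random

# One probabilistic link: "when organ A is in state k2 and organ B is in
# state M, organ C passes from n1 to n2 with probability p". All state
# names and the probability are illustrative.

def step_C(state_A: str, state_B: str, state_C: str, p: float = 0.7) -> str:
    if state_A == "k2" and state_B == "M" and state_C == "n1":
        return "n2" if random.random() < p else "n1"
    return state_C

print(step_C("k2", "M", "n1", p=1.0))  # n2
print(step_C("k1", "M", "n1"))         # n1
```

A composite model of the organism is, in effect, a large collection of such rules supplied by different specialists, run forward in time by the computer.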

Historical survey. The ancient Greek philosopher Plato was apparently the first to use the term “cybernetics” for control in the general sense. However, the actual formation of cybernetics as a science took place much later and was determined by the development of technical apparatus for control and data processing. So-called androids, which were human-like toys that were in fact mechanical, program-controlled devices, were made in Europe as early as the Middle Ages.

The first industrial regulators, for the water level in a steam boiler and for the speed of shaft rotation of a steam engine, were invented by I. I. Polzunov (Russia) and J. Watt (England). In the second half of the 19th century, increasingly refined automatic regulators were required. Electromechanical and electronic units were used with increasing frequency in such regulators, along with mechanical units. The invention in the early 20th century of differential analyzers, which were capable of simulating and solving systems of ordinary differential equations, played a large role in development of the theory and practice of automatic control. These machines marked the beginning of the rapid development of analog computers and their widespread introduction into engineering.

Progress in neurophysiology, in particular the classic works of I. P. Pavlov on conditioned reflexes, exerted a substantial influence on the establishment of cybernetics. The original work by the Ukrainian scientist Ia. I. Grdina on the dynamics of living organisms is also worthy of note.

In the 1930’s the development of cybernetics began to be increasingly influenced by the development of the theory of discrete data processors. Two main sources of ideas and problems directed this development. The first was the task of constructing the foundations of mathematics. As early as the mid-19th century, G. Boole laid the foundations of modern mathematical logic. In the 1920’s the foundations of the modern theory of algorithms were laid. In 1931, K. Gödel demonstrated the incompleteness of sufficiently rich formal systems. In 1936, A. M. Turing described a hypothetical general-purpose discrete data processor, which later came to be called the Turing machine. These two results, obtained within the framework of pure mathematics, exerted and continue to exert a very large influence on the formation of the basic ideas of cybernetics.

The second source of ideas and problems in cybernetics was practical experience in building actual discrete data processors. The simplest mechanical adding machine was invented by B. Pascal (France) in the 17th century. Only in the 19th century did C. Babbage (England) make the first attempt to build an automatic digital calculator, the prototype of the present-day electronic digital computer. By the early 20th century the first models of electromechanical tabulating machines were built and made possible automation of very simple processing of discrete data. The necessity of building sophisticated relay-contact devices, primarily for automatic telephone exchanges, led in the 1930’s to a sharp increase in interest in the theory of discrete data processors. In 1938, C. Shannon (USA) and, in 1941, V. I. Shestakov (USSR) demonstrated the possibility of using the techniques of mathematical logic to analyze relay-contact circuits. This marked the beginning of the development of the modern theory of automatons.
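
The observation of Shannon and Shestakov can be sketched in a few lines. Contacts wired in series behave as logical AND, contacts in parallel as OR, so two circuits are interchangeable exactly when their Boolean functions agree on every input; the particular circuits below are illustrative examples, not ones from Shannon's paper.

```python
from itertools import product

# Contacts in series conduct only when both are closed (logical AND);
# contacts in parallel conduct when either is closed (logical OR).
def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# The circuit (a AND b) OR (a AND c) is equivalent to a AND (b OR c),
# which needs one contact fewer for `a` -- a simplification found by
# algebra rather than by inspecting the wiring.
def original_circuit(a, b, c):
    return parallel(series(a, b), series(a, c))

def simplified_circuit(a, b, c):
    return series(a, parallel(b, c))

# Exhaustively check agreement on all 8 combinations of contact positions.
assert all(
    original_circuit(a, b, c) == simplified_circuit(a, b, c)
    for a, b, c in product([False, True], repeat=3)
)
print("circuits are equivalent")
```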

The development of electronic computers in the 1940’s (J. von Neumann and others) was of decisive importance for the formation of cybernetics. The computer opened up fundamentally new possibilities for research and actual construction of complex control systems. It remained to bring together all the material accumulated by that time and to give the new science a name. This step was taken by N. Wiener, who in 1948 published his famous book Cybernetics.

Wiener proposed that the “science of control and communications in the animal and the machine” be called cybernetics. In Cybernetics and in his second book, The Human Use of Human Beings (1950; published in Russian translation as Cybernetics and Society), Wiener devoted particular attention to the general philosophical and social aspects of the new science, frequently treating them in a highly arbitrary manner. As a result the further development of cybernetics followed two paths. In the USA and Western Europe the narrow understanding of cybernetics began to predominate; it concentrated attention on the disputes and doubts raised by Wiener and on the analogies between control processes in technical devices and in living organisms. In the USSR, after an initial period of negation and doubt, the more natural and meaningful definition of cybernetics took root; it included in the field all the achievements that had accumulated in the theories of data processing and control systems. Special attention was given to the new problems arising in connection with the extensive introduction of computers into the theories of control and data processing.

In the West these questions were treated within the framework of specialized areas of science, which came to be called information science, computer science, systems analysis, and so on. Only at the end of the 1960’s was a tendency observed to broaden the concept of “cybernetics” and include all these areas in it.

Principal divisions of cybernetics. Contemporary cybernetics in the broad sense consists of a large number of divisions, which represent independent scientific areas. Theoretically, the nucleus of cybernetics is made up of information theory, coding theory, the theory of algorithms and automatons, general systems theory, the theory of optimal processes, the methods of operations research, pattern recognition theory, and the theory of formal languages. In practice the center of interest in cybernetics has shifted to the construction of complex control systems and various kinds of systems for the automation of mental labor. On the purely cognitive level one of the most interesting future tasks of cybernetics is the simulation of the brain and its various functions.

Computers are the chief technical facility for accomplishing all these tasks. Therefore, the development of cybernetics in both the theoretical and practical aspects is closely linked to progress in electronic computer engineering. The demands made by cybernetics for development of its mathematical techniques are determined by the practical tasks mentioned above.

A certain practical orientation in research on the development of mathematical techniques is in fact the line that divides the general mathematical part of such research from the purely cybernetic part. Thus, for example, in the part of algorithm theory that is constructed for the needs of the foundations of mathematics an effort is made to reduce the number of types of elementary operations to a minimum and to make the operations themselves as simple as possible. The algorithmic languages that result are convenient as objects of study, but at the same time it is virtually impossible to use them to describe the real tasks of data processing. The cybernetic aspect of algorithm theory is concerned with algorithmic languages that are especially oriented to particular classes of practical problems. Languages exist that are oriented toward computational problems, formula translation, the processing of graphic information, and so on.

A similar situation occurs in other areas that make up the general theoretical foundation of cybernetics. They provide the approach for solving the practical problems of the study of cybernetic systems, their analysis and synthesis, and the determination of optimal control.

Methods of cybernetics are particularly important in sciences in which the methods of classical mathematics can be applied only on a limited scale, to solve certain particular problems. Foremost among these sciences are economics, biology, medicine, linguistics, and fields of engineering that deal with complex systems. As a result of the extensive application of cybernetic methods in these sciences, independent scientific areas have arisen that, one might suppose, would be called cybernetic economics, cybernetic biology, and so on. For a number of reasons, however, these fields took shape within the framework of cybernetics, through specialization of the objects of research, rather than within the corresponding sciences through application of the methods and results of cybernetics. Therefore, they came to be called economic cybernetics, biological cybernetics, medical cybernetics, and engineering cybernetics. The corresponding area in linguistics has come to be called mathematical linguistics.

The tasks of the actual construction of complex control systems (above all in economics), as well as of computer-based complex information retrieval systems, automatic design systems, and systems for the automatic collection and processing of experimental data, usually belong to the area of science that has come to be called systems engineering. With the broad interpretation of the subject of cybernetics, most of systems engineering is organically contained within it. The same is true of electronic computer engineering. Needless to say, cybernetics does not occupy itself with designing the elements of computers, the structural design of machines, production engineering problems, and so on. At the same time, the approach to the computer as a system, general structural questions, and the organization of complex data-processing operations and the control of those operations do in fact belong to applied cybernetics and constitute one of its important areas.


Wiener, N. Kibernetika [Cybernetics], 2nd ed. Moscow, 1968. (Translated from English.)
Wiener, N. Kibernetika i obshchestvo [Cybernetics and Society]. Moscow, 1958. (Translated from English.)
Tsien, H. S. Tekhnicheskaia kibernetika [Engineering Cybernetics]. Moscow, 1956. (Translated from English.)
Ashby, W. R. Vvedenie v kibernetiku [An Introduction to Cybernetics]. Moscow, 1959. (Translated from English.)
Glushkov, V. M. Vvedenie v kibernetiku [Introduction to Cybernetics]. Kiev, 1964.


The Great Soviet Encyclopedia, 3rd Edition (1970-1979). © 2010 The Gale Group, Inc. All rights reserved.


(science and technology)
The science of control and communication in all of their manifestations within and between machines, animals, and organizations.
Specifically, the interaction between automatic control and living organisms, especially humans and animals.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.


The study of communication and control within and between humans, machines, organizations, and society. This is a modern definition of the term cybernetics, which was first utilized by N. Wiener in 1948 to designate a broad subject area he defined as “control and communication in the animal and the machine.” A distinguishing feature of this broad field is the use of feedback information to adapt or steer the entity toward a goal. When this feedback signal is such as to cause changes in the structure or parameters of the system itself, it appears to be self-organizing. See Adaptive control

Wiener developed the statistical methods of autocorrelation, prediction, and filtering of time-series data to provide a mathematical description of both biological and physical phenomena. The use of filtering to remove unwanted information or noise from the feedback signal mimics the selectivity shown in biological systems in which imprecise information from a diversity of sensors can be accommodated so that the goal can still be reached.
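
The feedback principle described above can be illustrated with a deliberately simplified sketch. The proportional controller, the gain, the noise model, and the moving-average filter below are our own illustrative choices standing in for Wiener's actual statistical filter theory; the point is only that a filtered feedback signal lets the system reach its goal despite an imprecise sensor.

```python
import random

random.seed(0)  # make the noise reproducible

# A noisy sensor feeds back the current state; a short moving average
# suppresses the noise, and a proportional correction steers the state
# toward the goal.
def moving_average(values, window=5):
    recent = values[-window:]
    return sum(recent) / len(recent)

goal = 10.0    # the setpoint the system is steered toward
state = 0.0    # the controlled quantity
readings = []  # history of noisy feedback measurements

for _ in range(200):
    readings.append(state + random.gauss(0.0, 0.5))  # imprecise sensor
    estimate = moving_average(readings)              # filtered feedback
    state += 0.2 * (goal - estimate)                 # corrective action

# Despite the sensor noise, the filtered feedback loop settles near the goal.
```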

McGraw-Hill Concise Encyclopedia of Engineering. © 2002 by The McGraw-Hill Companies, Inc.


the branch of science concerned with control systems in electronic and mechanical devices and the extent to which useful comparisons can be made between man-made and biological systems
Collins Discovery Encyclopedia, 1st edition © HarperCollins Publishers 2005


/si:`b*-net'iks/ The study of control and communication in living and man-made systems.

The term was first proposed by Norbert Wiener in the book referenced below. Originally, cybernetics drew upon electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology to study and describe actions, feedback, and response in systems of all kinds. It aims to understand the similarities and differences in internal workings of organic and machine processes and, by formulating abstract concepts common to all systems, to understand their behaviour.

Modern "second-order cybernetics" places emphasis on how the process of constructing models of systems is influenced by those very systems, hence an elegant definition: "applied epistemology".

Related recent developments (often referred to as sciences of complexity) that are distinguished as separate disciplines are artificial intelligence, neural networks, systems theory, and chaos theory, but the boundaries between those and cybernetics proper are not precise.

See also robot.

The Cybernetics Society of the UK.

American Society for Cybernetics.

IEEE Systems, Man and Cybernetics Society.

International project "Principia Cybernetica".

["Cybernetics, or control and communication in the animal and the machine", N. Wiener, New York: John Wiley & Sons, Inc., 1948]
This article is provided by FOLDOC - Free Online Dictionary of Computing.


The comparative study of human and machine processes in order to understand their similarities and differences. Cybernetics often refers to machines that imitate human behavior. The term was coined in 1948 by Norbert Wiener (1894-1964), one of the great mathematicians of the 20th century, as "the scientific study of control and communication in the animal and the machine." The word means "governance" in Greek. See cyber, AI, techno-humanism and robot.
Copyright © 1981-2019 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.