perceptron
a mathematical model of the process of perception. A person recognizes newly encountered phenomena or objects by classifying them under some concept (a class). Thus, for instance, acquaintances are easily recognized even after a haircut or a change of clothing; manuscripts are easily read although each person’s handwriting has its own distinctive features; and different arrangements of a melody can be recognized as variations on a theme. This ability in humans is called the phenomenon of perception. On the basis of experience, a person can also develop new concepts and learn a new system of classification. For example, in learning to recognize the difference between various letter symbols, a student is shown the symbols and told to which letters the symbols correspond, that is, under which classes the symbols fall, with the result that the student eventually develops the capacity for correct classification.
It is believed that perception is accomplished through a network of neurons. A model of perception can be represented as having three layers of neurons: a sensory layer, or S-layer; an associative layer, or A-layer; and a reactive layer, or R-layer (Figure 1). According to the simplest model, which was proposed by W. McCulloch and W. Pitts, a neuron is a nerve cell that has several inputs and one output. The inputs may be either excitatory or inhibitory. A neuron is excited and sends an impulse if the number of signals at the excitatory inputs exceeds the number of signals at the inhibitory inputs by a certain quantity, which is called the neuron threshold. Depending on the nature of the external stimulus, a collection of impulses, or signals, is formed in the S-layer. Traveling through the nerve pathways, these impulses reach the neurons of the A-layer, where new impulses are formed, corresponding to the collection of impulses that originated in the S-layer, and are fed to the inputs of R-layer neurons. In A-layer neurons, all input signals are summed with the same amplification coefficient, although the sign of the coefficient may differ; in R-layer neurons, both the amplification and the sign can differ among the signals that are summed. The perception of an object corresponds to the excitation of a specific neuron in the R-layer. It is believed that the amplification coefficients of the reactive neurons are selected so that the collection of impulses that excites a given R-layer neuron corresponds to an entire class of different objects. A new concept forms once the amplification coefficients of the corresponding reactive neuron become established.
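The McCulloch-Pitts threshold rule described above can be sketched in a few lines. The function name and sample values are illustrative, not from the article, and reading "exceeds by a certain quantity" as "by at least the threshold" is an assumption.

```python
# A minimal sketch of a McCulloch-Pitts threshold neuron, assuming binary
# (0/1) input signals. All names and numbers here are illustrative.

def mcculloch_pitts_neuron(excitatory, inhibitory, threshold):
    """Fire (return 1) if the count of active excitatory inputs exceeds
    the count of active inhibitory inputs by at least the neuron
    threshold; otherwise stay silent (return 0)."""
    net = sum(excitatory) - sum(inhibitory)
    return 1 if net >= threshold else 0

# Three active excitatory inputs against one active inhibitory input,
# with a threshold of 2: the neuron fires, since 3 - 1 >= 2.
print(mcculloch_pitts_neuron([1, 1, 1], [1], 2))
```

The same unit, with fixed ±1 coefficients in the A-layer and adjustable coefficients in the R-layer, is the building block of the perceptron models discussed below.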
In 1957 the American scientist F. Rosenblatt constructed a perceptron, which he called the Mark 1, as a model of a visual analyzer. A photoelectric cell served as the model of a sensory neuron; a threshold unit with amplification coefficients of ±1 served as the model of an associative neuron; and a threshold unit with adjustable coefficients served as the model of a reactive neuron. The inputs of the A-layer threshold units were connected randomly to the photoelectric cells. Rosenblatt’s perceptron was designed to work in two modes: an operation mode and a learning mode. In the operation mode, the perceptron classified the situations presented to it: if, of all the R elements, only the element Ri was excited, then the situation fell under the ith class. The amplification factors of the R-layer threshold units were adjusted in the learning mode on a sequence of examples offered for assimilation.
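The Mark 1 scheme described above can be sketched as follows: an A-layer of threshold units with fixed ±1 coefficients feeds an R-layer threshold unit whose coefficients are adjusted by error correction in the learning mode. In the Mark 1 the S-to-A connections were random; here a fixed ±1 matrix stands in for one such random draw so the example is reproducible. All names, sizes, and numbers are assumptions for illustration.

```python
# Sketch of a Mark-1-style perceptron: 4 "photocells", 4 A-units with
# fixed +/-1 coefficients, one R-unit with adjustable coefficients.

A_WEIGHTS = [  # each row: one A-unit's fixed +/-1 connections to the photocells
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [ 1, -1,  1, -1],
    [-1,  1, -1,  1],
]

def a_layer(x):
    """Each A-unit fires if its +/-1-weighted sum of sensory signals is positive."""
    return [1 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0
            for row in A_WEIGHTS]

def r_unit(w, b, x):
    """The R-unit fires if its adjustable weighted sum of A-layer outputs is positive."""
    y = a_layer(x)
    return 1 if sum(wi * yi for wi, yi in zip(w, y)) + b > 0 else 0

def train_r_unit(samples, targets, epochs=20):
    """Learning mode: adjust the R-unit's amplification coefficients by
    error correction over a sequence of examples (the classic perceptron rule)."""
    w, b = [0.0] * len(A_WEIGHTS), 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            err = t - r_unit(w, b, x)
            if err:
                y = a_layer(x)
                w = [wi + err * yi for wi, yi in zip(w, y)]
                b += err
    return w, b

# Learning mode on two toy "retina" patterns, then operation mode:
w, b = train_r_unit([[1, 1, 0, 0], [0, 0, 1, 1]], [1, 0])
print(r_unit(w, b, [1, 1, 0, 0]), r_unit(w, b, [0, 0, 1, 1]))
```

Note the division of labor the article describes: the ±1 A-layer coefficients stay fixed, and only the R-unit's coefficients change during learning.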
The Mark 1 was the first of several models of perception. Subsequently, the process of perception was investigated with models based on digital computers. In the 1960s, models of perception were called perceptrons, or perceptron schemes; in these, a distinction was made between the sensory part, the associative part, and the reactive threshold units. The sensory part forms a vector x̄ corresponding to each object to be assimilated; this vector is converted by the associative part into the vector ȳ. The vector ȳ belongs to the jth class if the corresponding weighted sum of the reactive element Rj exceeds the response threshold. The mathematical investigation of perceptron schemes is connected with the problem of teaching pattern recognition: how the associative part must be constructed, and what the algorithm should be for establishing the amplification factors of the R units in the learning mode.
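The multi-class decision rule just described can be sketched directly: the vector ȳ produced by the associative part falls under the jth class when the weighted sum of the reactive element Rj exceeds its response threshold. The weights and thresholds below are illustrative assumptions, not taken from the article.

```python
# Sketch of the reactive part of a 1960s-style perceptron scheme:
# one adjustable weight vector and one response threshold per R element.

def classify(y, r_weights, thresholds):
    """Return the indices of all R elements whose weighted sum of the
    vector y exceeds their response threshold (ideally exactly one)."""
    fired = []
    for j, (w, theta) in enumerate(zip(r_weights, thresholds)):
        if sum(wj * yj for wj, yj in zip(w, y)) > theta:
            fired.append(j)
    return fired

# Two classes over a two-component vector y (hypothetical values):
R_WEIGHTS = [[1.0, -1.0], [-1.0, 1.0]]
THRESHOLDS = [0.0, 0.0]
print(classify([1.0, 0.0], R_WEIGHTS, THRESHOLDS))  # only R0 exceeds its threshold
```

The two open questions the article names, how to construct the associative part and how to set these weights in the learning mode, are exactly the free choices left unspecified in this sketch.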
REFERENCES
Rosenblatt, F. Printsipy neirodinamiki. Moscow, 1965. (Translated from English.)
Minsky, M., and S. Papert. Perseptrony. Moscow, 1971. (Translated from English.)
Vapnik, V. N., and A. Ia. Chervonenkis. Teoriia raspoznavaniia obrazov. Moscow, 1974.
V. N. VAPNIK