

(computer science)
A pattern recognition machine, based on an analogy to the human nervous system, capable of learning by means of a feedback system which reinforces correct answers and discourages wrong ones.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.



a mathematical model of the process of perception. A person recognizes newly encountered phenomena or objects by classifying them under some concept (a class). Thus, for instance, acquaintances are easily recognized even after a haircut or a change of clothing; manuscripts are easily read although each person’s handwriting has its own distinctive features; and different arrangements of a melody can be recognized as variations on a theme. This ability in humans is called the phenomenon of perception. On the basis of experience, a person can also develop new concepts and learn a new system of classification. For example, in learning to recognize the difference between various letter symbols, a student is shown the symbols and told to which letters the symbols correspond, that is, under which classes the symbols fall, with the result that the student eventually develops the capacity for correct classification.

Figure 1. Simplified schematic diagram of a perceptron: S units represent sensory neurons; A units represent associative neurons; R units represent reactive neurons; arrows indicate the direction of impulses through synaptic junctions

It is believed that perception is accomplished through a network of neurons. A model of perception can be represented as having three layers of neurons: a sensory layer, or S-layer; an associative layer, or A-layer; and a reactive layer, or R-layer (Figure 1). According to the simplest model, which was proposed by W. McCulloch and W. Pitts, a neuron is a nerve cell that has several inputs and one output. The inputs may be either stimulating or inhibiting. A neuron is excited and sends an impulse if the number of signals at the exciting inputs exceeds the number of signals at the inhibiting inputs by a certain quantity, which is called the neuron threshold. Depending on the nature of the external stimulus, a collection of impulses, or signals, is formed in the S-layer. Traveling through the nerve pathways, these impulses reach the neurons of the A-layer, where new impulses that are fed to the inputs of R-layer neurons are formed in such a way as to correspond to the collection of impulses that originated in the S-layer. In A-layer neurons, all input signals are summed with the same amplification coefficient, although the sign of the coefficient may differ; both the amplification and the sign can differ among signals that are summed in R-layer neurons. The perception of an object corresponds to the excitation of a specific neuron in the R-layer. It is believed that the amplification coefficients of reactive neurons are selected so that the collection of impulses that excite a given R-layer neuron corresponds to an entire class of different objects. A new concept can form once the amplification coefficient of the corresponding reactive neuron becomes established.
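The McCulloch-Pitts neuron just described reduces to a simple threshold rule. A minimal sketch in Python (the function name, the ≥ reading of "exceeds by the threshold," and the example inputs are ours, not from the article):

```python
def mcculloch_pitts_neuron(exciting, inhibiting, threshold):
    """Fire (output 1) if the number of signals at the exciting inputs
    exceeds the number at the inhibiting inputs by at least the
    neuron threshold; otherwise remain unexcited (output 0)."""
    return 1 if sum(exciting) - sum(inhibiting) >= threshold else 0

# Two excitatory impulses against one inhibitory impulse, threshold 1:
print(mcculloch_pitts_neuron([1, 1], [1], threshold=1))  # -> 1
# The same impulses with a higher threshold leave the neuron unexcited:
print(mcculloch_pitts_neuron([1, 1], [1], threshold=2))  # -> 0
```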

In 1957 the American scientist F. Rosenblatt constructed a perceptron that he called the Mark 1, a model of a visual analyzer. A photoelectric cell served as a model of a sensory neuron; a threshold unit with an amplification coefficient of ±1 served as a model of an associative neuron; and a threshold unit with adjustable coefficients served as the model of a reactive neuron. The inputs of the A-layer threshold units were connected randomly with the photoelectric cells. Rosenblatt’s perceptron was designed to work in two modes: an operation mode and a learning mode. In the operation mode, the perceptron classified the situations presented to it; if, of all the R elements, only the element Ri was stimulated, then the situation fell under the ith class. The amplification factors of the R-layer threshold units were worked out in the learning mode from a sequence of examples offered for assimilation.
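The learning mode can be sketched with the classical perceptron error-correction rule: the adjustable coefficients change only on wrong answers, reinforcing correct classifications and discouraging mistakes. An illustrative sketch (function names, the learning rate, and the AND-gate data are our assumptions, not details of the Mark 1):

```python
def predict(w, b, x):
    """R-unit response: +1 if the weighted sum exceeds zero, else -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Adjust the amplification coefficients on each misclassified example."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if predict(w, b, x) != y:  # wrong answer: nudge weights toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learn the linearly separable AND function (labels +1 / -1):
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
print([predict(w, b, x) for x in X])  # -> [-1, -1, -1, 1]
```

By the perceptron convergence theorem, this procedure terminates with a correct separating line whenever the classes are linearly separable, as they are here.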

The Mark 1 was the first of several models of perception. Subsequently, the process of perception was investigated with models based on digital computers. In the 1960s, models of perception were called perceptrons, or perceptive schemes; in these, a distinction was made between the sensory part, the associative part, and the reactive threshold units. The sensory part forms a vector corresponding to each object to be assimilated; this vector is converted by the associative part into the vector ȳ. The vector ȳ belongs to the jth class if the corresponding weighted sum of the reactive element Rj exceeds the response threshold. The mathematical investigation of perceptron schemes is connected with the task of teaching pattern recognition, which determines how the associative part must be constructed and what the algorithm should be for establishing the amplification factors of the R units in the learning mode.
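The R-layer decision just described — the vector ȳ falls under class j when the weighted sum of the element Rj exceeds its response threshold — can be sketched as follows (the weights and thresholds are invented purely for illustration):

```python
def classify(y, weights, thresholds):
    """Return the index of the first R element whose weighted sum over the
    associative-layer vector y exceeds its response threshold,
    or None if no R element is excited."""
    for j, (w, t) in enumerate(zip(weights, thresholds)):
        if sum(wj * yj for wj, yj in zip(w, y)) > t:
            return j
    return None

# Two R units over a three-component associative vector:
weights = [(1.0, -1.0, 0.5), (-0.5, 1.0, 1.0)]
thresholds = (1.0, 1.0)
print(classify((1.0, 0.0, 1.0), weights, thresholds))  # -> 0
print(classify((0.0, 1.0, 1.0), weights, thresholds))  # -> 1
```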


Rosenblatt, F. Printsipy neirodinamiki. Moscow, 1965. (Translated from English.)
Minsky, M., and S. Papert. Perseptrony. Moscow, 1971. (Translated from English.)
Vapnik, V. N., and A. Ia. Chervonenkis. Teoriia raspoznavaniia obrazov. Moscow, 1974.


The Great Soviet Encyclopedia, 3rd Edition (1970-1979). © 2010 The Gale Group, Inc. All rights reserved.




A network of neurons in which the output(s) of some neurons are connected through weighted connections to the input(s) of other neurons. A multilayer perceptron is a specific instance of this.
This article is provided by FOLDOC - Free Online Dictionary of Computing.
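The weighted-connection structure in the definition above can be sketched as a tiny multilayer perceptron forward pass (one hidden layer; the logistic activation and all weight values are illustrative assumptions, not part of the FOLDOC definition):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def mlp_forward(x, hidden_w, hidden_b, out_w, out_b):
    # The output of each hidden neuron feeds, through a weighted
    # connection, into the input of the output neuron.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * hi for w, hi in zip(out_w, h)) + out_b)

# Two inputs, two hidden neurons, one output (weights chosen arbitrarily):
out = mlp_forward((1.0, 0.0),
                  hidden_w=[(0.5, -0.5), (-0.5, 0.5)],
                  hidden_b=(0.0, 0.0),
                  out_w=(1.0, -1.0), out_b=0.0)
print(round(out, 3))
```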
References in periodicals archive
An MLP is a network of neurons called perceptrons. The perceptron is a binary classifier which computes a single output from multiple inputs (and the 'bias', a constant term that does not depend on any input value) as a function of its weighted sum.
Early success with learning by neuron-like models called perceptrons [33] excited many researchers.
Perceptrons can only classify linearly separable sets of instances.
Later, the pioneering AI researchers Marvin Minsky and Seymour Papert published an influential book titled Perceptrons, which identified challenges with single-layer perceptrons and expressed skepticism about multilayer perceptrons.
Of the different ANN types (LN, MLP1, MLP2, and RBF) used in our study for the classification of calving difficulty records in cows, the lowest RMSE was characteristic of MLP2 and MLP1 for the two-class and three-class systems, respectively, although in the latter case, the RMSE values were very similar between multilayer perceptrons. In the case of the three-class system, MLP1 also had the lowest G² value, indicating its good fit to the data; however, GDA had the lowest values of AIC and BIC due to its lower complexity compared with the MLP1 model.
Single and Multilayered Perceptrons. In a study by Pedersen, Togelius, and Yannakakis [24], the relationship between parameters of level design of platform games, player experience, and individual characteristics of play were studied.
The layer consists of a stack of several perceptrons that represents a nonlinear relationship between the weighted sums of inputs and outputs.
To the best of our knowledge, neural networks, or more accurately, Multi-layer Perceptrons (MLPs) have not been used for pedestrian detection.
Mukhopadhyay, "A polynomial time algorithm for the construction and training of a class of multilayer perceptrons," Neural Networks, vol.
Recently, organic memristive devices have been used for the realization of elementary and bilayer perceptrons that are neural networks able to implement basic brain-inspired learning functionalities and parallel processing; moreover, they are able to solve classification tasks, for example, classification of linearly separable and nonseparable groups of objects [18, 19].
Minsky wrote a book called Perceptrons, describing a particular type of "artificial neural network", mimicking how nerves are wired in the brain.