Hopfield network


(artificial intelligence)
(Or "Hopfield model") A kind of neural network investigated by John Hopfield in the early 1980s. The Hopfield network has no special input or output neurons (see McCulloch-Pitts); every neuron acts as both input and output, and each is connected to every other neuron in both directions, with equal weights in the two directions. Input is applied to all neurons simultaneously; the neurons then repeatedly feed their outputs back to one another until the network settles into a stable state, which represents the network's output.
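These dynamics are easy to sketch in code. Below is a minimal illustration in Python with NumPy, assuming bipolar (+1/-1) units and Hebbian weight setting; the names train and recall are invented for the example, not taken from any library.

    import numpy as np

    def train(patterns):
        """Hebbian storage: symmetric weights, zero self-connections."""
        n_patterns, n_units = patterns.shape
        w = np.zeros((n_units, n_units))
        for p in patterns:              # each p is a vector of +1/-1 values
            w += np.outer(p, p)
        np.fill_diagonal(w, 0.0)        # no neuron feeds back to itself
        return w / n_patterns

    def recall(w, state, max_sweeps=100):
        """Update units one at a time until no unit changes (a stable state)."""
        state = state.copy()
        for _ in range(max_sweeps):
            changed = False
            for i in np.random.permutation(len(state)):
                new = 1 if w[i] @ state >= 0 else -1
                if new != state[i]:
                    state[i] = new
                    changed = True
            if not changed:
                return state            # fixed point: the network's output
        return state

    # Store one pattern, corrupt two bits, and let the feedback repair it.
    pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
    w = train(pattern)
    noisy = pattern[0].copy()
    noisy[:2] *= -1
    print(recall(w, noisy))             # usually recovers the stored pattern

Storing a pattern and then recalling it from a corrupted copy shows the stable state acting as the output: the feedback updates restore the flipped bits.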
References in periodicals archive
Using an improved Hopfield network, the above problem can be mapped to a dynamic circuit, and its solution can be obtained within the circuit time constant [2].
Connection weights must be specified before the improved Hopfield network can perform Finite Element Analysis (FEA) of the structure in real time [6].
Some specific areas examined include virtual reality simulation and analysis of handling stability for forest fire patrolling vehicles, experimental research on multimedia teaching for sports aerobics, a clustering algorithm in wireless networks, and a license plate recognition system based on an orthometric Hopfield network.
A layered network is an example of a feed-forward network, while a Hopfield network is an example of a feedback network.
Under this approach, the Hopfield network is presented together with its training method.
In the Hopfield network all the neurons are connected to one another; if we label every node as [x.
A survey on image processing with neural networks reported several types of neural networks that have been applied to perform image segmentation: multilayer perceptron, self-organizing maps, Hopfield networks, probabilistic neural networks, radial basis function networks, cellular neural networks, constraint satisfaction networks, and RAM-based neural networks (Egmont-Petersen et al.
Later, in the associative memory discussion, the reader trips over Hopfield networks and Hebb's rule.
Many ANN configurations and training algorithms have been used to build electronic noses, including back-propagation-trained, feed-forward networks; fuzzy ARTmaps; Kohonen's self-organizing maps; learning vector quantizers; Hamming networks; Boltzmann machines; and Hopfield networks (Keller et al.
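A standard textbook observation, not drawn from the excerpts above, makes the feedback behaviour they mention precise. Because the weights are symmetric and there are no self-connections, the network has an energy function that each single-neuron update can only lower or leave unchanged, so the dynamics must settle into a stable state:

    E(x) = -\frac{1}{2} \sum_{i \neq j} w_{ij} x_i x_j,
    \qquad
    \Delta E = -\left(x_i^{\mathrm{new}} - x_i^{\mathrm{old}}\right) \sum_j w_{ij} x_j \le 0

where unit i is updated to x_i^new = sign(sum_j w_ij x_j), so the sign of the update always matches the sign of the weighted input and the energy change is never positive.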