Hebbian learning

(artificial intelligence)
One of the earliest and most influential approaches to training a neural network; a kind of unsupervised learning; named after the Canadian neuropsychologist Donald O. Hebb.

The algorithm is based on Hebb's Postulate, which states that where one cell's firing repeatedly contributes to the firing of another cell, the magnitude of this contribution will tend to increase gradually with time. This means that what may start as little more than a coincidental relationship between the firing of two nearby neurons becomes strongly causal.
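The postulate is often summarized by the update rule Δw = η·x·y, in which a weight grows in proportion to the joint activity of the presynaptic input x and the postsynaptic output y. A minimal sketch (an illustration, not from the source; all names and values are hypothetical):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step for a single linear neuron with weights w:
    correlated pre- and postsynaptic activity strengthens the weight."""
    y = np.dot(w, x)          # postsynaptic activity
    return w + eta * x * y    # delta_w = eta * x * y

# Hypothetical example: repeatedly present one input pattern.
w = np.full(3, 0.1)               # small nonzero initial weights
x = np.array([1.0, 0.0, 1.0])     # the second input is never active
for _ in range(5):
    w = hebbian_update(w, x)
# Weights on the co-active inputs grow; the silent input's weight is unchanged.
```

Note that the plain rule only ever increases weights for co-active units, which is one reason practical variants add decay or normalization terms.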

Despite the limitations of Hebbian learning, e.g., its inability to learn certain classes of patterns, variants such as Signal Hebbian Learning and Differential Hebbian Learning are still in use.
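The two variants differ in what they correlate. As a rough sketch (an assumption-laden illustration, not a definitive formulation): Signal Hebbian Learning correlates the squashed signals of the two nodes, while Differential Hebbian Learning correlates the *changes* in their activations, so a link can also weaken when the nodes move in opposite directions.

```python
import numpy as np

def sigmoid(a):
    """Squashing function mapping an activation into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def signal_hebbian(w, x_pre, x_post, eta=0.05):
    """Signal Hebbian step: correlate the squashed signals of two nodes."""
    return w + eta * sigmoid(x_pre) * sigmoid(x_post)

def differential_hebbian(w, dx_pre, dx_post, eta=0.05):
    """Differential Hebbian step: correlate the *changes* in activation;
    opposite-direction changes drive the weight down."""
    return w + eta * dx_pre * dx_post
```

Because the differential rule is signed, it can capture the "fire out of sync, lose their link" half of the slogan that the plain rule cannot.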

http://neuron-ai.tuke.sk/NCS/VOL1/P3_html/node14.html
References in periodicals archive:
If the user responds similarly, then neurons representing BabyX's actions begin to associate with neurons responding to the user's action through a process called Hebbian learning.
In an approach to an answer, Arabi discussed concepts such as Hebbian learning (neurons that fire together wire together, neurons that fire out of sync lose their link) and spike timing dependent plasticity.
Keysers and Perrett (2004) suggest that mirror neurons work according to a Hebbian learning model.
Groumpos, Active Hebbian learning algorithm to train fuzzy cognitive maps, International Journal of Approximate Reasoning, 2004, 37:219-249.
A set of patterns is "imprinted" onto the network through Hebbian learning, and by activating a portion of the same pattern later, the full original pattern will emerge after a small number of cycles.
Hebbian learning of cognitive control: dealing with specific and nonspecific adaptation.
A Framework for Mesencephalic Dopamine Systems Based on Predictive Hebbian Learning.
Understanding failures of learning: Hebbian learning, competition for representational space, and some preliminary experimental data.
The model assumes that (1) the central nervous system probabilistically interprets proprioceptive information in real time to generate motor output, (2) sensorimotor pathways become more reliable with repetitive activation in a sort of Hebbian learning, and (3) normal sensory input sometimes elicits abnormal motor output following neurological injury because of disrupted neural organization.
All links were then strengthened using the Hebbian learning function.
Hebb's learning principle is called the Hebbian learning rule when implemented mathematically in PDP simulations.