back-propagation


(Or "backpropagation") A learning algorithm for training a feed-forward neural network so as to minimise a continuous "error function" or "objective function". Back-propagation is a "gradient descent" method of training: it uses gradient information to modify the network weights so as to decrease the value of the error function on subsequent presentations of the inputs. Other gradient-based methods from numerical analysis can be used to train networks more efficiently.
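The gradient-descent update described here can be sketched for a single linear neuron with a squared-error objective (an illustrative assumption; the entry does not fix a particular model, and the names below are ours):

```python
# Illustrative sketch only: one gradient-descent weight update for a
# single linear neuron minimising the squared error 0.5 * (output - target)^2.
def gradient_step(weights, inputs, target, lr=0.1):
    """Move the weights a small step down the error gradient."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - target  # d(0.5 * error^2) / d(output)
    # Each weight moves against its component of the gradient.
    return [w - lr * error * x for w, x in zip(weights, inputs)]

# Repeated updates drive the output toward the target.
w = [0.0, 0.0]
for _ in range(50):
    w = gradient_step(w, [1.0, 2.0], 5.0)
```

Each update shrinks the error on this example; repeating it is exactly the "decrease the value of the error function on subsequent presentations" behaviour described above.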

Back-propagation makes use of a mathematical trick when the network is simulated on a digital computer, yielding in just two traversals of the network (once forward, and once back) both the difference between the desired and actual output, and the derivatives of this difference with respect to the connection weights.
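The two-traversal trick can be sketched for a tiny one-hidden-layer network with sigmoid units and squared error (a minimal illustration under our own assumptions; the entry does not specify an architecture):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w_hid, w_out, x, target, lr=0.5):
    """One forward and one backward traversal of a 2-layer sigmoid network."""
    # --- forward traversal: compute activations layer by layer ---
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hid]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    # --- backward traversal: propagate error derivatives toward the inputs ---
    delta_out = (y - target) * y * (1.0 - y)  # dE/d(output pre-activation)
    delta_hid = [delta_out * w_out[j] * h[j] * (1.0 - h[j])
                 for j in range(len(h))]
    # Gradient-descent updates of the connection weights.
    w_out = [w - lr * delta_out * h[j] for j, w in enumerate(w_out)]
    w_hid = [[w - lr * delta_hid[j] * x[i] for i, w in enumerate(row)]
             for j, row in enumerate(w_hid)]
    return w_hid, w_out

def predict(w_hid, w_out, x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hid]
    return sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
```

The single backward pass delivers the derivatives of the output error with respect to every connection weight, which is the "mathematical trick" the entry refers to.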
References in periodicals archive
Learning perceptron neural network with backpropagation algorithm, Economic Computation and Economic Cybernetics Studies and Research 44(4): 37-54.
In addition, the 6-18-3-1 network trained by the feed-forward backpropagation method was introduced as a capable tool for forecasting the optimal length of a rock bolt.
The ANN trained by the backpropagation algorithm was able to learn the correlation of the penetration resistance with the soil bulk density and the water content.
Backpropagation is one of the most popular learning algorithms in neural networks and is derived to minimize the error by gradient descent.
Yu Xiaohu, Chen Guoan, Cheng Shixin (1995) Dynamic Learning Rate Optimization of the Backpropagation Algorithm, IEEE Transactions on Neural Networks, 6(3).
The validation in [10] was done by employing an error backpropagation neural network as classifier.
They are layered feedforward networks trained with the backpropagation algorithm (Rumelhart et al.
All trials in the training set were split such that only the first 80 percent of each trial was presented to the backpropagation algorithm.
To train the network, the backpropagation method is used; to shorten the time required for training, a second-order method called "stochastic diagonal Levenberg-Marquardt" [11] is applied.
Full training of the neural network is done with the backpropagation method with an adaptive learning rate, using the trainbpx function in the software package Matlab 2008b (Charalambous, 1992).
The most common training algorithm for multilayer neural networks is the backpropagation method, which adapts the weights over the neural network's training set.