back-propagation


(Or "backpropagation") A learning algorithm for modifying a feed-forward neural network which minimises a continuous "error function" or "objective function." Back-propagation is a "gradient descent" method of training in that it uses gradient information to modify the network weights to decrease the value of the error function on subsequent tests of the inputs. Other gradient-based methods from numerical analysis can be used to train networks more efficiently.

Back-propagation makes use of a mathematical trick, the chain rule of calculus, when the network is simulated on a digital computer: in just two traversals of the network (once forward, once back) it yields both the difference between the desired and actual output and the derivatives of this difference with respect to the connection weights.
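A sketch of the two traversals for a hypothetical two-layer sigmoid network with a squared-error objective (shapes, names, and architecture are assumptions for illustration):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One forward and one backward traversal of a two-layer network,
    # yielding both the output difference and the derivatives of the
    # squared error with respect to every connection weight.
    def forward_backward(w1, w2, x, target):
        h = sigmoid(x @ w1)                     # forward: hidden layer
        y = sigmoid(h @ w2)                     # forward: output layer
        diff = y - target                       # desired vs. actual output
        delta2 = diff * y * (1 - y)             # backward: output error signal
        grad_w2 = h.T @ delta2                  # dE/dw2
        delta1 = (delta2 @ w2.T) * h * (1 - h)  # backward: propagate to hidden
        grad_w1 = x.T @ delta1                  # dE/dw1
        return diff, grad_w1, grad_w2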
References in periodicals archive
The back-propagation training algorithm [5, 19, 20] differs from other algorithms in its weight-updating strategy.
In the current study, artificial neural network modelling was applied to determine the optimal coagulant rate in a water treatment plant using a back-propagation training method.
This study evaluates the level of sustainable development of railway transportation based on a back-propagation (BP) neural network.
Back-propagation Neural Network Adaptive Control of a Continuous Wastewater Treatment Process, Ind.
In particular, this research will apply a back-propagation (BP) NN model to predict design cost estimates for freeway pavement construction projects, using historical data on freeway construction projects in Henan Province as a case study.
The back-propagation neural network is an important technique for defining nonlinear transfer functions between continuous input values and one or more output values.
For the development process, a classification Multilayer Perceptron trained with a back-propagation algorithm was selected.
Section 2 briefly describes the non-parametric modelling approaches adopted here: an MLFF neural network with the back-propagation algorithm and a GMDH neural network with genetic algorithms.
Owing to its high efficiency, a back-propagation NN with one (or more) sigmoid-type hidden layers and a linear output layer can approximate any arbitrary (linear or nonlinear) function [8].
An MLP is an artificial neural network that can map sets of input data to a set of desired output values after being trained by a supervised learning technique known as the back-propagation algorithm [18].
By continuously adjusting the weight values and threshold values of the network via back-propagation, a BP neural network minimizes the sum of squared errors.
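As a hypothetical illustration of the loop described in that last excerpt, the sketch below adjusts weights and thresholds (biases) by back-propagation so that the sum of squared errors falls; the toy data, learning rate, and all names are assumptions, not taken from any of the cited studies:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data, assumed for illustration only.
    rng = np.random.default_rng(0)
    x = rng.random((8, 3))
    target = rng.random((8, 1))
    w = rng.standard_normal((3, 1))      # weight values
    b = np.zeros(1)                      # threshold (bias) values

    for epoch in range(1000):
        y = sigmoid(x @ w + b)           # forward pass
        diff = y - target
        sse = float(np.sum(diff ** 2))   # sum of squared errors being minimised
        delta = diff * y * (1 - y)       # back-propagated error signal
        w -= 0.5 * (x.T @ delta)         # adjust weight values
        b -= 0.5 * delta.sum(axis=0)     # adjust threshold values
        if epoch % 200 == 0:
            print(epoch, sse)            # error shrinks as training proceeds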