multilayer perceptron



A network composed of more than one layer of neurons, with some or all of the outputs of each layer connected to one or more of the inputs of another layer. The first layer is called the input layer, the last one is the output layer, and in between there may be one or more hidden layers.
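The layered structure described above can be sketched as a forward pass in NumPy. The layer sizes and the sigmoid activation below are illustrative choices, not part of the definition.

```python
# Minimal sketch of a multilayer perceptron forward pass:
# input layer -> one hidden layer -> output layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 4 inputs -> 5 hidden units -> 2 outputs (arbitrary sizes).
W1 = rng.normal(size=(4, 5))   # input-to-hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 2))   # hidden-to-output weights
b2 = np.zeros(2)

def mlp_forward(x):
    hidden = sigmoid(x @ W1 + b1)   # hidden layer activations
    return hidden @ W2 + b2         # output layer (linear here)

y = mlp_forward(rng.normal(size=4))
print(y.shape)  # prints (2,)
```

Each layer's outputs feed the next layer's inputs, exactly as in the definition; deeper networks simply chain more weight matrices.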
References in periodicals archive
Multilayer Perceptron Networks (MLP networks) and Radial Basis Function Networks (RBF networks) are two popular types of feedforward neural network.
Hence the current study uses a multilayer perceptron neural network to explore the relationship between job satisfiers (pay, promotion, supervisor, benefits, rewards, operating procedures, coworkers, the work itself, and communication) and work commitment, and also to identify the contribution of each individual job satisfier to work commitment, which is a novel approach and vital for the current business scenario.
This paper discusses 12 classifiers: decision trees, random forests, extremely randomized trees, multi-class AdaBoost, stochastic gradient boosting, support vector machines (both linear and nonlinear), k-nearest neighbors, a multi-class logistic classifier, a multilayer perceptron, a naive Bayesian classifier, and a conditional random fields classifier.
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs.
The findings from this study are consistent with that of Heazlewood and Keshishian (2010), suggesting that neural networks, specifically the multilayer perceptron (MLP) networks, are more effective in predicting group membership, and displayed higher predictive validity when compared to discriminant analysis.
The neural network used in this application has a Multilayer Perceptron architecture consisting of an input layer, a hidden layer and an output layer.
"The multilayer perceptron as an approximation to a Bayes optimal discriminant function," IEEE Trans. Neural Networks, TNN-1(4): pp.
A multilayer perceptron architecture with 15 linear inputs, 3 hidden logistic nodes, and one output (the HIV or AIDS status) was trained using 200 epochs with a learning rate of 0.
One of the most useful applications of neural networks to data analysis is the multilayer perceptron model (MLP).
A multilayer perceptron (MLP) with one hidden layer containing two (MLP(2)) or five (MLP(5)) units, with the logistic activation function for the hidden layer and a linear activation function for the output layer.
"On the capabilities of multilayer perceptrons," Journal of Complexity 3, pp. 331-342.
Restrictions refer to: the ANN type--a fully connected feedforward multilayer perceptron network; the image database (training set)--25 RGB images representing 15 diagnostics; the ANN architecture--39-X-15 with 3 layers of neurons, where X is the variable number of neurons in the hidden layer; the training process--15 cycles with 30.
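Several of the excerpts above describe the same recipe: logistic hidden units, a linear output layer, and training by epochs at a fixed learning rate. A toy version of that recipe, with all sizes, rates, and the regression task chosen purely for illustration, might look like:

```python
# Toy training loop for an MLP with logistic hidden units and a linear
# output, fitted by plain gradient descent (hand-written backpropagation).
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data: learn y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = (X ** 2).ravel()

# 1 input -> 5 logistic hidden units -> 1 linear output.
W1 = rng.normal(scale=0.5, size=(1, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(200):               # fixed number of epochs
    H = sigmoid(X @ W1 + b1)           # hidden layer activations
    pred = (H @ W2 + b2).ravel()       # linear output layer
    err = pred - y                     # residuals

    # Gradients of mean squared error, layer by layer.
    dW2 = H.T @ err[:, None] / len(X)
    db2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * H * (1 - H)   # sigmoid derivative
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = np.mean((pred - y) ** 2)
```

The learning rate, epoch count, and hidden-layer width play the same roles as the corresponding hyperparameters mentioned in the excerpts; in practice they are tuned per task.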
