The prediction was done using Multilayer Perceptron Neural Networks (MLPNN).
A Multilayer Perceptron, or MLP, model is made up of a layer N of input neurons, a layer M of output neurons, and one or more hidden layers, although it has been shown that for most problems a single layer L of hidden neurons is sufficient (Hornik, Stinchcombe, & White, 1989) (see Figure 3A).
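The single-hidden-layer structure described above can be sketched as a forward pass; the layer sizes, random weights, and sigmoid activation below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Minimal sketch of the MLP described above: N input neurons,
# a single hidden layer of L neurons, and M output neurons.
# Sizes, weights, and the sigmoid activation are assumptions.
rng = np.random.default_rng(0)

N, L, M = 4, 6, 2              # input, hidden, and output layer sizes
W1 = rng.normal(size=(L, N))   # input -> hidden weights
b1 = np.zeros(L)
W2 = rng.normal(size=(M, L))   # hidden -> output weights
b2 = np.zeros(M)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x):
    """One forward pass through the single-hidden-layer MLP."""
    h = sigmoid(W1 @ x + b1)   # hidden layer activations
    return sigmoid(W2 @ h + b2)  # output layer activations

y = mlp_forward(rng.normal(size=N))
print(y.shape)  # (2,)
```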
Looking at the classification models, the one that yielded the fewest false positives was the Multilayer Perceptron. However, although it improves classification performance on false positives, using appropriate cost matrices affects the error rate.
R., Steven K.R., Matthew K., Mark E.O., and Bruce W.S., "The multilayer perceptron as an approximation to a Bayes optimal discriminant function," IEEE Trans. Neural Networks, TNN-1(4): pp.
The network architecture, with 15 linear inputs, 3 hidden logistic nodes, and one output representing the HIV or AIDS status, was trained for 200 epochs with a learning rate of 0.1 and a momentum of 0.1.
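As a rough sketch, a configuration like the one above could be set up with scikit-learn's `MLPClassifier`; the original study's software is not named here, and the data below is a synthetic placeholder for the HIV/AIDS status labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: 100 samples with 15 linear input features and a
# binary status label (standing in for the HIV/AIDS status outcome).
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 15))
y = rng.integers(0, 2, size=100)

# Hyperparameters mirror the description above: 3 logistic hidden
# nodes, 200 epochs, learning rate 0.1, momentum 0.1.
clf = MLPClassifier(
    hidden_layer_sizes=(3,),
    activation="logistic",
    solver="sgd",
    learning_rate_init=0.1,
    momentum=0.1,
    max_iter=200,
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```

The random labels here mean the fitted model has no real predictive value; the block only illustrates how the stated hyperparameters map onto an off-the-shelf MLP implementation.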
Figure 2 presents a multilayer perceptron with multiple inputs and outputs.
Keywords: decision tree, lazy classifier, multilayer perceptron, K-means, hierarchical clustering
With the exception of very few reports, detailed model evaluation criteria have not yet been reported (Grzesiak and Zaborski, 2012; Ali et al., 2015). Therefore, the predictive accuracy of GLM (General Linear Model), CART (Classification and Regression Tree), CHAID (Chi-square Automatic Interaction Detector), Exhaustive CHAID, and MLP (Multilayer Perceptron), one of the ANN types, was compared using a large number of model assessment criteria presented in the materials and methods section.
Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol.
Probably the most widely used type of neural network is the Multilayer Perceptron. In this type of network, each neuron computes a weighted sum of its inputs and passes this value through a transfer function to produce its output.
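A single neuron of this kind, computing a weighted sum and then applying a transfer function, can be written in a few lines; the logistic sigmoid transfer function and the specific weights here are illustrative assumptions.

```python
import math

def neuron(inputs, weights, bias):
    """One MLP neuron: weighted sum of inputs, then a transfer function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-s))                       # logistic sigmoid

# Example with two inputs: s = 1.0*0.4 + 0.5*(-0.2) + 0.1 = 0.4
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 4))  # 0.5987, i.e. sigmoid(0.4)
```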
Among the types are the Multilayer Perceptron for prediction and classification, Self-Organizing Feature Maps for unsupervised clustering, and Recurrent Networks for understanding and forecasting time series.
Based on the criteria of Root Mean Square Error, Maximum Absolute Error, and the value of the objective function, it is found that Multilayer Perceptron models with logistic activation functions predict daily stock returns better than traditional Ordinary Least Squares and General Linear Regression models.