neural network
computer architecture modeled upon the human brain's interconnected system of neurons. Neural networks imitate the brain's ability to sort out patterns and learn from trial and error, discerning and extracting the relationships that underlie the data with which it is presented. Most neural networks are software simulations run on conventional computers. In neural computers, transistor circuits serve as the neurons and variable resistors act as the interconnection between axons and dendrites (see nervous system). A neural network on an integrated circuit, with 1,024 silicon "neurons," has also been developed. Each neuron in the network has one or more inputs and produces an output; each input has a weighting factor, which modifies the value entering the neuron. The neuron mathematically manipulates the inputs, and outputs the result. The neural network is simply neurons joined together, with the output from one neuron becoming input to others until the final output is reached. The network learns when examples (with known results) are presented to it; the weighting factors are adjusted, either through human intervention or by a programmed algorithm, to bring the final output closer to the known result.
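The learning cycle just described, inputs scaled by weighting factors, a mathematical manipulation, and adjustment toward a known result, can be sketched in a few lines of Python (the AND-gate task, learning rate, and simple error-correction rule here are illustrative assumptions, not part of the entry):

```python
# Minimal sketch of one artificial neuron learning the logical AND function.
# The task, learning rate, and error-correction rule are illustrative; real
# networks use many neurons and more sophisticated training algorithms.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def neuron_output(weights, bias, inputs):
    # Each input is modified by its weighting factor, then summed.
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Examples with known results: the AND truth table.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                      # repeated presentation of examples
    for inputs, target in examples:
        error = target - neuron_output(weights, bias, inputs)
        # Adjust each weighting factor to bring output closer to the target.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([neuron_output(weights, bias, i) for i, _ in examples])  # [0, 0, 0, 1]
```

After training, the final outputs match the known results for all four examples.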
Neural networks are good at providing very fast, very close approximations of the correct answer. Although they are not as well suited as conventional computers for performing mathematical calculations or moving and comparing alphabetic characters, neural networks excel at recognizing shapes or patterns, learning from experience, or sorting relevant data from irrelevant. Their applications can be categorized into classification, recognition and identification, assessment, monitoring and control, and forecasting and prediction. Among the tasks for which they are well suited are handwriting recognition, foreign language translation, process control, financial forecasting, medical data interpretation, artificial intelligence research, and parallel-processing implementations of conventional processing tasks. In an ironic reversal, neural networks are being used to model disorders of the brain in an effort to discover better therapeutic strategies.
See Y. Burnod, An Adaptive Neural Network: The Cerebral Cortex (1990); J. S. Judd, Neural Network Design and the Complexity of Learning (1990); S. I. Gallant, Neural Network Learning and Expert Systems (1993); L. Medsker, Hybrid Neural Network and Expert Systems (1994); R. L. Harvey, Neural Network Principles (1994).
neural network [′nu̇r·əl ′net‚wərk]
An information-processing device that consists of a large number of simple nonlinear processing modules, connected by elements that have information storage and programming functions. The field of neural networks is an emerging technology in the area of machine information processing and decision making. The main thrusts are toward highly innovative machine and algorithmic architectures, radically different from those that have been employed in conventional digital computers. The information-processing elements and components of neural networks, inspired by neuroscientific studies of the structure and function of the human brain, are conceptually simple. Four broad categories of neural-network architectures have been formulated that exhibit highly complex information-processing capabilities. Several generic models have been advanced which offer distinct advantages over traditional digital-computer implementation. Neural networks have created an unusual amount of interest in the engineering and industrial communities by opening up new research directions and commercial and military applications.
Automated information processing is achieved by means of modules that in general involve four functions: input/output (getting in and out of the machine), processing (executing prescribed specific information-handling tasks), memory (storing information), and connections between different modules providing for information flow and control. Neural networks contain a very large number of simple processing modules. This contrasts with traditional digital computers, which contain a small number of complex processing modules that are rather sophisticated in the sense that they are capable of executing very large sets of prescribed arithmetic and logical tasks (instructions). In conventional digital computers, the four functions listed above are carried out by separate dedicated machine units. In neural networks information storage is achieved by components which at the same time effect connections between distinct machine units. These key distinctions between the neural-network and the digital computer architectures are of a fundamental nature and have major implications in machine design and in machine utilization.
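The point that a neural network's connections double as its memory can be made concrete with a small sketch (the sizes and weight values below are invented for illustration):

```python
# Sketch of the architectural point above: in a neural network the same
# array of weights both connects the processing elements and stores the
# learned information; there is no separate dedicated memory unit.
# Sizes and values are illustrative assumptions.

inputs = [1.0, 0.0, 1.0]

# weights[i][j]: strength of the connection from input j to neuron i.
# This one table is simultaneously the "wiring" (which units feed which)
# and the "memory" (what the network has learned).
weights = [
    [0.5, -0.25, 0.25],
    [0.25, 0.75, -0.5],
]

# Processing is uniform and simple: every neuron forms a weighted sum.
outputs = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
print(outputs)  # [0.75, -0.25]
```

Changing what the network "knows" means changing the weight table, which is the same thing as rewiring its connections.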
The information-processing properties of neural networks depend mainly on two factors: the network topology (the scheme used to connect elements or nodes together), and the algorithm (the rules) employed to specify the values of the weights connecting the nodes. While the ultimate configuration and parameter values are problem-specific, it is possible to classify neural networks, on the basis of how information is stored or retrieved, in four broad categories: neural networks behaving as learning machines with a teacher; neural networks behaving as learning machines without a teacher; neural networks behaving as associative memories; and neural networks that contain analog as well as digital devices and result in hybrid-machine implementations that integrate complex continuous dynamic processing and logical functions. Within these four categories, several generic models have found important applications, and still others are under intensive investigation.
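Of the four categories, the associative-memory behavior is perhaps the simplest to illustrate. The sketch below implements a Hopfield-style associative memory, one classic generic model; the stored pattern, its size, and the corrupted cue are illustrative assumptions:

```python
# Minimal sketch of a Hopfield-style associative memory: the network stores
# a pattern in its connection weights and retrieves it from a corrupted cue.
# The pattern and cue are illustrative; real uses store many patterns.

def sign(x):
    return 1 if x >= 0 else -1

stored = [1, -1, 1, -1, 1, -1]          # the pattern to memorize (+/-1 units)
n = len(stored)

# Hebbian storage rule: weight W[i][j] records whether units i and j agree
# in the stored pattern; no self-connections.
W = [[0 if i == j else stored[i] * stored[j] for j in range(n)]
     for i in range(n)]

# Present a corrupted cue (two units flipped) and iterate to a fixed point.
state = [-1, -1, 1, 1, 1, -1]
for _ in range(5):                       # a few synchronous update sweeps
    state = [sign(sum(W[i][j] * state[j] for j in range(n))) for i in range(n)]

print(state == stored)  # True: the network recalls the stored pattern
```

Retrieval here is content-addressed: the cue itself, not a memory address, selects what is recalled.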
Neural-network research is developing a new conceptual framework for representing and utilizing information, which will result in a significant advance in information epistemology. Communication technology is based on the notions of coding and channel capacity (bits per second), which provide the conceptual framework for information representation appropriate to machine-based communication. Neural-network systems (biological or artificial) do not store information or process it in the way that conventional digital computers do. Specifically, the basic unit of neural-network operation is not based on the notion of the instruction but on the connection. The performance of a neural network depends directly on the number of connections per second that it effects, and thus its performance is better understood in terms of its connections-per-second (CPS) capability. See Information theory
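The connections-per-second measure lends itself to back-of-the-envelope arithmetic; the layer sizes and pass rate below are invented for illustration:

```python
# Back-of-the-envelope illustration of the connections-per-second (CPS)
# performance measure. The layer sizes and pass rate are invented numbers
# chosen for illustration only.

layer_sizes = [256, 128, 10]            # units in each layer of a small net

# In a fully connected feedforward net, every adjacent pair of layers
# contributes (units_in * units_out) connections to one forward pass.
connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(connections)                       # 256*128 + 128*10 = 34048

passes_per_second = 1000                 # assume 1,000 forward passes/second
cps = connections * passes_per_second
print(cps)                               # 34,048,000 connections per second
```

Where a conventional machine is rated in instructions per second, a neural network is rated by how many such weighted connections it can evaluate per second.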