distributed memory




(architecture)
The kind of memory in a parallel processor where each processor has fast access to its own local memory, and where accessing another processor's memory requires sending a message over the inter-processor network.

Opposite: shared memory.
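
For illustration, here is a minimal sketch of the message-passing style this implies, written in C with MPI (the library choice is an assumption; the entry itself names no particular interface). Each process reads and writes its own local array directly, while rank 1 can only obtain rank 0's data by receiving an explicit message sent over the inter-processor network.

    /* Minimal sketch, assuming an MPI implementation is available.
       Compile with mpicc and run with, e.g., mpirun -np 2. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local[4] = {rank, rank, rank, rank};  /* fast local memory */

        if (rank == 0) {
            /* Other processors cannot read this memory directly;
               the data must travel as a message over the network. */
            MPI_Send(local, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            double remote[4];
            MPI_Recv(remote, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %f from rank 0\n", remote[0]);
        }

        MPI_Finalize();
        return 0;
    }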
References in periodicals archive
They present an approach to robot localization and navigation that uses visual memories stored in a kind of associative memory called sparse distributed memory.
Germany and China--employ more than 100,000 processor cores each, multiple processor types in most cases, and distributed memory architectures.
Also, distributed memory machines often do not possess enough memory on a processing node to hold all of the data for a large job.
Such distributed memory processes add overhead in terms of time, cost and complexity, and have the potential to corrupt the integrity of the results.
It has features that simplify development of large or complex applications and offers unrivaled support for applications that use message passing, multithreading, OpenMP and other parallel programming techniques, enabling them to run on both distributed memory and shared memory multiprocessor computers.
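
As a rough illustration of the hybrid style described in the preceding excerpt (message passing across distributed memory combined with OpenMP threading within each shared-memory node), the following C sketch assumes an MPI implementation and an OpenMP-capable compiler; it is not drawn from the product being described.

    /* Hybrid sketch: MPI carries messages between distributed-memory
       processes; OpenMP threads share memory within each process.
       Assumes something like: mpicc -fopenmp, run with mpirun -np 4. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local_sum = 0.0;

        /* Shared-memory parallelism: threads within one process. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = rank; i < 1000000; i += size)
            local_sum += 1.0 / (double)(i + 1);

        /* Distributed-memory parallelism: combine per-process results
           by message passing. */
        double total = 0.0;
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("harmonic partial sum: %f\n", total);

        MPI_Finalize();
        return 0;
    }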
The network processor technology combines multiple processing engines, software pipelining techniques and distributed memory caching mechanisms, eliminating the need for specialised co-processors for determining the flow of data packets into and out of a network processor, which in turn increases performance.
Now with the availability of the Distributed Memory Parallel version of PAM-FLOW, this level of performance is available for most UNIX-based design environments.
Further out, IBM will offer LA ccNUMA as a way of providing a conventional shared memory interface, which is more familiar to developers, on a distributed memory or message passing system, with no special hardware support.
This trend, coupled with physically distributed memory architectures, is leading to very nonuniform memory access times, with latencies ranging from a couple of processor cycles for data in cache to hundreds of thousands of cycles.
The stream manager on the network interface is responsible for efficient data movement of multiple streams from the interconnect of the distributed memory to the network.
But the notion of a widely distributed memory system, he says, takes into account the fact that messages in the central nervous system do not travel a straight path; each neuron typically receives information from many surrounding neurons and transmits messages to many other neurons, including some or all of those from which it receives input.
Designed to simplify and speed the identification and correction of errors in sophisticated distributed memory sharing applications, the PEAZ-5565 complements GE's extensive range of Reflective Memory solutions.
