memory hierarchy



memory hierarchy

[′mem·rē ′hī·ər‚är·kē]
(computer science)
A ranking of computer memory devices, with devices having the fastest access time at the top of the hierarchy, and devices with slower access times but larger capacity and lower cost at lower levels.

memory hierarchy

The levels of memory in a computer, from fastest to slowest (a short sketch after the list illustrates the speed gap between levels):

1. CPU registers
2. L1 cache
3. L2 cache
4. Main memory
5. Virtual memory
6. Disk
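
The latency differences between these levels are large enough to observe directly from user code. The following is a minimal sketch, not part of either definition, assuming a POSIX system with clock_gettime; the helper name chase, the buffer sizes, and the hop count are arbitrary illustrative choices. It chases a random pointer chain through working sets of increasing size, so small sets stay in cache and large sets fall through to main memory, and the measured time per access grows accordingly.

    /* Sketch: make the memory hierarchy visible by timing dependent loads
     * through working sets of increasing size. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double chase(size_t n_elems, size_t hops)
    {
        size_t *next = malloc(n_elems * sizeof *next);
        if (!next) { perror("malloc"); exit(1); }

        /* Build a random single-cycle permutation (Sattolo's algorithm)
         * so the hardware prefetcher cannot predict the access pattern. */
        for (size_t i = 0; i < n_elems; i++) next[i] = i;
        for (size_t i = n_elems - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (size_t i = 0; i < hops; i++) p = next[p];   /* dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        free(next);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Use p in the result so the loop cannot be optimized away. */
        return ns / hops + (p == (size_t)-1);
    }

    int main(void)
    {
        srand(1);
        /* Working sets from 16 KiB (fits in L1) up to 64 MiB (main memory). */
        for (size_t kib = 16; kib <= 64 * 1024; kib *= 4) {
            size_t n = kib * 1024 / sizeof(size_t);
            printf("%8zu KiB  ~%.1f ns per access\n",
                   kib, chase(n, 10 * 1000 * 1000));
        }
        return 0;
    }

On typical hardware the per-access time reported for the smallest working sets is a few nanoseconds (cache hits), while the largest working sets take tens of nanoseconds or more per access (main-memory latency), which is exactly the ranking described above.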
References in periodicals archive:
Since performance in current multicores is limited by their power budget, it is imperative to make multicores as energy-efficient as possible to increase performance even further. In a multicore architecture, the cache coherence protocol is an essential component, since its unique but challenging role is to offer a simple and unified view of the memory hierarchy. This project envisions that extending the role of the coherence protocol to simplify other system components will be the key to overcoming the performance and energy limitations of current multicores.
KSL's Shaheen II supercomputer is a Cray system composed of 6,174 nodes representing 197,568 processor cores tightly integrated with a richly layered memory hierarchy and interconnection network.
Regarding the memory hierarchy, each streaming multiprocessor (SM) contains a set of 32-bit registers, together with a shared memory area that can be easily accessed by every core of the streaming multiprocessor but is not visible to the other streaming multiprocessors.
Even though the specific details may change, it is important to know some rudiments about how cache memory works, how the memory hierarchy is structured, how registers are used in the CPU, and, most recently, how vector registers are different and what you can accomplish with them.
The company is also focusing on throughput with Zen, with 8 MB of L3 cache and a new cache memory hierarchy.
Thus the memristor, as a new memory structure different from the conventional memory hierarchy, can be used to achieve greater power efficiency.
His topics include perspectives on parallel programming, parallel programming for linked data structures, memory hierarchy organization, basic cache coherence issues, memory consistency models, and interconnection network architecture.
This approach uniquely takes advantage of innovations and design choices, such as the memory hierarchy and pipeline configuration, which can directly influence the throughput of computational operations.
They also use the memory hierarchy to allow processing of a large amount of data with limited on-chip memory.
The architecture of Intel Processors Graphics delivers a full complement of high-throughput floating-point and integer compute capabilities, a layered high-bandwidth memory hierarchy, and deep integration with on-die CPUs.
Equation 4 shows the improvement factor of the memory hierarchy whenever there is a cache hit for a one-level cache memory.
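The cited Equation 4 is not reproduced in the snippet. As general context only, a standard timing model for a single-level cache, with assumed symbols h (hit ratio), t_c (cache hit time), and t_m (main-memory access time), gives an effective access time and an improvement factor over uncached access of:

\[
t_{\text{eff}} = h\,t_c + (1 - h)\,(t_c + t_m),
\qquad
\text{improvement factor} = \frac{t_m}{t_{\text{eff}}}
\]

For example, with h = 0.95, t_c = 1 ns, and t_m = 100 ns, the effective access time is about 6 ns, roughly a 17x improvement over accessing main memory directly.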
Their topics include memory hierarchy for multicore and many-core processors, the Cilk and Cilk++ programming languages, efficient Aho-Corasick string matching on emerging multicore architectures, sorting on a graphics processing unit (GPU), evaluating multicore processors and accelerators for dense numerical computations, and backprojection algorithms for multicore and GPU architectures.