memory hierarchy


memory hierarchy

[′mem·rē ′hī·ər‚är·kē]
(computer science)
A ranking of computer memory devices, with devices having the fastest access time at the top of the hierarchy, and devices with slower access times but larger capacity and lower cost at lower levels.

memory hierarchy

The levels of memory in a computer. From fastest to slowest, they are (a short measurement sketch follows the list):

1. CPU registers
2. L1 cache
3. L2 cache
4. Main memory
5. Virtual memory
6. Disk
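Neither entry gives code, but the ranking above is easy to observe directly. The sketch below is an illustration added here, not taken from either dictionary: it chases pointers through a randomly permuted array of growing size, so each load depends on the previous one and cannot be hidden by prefetching. Once the working set outgrows L1, then L2 and the last-level cache, the measured time per access rises at each boundary. It assumes a C compiler and the POSIX clock_gettime; the working-set sizes are arbitrary choices.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t g_sink;             /* keeps the chase from being optimized away */
static uint64_t rng = 88172645463325252ULL;

static uint64_t xorshift64(void)           /* small PRNG, avoids RAND_MAX limits */
{
    rng ^= rng << 13;
    rng ^= rng >> 7;
    rng ^= rng << 17;
    return rng;
}

/* Average nanoseconds per dependent load when chasing pointers through a
 * working set of n slots (n * sizeof(size_t) bytes). */
static double ns_per_access(size_t n, long steps)
{
    size_t *next = malloc(n * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }

    for (size_t i = 0; i < n; i++)
        next[i] = i;
    /* Sattolo's algorithm: shuffle into a single random cycle, so every
     * load depends on the one before it. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)(xorshift64() % i);
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    size_t idx = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < steps; s++)
        idx = next[idx];                   /* one dependent load per step */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    g_sink = idx;
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / steps;
}

int main(void)
{
    /* Working sets from 32 KiB (inside a typical L1) up to 128 MiB (beyond
     * most last-level caches). */
    for (size_t kib = 32; kib <= 128 * 1024; kib *= 4) {
        size_t n = kib * 1024 / sizeof(size_t);
        printf("%8zu KiB  %6.1f ns/access\n", kib, ns_per_access(n, 10000000L));
    }
    return 0;
}

Built with something like gcc -O2 chase.c, the output typically shows a few nanoseconds per access for the smallest working sets and roughly an order of magnitude more once the set only fits in main memory, which is exactly the speed-versus-capacity trade-off the definitions describe.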
References in periodicals archive:
KSL's Shaheen II supercomputer is a Cray system composed of 6,174 nodes representing 197,568 processor cores tightly integrated with a richly layered memory hierarchy and interconnection network.
Objective: The ideal processor building block is a power and cost-efficient core that can maximize the extraction of memory hierarchy parallelism, a combination that neither traditional in-order nor out-of-order cores provide.
Even though the specific details may change, it is important to know some rudiments about how cache memory works, how the memory hierarchy is structured, how registers are used in the CPU, and, most recently, how vector registers are different and what you can accomplish with them.
The company is also focusing on throughput with Zen, with an 8MB L3 cache and a new cache memory hierarchy.
So the memristor, as a new memory structure different from the conventional memory hierarchy, can be used to achieve greater power efficiency.
His topics include perspectives on parallel programming, parallel programming for linked data structures, memory hierarchy organization, basic cache coherence issues, memory consistency models, and interconnection network architecture.
This approach uniquely takes advantage of innovations and design choices such as memory hierarchy and pipeline configuration which can directly influence the throughput of computational operations.
They also use a memory hierarchy to allow processing of large amounts of data with limited on-chip memory.
The architecture of Intel Processors Graphics delivers a full complement of high-throughput floating-point and integer compute capabilities, a layered high-bandwidth memory hierarchy, and deep integration with on-die CPUs.
Equation 4 shows the improvement factor of the memory hierarchy whenever there is a cache hit for a single-level cache memory.
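The excerpt does not reproduce Equation 4 itself; for a single cache level, such an improvement factor is conventionally written in terms of the hit ratio h and the cache and main-memory access times, shown here only as an assumed illustration:

% Assumed illustration, not the excerpt's Equation 4:
% average access time with one cache level, and the resulting
% improvement over uncached main-memory access.
T_{\text{avg}} = h\,t_{\text{cache}} + (1 - h)\,t_{\text{mem}},
\qquad
\text{Improvement} = \frac{t_{\text{mem}}}{T_{\text{avg}}}
                   = \frac{t_{\text{mem}}}{h\,t_{\text{cache}} + (1 - h)\,t_{\text{mem}}}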
Their topics include memory hierarchy for multicore and many-core processors, the Cilk and Cilk++ programming languages, efficient Aho-Corasick string matching on emerging multicore architectures, sorting on a graphics processing unit (GPU), evaluating multicore processors and accelerators for dense numerical computations, and backprojection algorithms for multicore and GPU architectures.
The Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks.