cache



cache

(Computing) a small, high-speed memory that improves computer performance.

cache

[kash]
(computer science)
A small, fast storage buffer integrated in the central processing unit of some large computers.

cache

(memory management)
/kash/ A small, fast memory holding recently accessed data, designed to speed up subsequent access to the same data. Most often applied to processor-memory access, but also used for a local copy of data accessible over a network, etc.

When data is read from, or written to, main memory a copy is also saved in the cache, along with the associated main memory address. The cache monitors addresses of subsequent reads to see if the required data is already in the cache. If it is (a cache hit) then it is returned immediately and the main memory read is aborted (or not started). If the data is not cached (a cache miss) then it is fetched from main memory and also saved in the cache.
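The behaviour just described can be summarised in a few lines of code. The following is a minimal read-through sketch in Python; the dicts standing in for the cache and main memory, and the function names, are illustrative only, not any particular hardware interface.

    MAIN_MEMORY = {}   # address -> data (stands in for slow main memory)
    CACHE = {}         # address -> data (stands in for the fast cache)

    def read(address):
        if address in CACHE:              # cache hit: main memory not touched
            return CACHE[address]
        data = MAIN_MEMORY[address]       # cache miss: fetch from main memory
        CACHE[address] = data             # ...and save a copy, keyed by address
        return data

    def write(address, data):
        CACHE[address] = data             # a copy is saved in the cache
        MAIN_MEMORY[address] = data       # write-through behaviour, for simplicity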

The cache is built from faster memory chips than main memory so a cache hit takes much less time to complete than a normal memory access. The cache may be located on the same integrated circuit as the CPU, in order to further reduce the access time. In this case it is often known as primary cache since there may be a larger, slower secondary cache outside the CPU chip.

The most important characteristic of a cache is its hit rate: the fraction of all memory accesses that are satisfied from the cache. This depends partly on the cache design but mostly on its size relative to main memory, and the size is limited by the cost of fast memory chips.
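The effect of the hit rate on performance can be made concrete with a short calculation. The access times below are assumed for illustration, not taken from the text:

    t_cache, t_memory = 1.0, 60.0   # access times in ns (assumed figures)
    hit_rate = 0.95
    t_avg = hit_rate * t_cache + (1 - hit_rate) * t_memory
    print(t_avg)                    # 3.95 ns: close to cache speed, even
                                    # though main memory is 60 times slower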

The hit rate also depends on the access pattern of the particular program being run (the sequence of addresses read and written). Caches rely on two properties of most programs' access patterns: temporal locality (if something is accessed once, it is likely to be accessed again soon) and spatial locality (if one memory location is accessed, nearby locations are likely to be accessed as well). To exploit spatial locality, caches often operate on several words at a time, a "cache line" or "cache block", and main memory reads and writes are whole cache lines.
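As a rough illustration of the two kinds of locality, consider traversing a two-dimensional array in different orders. (Python's nested lists are not laid out contiguously, so this sketch only shows the access pattern; in a language like C the cache effect is directly measurable.)

    N = 1024
    grid = [[0] * N for _ in range(N)]

    # Row-by-row traversal: consecutive accesses touch neighbouring
    # elements, so each fetched cache line is fully used (good spatial
    # locality); 'total' is reused every iteration (temporal locality).
    total = 0
    for row in range(N):
        for col in range(N):
            total += grid[row][col]

    # Column-by-column traversal strides a whole row between accesses;
    # on real hardware most of each fetched cache line is wasted.
    total = 0
    for col in range(N):
        for row in range(N):
            total += grid[row][col]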

When the processor wants to write to main memory, the data is first written to the cache on the assumption that the processor will probably read it again soon. Various policies are used. In a write-through cache, data is written to main memory at the same time as it is cached. In a write-back cache, it is written to main memory only when it is forced out of the cache.

If all accesses were writes then, with a write-through policy, every write to the cache would necessitate a main memory write, slowing the system down to main memory speed. Statistically, however, most accesses are reads, and most of those are satisfied from the cache. Write-through is simpler than write-back: an entry that is to be replaced can just be overwritten in the cache, since it will already have been copied to main memory. Write-back, by contrast, requires the cache to initiate a main memory write of the flushed entry, followed (for a processor read) by a main memory read. However, write-back is more efficient because an entry may be written many times in the cache without any main memory access.
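A minimal sketch of the two policies, with class and method names invented for illustration:

    class WriteThroughCache:
        def __init__(self, memory):
            self.memory = memory          # dict standing in for main memory
            self.lines = {}

        def write(self, addr, data):
            self.lines[addr] = data
            self.memory[addr] = data      # main memory updated on every write

    class WriteBackCache:
        def __init__(self, memory):
            self.memory = memory
            self.lines = {}
            self.dirty = set()            # addresses not yet in main memory

        def write(self, addr, data):
            self.lines[addr] = data
            self.dirty.add(addr)          # main memory write deferred

        def flush(self, addr):
            if addr in self.dirty:        # written back only when forced out
                self.memory[addr] = self.lines[addr]
                self.dirty.discard(addr)
            self.lines.pop(addr, None)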

When the cache is full and another line of data is to be cached, an existing cache entry is selected to be written back to main memory, or "flushed", and the new line is put in its place. Which entry is flushed is determined by a "replacement algorithm".
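One common replacement algorithm is least-recently-used (LRU), which flushes the entry that has gone unaccessed the longest. A sketch, using Python's OrderedDict to track recency; the capacity and the fetch callback are illustrative:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = OrderedDict()    # least recently used entry first

        def access(self, addr, fetch):
            if addr in self.lines:                    # hit: mark as fresh
                self.lines.move_to_end(addr)
                return self.lines[addr]
            if len(self.lines) >= self.capacity:      # full: flush the least
                self.lines.popitem(last=False)        # recently used entry
            data = fetch(addr)                        # miss: fetch the line
            self.lines[addr] = data
            return data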

Some processors have separate instruction and data caches. Both can be active at the same time, allowing an instruction fetch to overlap with a data read or write. This separation also avoids the possibility of a bad cache conflict between, say, the instructions in a loop and an array of data accessed by that loop.

See also direct mapped cache, fully associative cache, sector mapping, set associative cache.

cache

(1) To store data locally in order to speed up subsequent retrievals. Pronounced "cash." See Web cache and browser cache.

(2) Reserved areas of memory (RAM) in every computer that are used to speed up processing. Pronounced "cash," they serve as high-speed staging areas that are constantly filled with the next set of instructions or data. Caches have faster input/output than the areas that feed them. For example, memory caches are high-speed memory, which is faster than main memory, and disk caches are main memory, which is faster than disk.

Memory Caches


A memory cache, also called a "CPU cache," is a memory bank that bridges main memory and the processor. Built from static RAM (SRAM) chips, which are faster than the dynamic RAM (DRAM) used for main memory, the cache allows instructions to be executed and data to be read and written at higher speed. Instructions and data are transferred from main memory to the cache in fixed blocks, known as cache "lines," using a look-ahead algorithm. See cache line, static RAM and dynamic RAM.

Temporal and Spatial (Time and Space)
Caches take advantage of "temporal locality," whereby unchanging data constants such as high-low limits, messages and column headers are used over and over again. Caches also benefit from "spatial locality," because the next instruction to be executed or the next set of data to be processed is often next in line. The more sequential they are, the greater the chance for a "cache hit." If the next item is not in the cache, a "cache miss" occurs, and it must be retrieved from slower main memory.

Levels 1, 2 and 3 (L1, L2, L3)
Today's CPU chips contain two or three caches, with L1 being the fastest. Each subsequent cache is slower and larger than L1, and instructions and data are staged from main memory to L3 to L2 to L1 to the processor. On multicore chips, the L3 cache is generally shared among all the processing cores. See write-back cache and write-through cache.
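The staging through the levels can be sketched as a chain of lookups. This assumes an inclusive hierarchy in which a fetched line is copied into every level on the way back to the processor; real designs vary:

    def staged_read(addr, l1, l2, l3, memory):
        # l1, l2 and l3 are dicts standing in for the three cache levels,
        # checked fastest-first.
        for level in (l1, l2, l3):
            if addr in level:
                return level[addr]
        data = memory[addr]                     # missed every level
        l3[addr] = l2[addr] = l1[addr] = data   # stage into each level
        return data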


Memory Cache Hierarchy
The whole idea is to keep staging more instructions and data in a memory that is closer to the speed of the processor. The caches are generally built into the CPU chip. See L2 cache.

Disk Caches


A disk cache is a dedicated block of memory (RAM) in the computer or in the drive controller that bridges storage and CPU. When the disk or SSD is read, a larger block of data is copied into the cache than is immediately required. If subsequent reads find the data already stored in the cache, there is no need to retrieve it from storage, which is slower to access.

If the cache is used for writing, data is queued up at high speed and then written to storage during idle machine cycles by the caching program or the drive controller. See cache coherency, write-back cache, write-through cache, pipeline burst cache, lookaside cache, inline cache, backside cache and NV cache.
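A sketch of block-granularity read caching as described above; the block size, the file handling and the omission of reads spanning a block boundary are all simplifications for illustration:

    BLOCK_SIZE = 4096
    block_cache = {}   # block number -> bytes

    def cached_read(f, offset, length):
        # Reads that span a block boundary are omitted for brevity.
        block_no = offset // BLOCK_SIZE
        if block_no not in block_cache:               # miss: copy a larger
            f.seek(block_no * BLOCK_SIZE)             # block into the cache
            block_cache[block_no] = f.read(BLOCK_SIZE)  # than required
        start = offset % BLOCK_SIZE
        return block_cache[block_no][start:start + length]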


Disk Cache
Disk caches are usually a part of main memory comprising common dynamic RAM (DRAM) chips, whereas memory caches (CPU caches) use higher-speed static RAM (SRAM) chips.