caching


cache

(1) To store data locally in order to speed up subsequent retrievals. Pronounced "cash." See Web cache and browser cache.

(2) Reserved areas of memory (RAM) in every computer that are used to speed up processing. They serve as high-speed staging areas that are constantly filled with the next set of instructions or data, and they have faster input/output than the areas that feed them. For example, memory caches are built from high-speed memory that is faster than main memory, and disk caches are carved out of main memory, which is faster than disk.
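In either case, the pattern is the same: keep a local copy in the faster tier and go to the slower source only when the copy is missing. A minimal sketch of this pattern in Python (fetch_from_source is a hypothetical stand-in for the slower source, not a real API):

    # A minimal caching sketch: keep a local copy of anything already
    # retrieved so that repeat requests skip the slow source.
    import time

    _cache: dict[str, str] = {}        # the local, fast copy (Python 3.9+ syntax)

    def fetch_from_source(key: str) -> str:
        time.sleep(0.1)                # simulate the slow retrieval
        return f"value for {key}"

    def cached_get(key: str) -> str:
        if key in _cache:              # cache hit: answer from the local copy
            return _cache[key]
        value = fetch_from_source(key) # cache miss: go to the slow source
        _cache[key] = value            # store locally for next time
        return value

The first call for a given key pays the full cost of the slow retrieval; every later call for the same key is answered from the local copy.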

Memory Caches


A memory cache, also called a "CPU cache," is a memory bank that bridges main memory and the processor. Built from static RAM (SRAM) chips, which are faster than the dynamic RAM (DRAM) used for main memory, the cache allows instructions to be executed and data to be read and written at higher speed. Instructions and data are transferred from main memory to the cache in fixed blocks, known as cache "lines," using a look-ahead algorithm. See cache line, static RAM and dynamic RAM.
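As a rough illustration of line-based filling (the line size, memory size and names below are made up for the example and do not describe any particular CPU):

    # A sketch of cache-line filling: a miss copies a whole fixed-size
    # line from "main memory," so later reads of nearby addresses hit.
    LINE_SIZE = 64                    # bytes per line (illustrative)

    main_memory = bytearray(4096)     # stand-in for main memory
    lines: dict[int, bytes] = {}      # line number -> cached copy of that line

    def read_byte(addr: int) -> int:
        line_no = addr // LINE_SIZE
        if line_no not in lines:      # miss: fetch the entire line
            start = line_no * LINE_SIZE
            lines[line_no] = bytes(main_memory[start:start + LINE_SIZE])
        return lines[line_no][addr % LINE_SIZE]   # hit within the cached line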

Temporal and Spatial (Time and Space)
Caches take advantage of "temporal locality," whereby recently used items, such as unchanging constants (high-low limits, messages, column headers), are used over and over again. Caches also benefit from "spatial locality," because the next instruction to be executed or the next set of data to be processed is often next in line. The more sequential the accesses are, the greater the chance of a "cache hit." If the next item is not in the cache, a "cache miss" occurs, and the item must be retrieved from slower main memory.
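A small simulation makes the effect of locality visible; the cache size, line size and access patterns here are illustrative only:

    # A sketch comparing hit rates for sequential (spatially local)
    # and scattered accesses through a tiny least-recently-used cache.
    import random
    from collections import OrderedDict

    LINE_SIZE = 64       # addresses per cached line (illustrative)
    CACHE_LINES = 8      # number of lines the cache can hold (illustrative)

    def hit_rate(addresses: list[int]) -> float:
        cache: OrderedDict[int, None] = OrderedDict()   # LRU set of line numbers
        hits = 0
        for addr in addresses:
            line = addr // LINE_SIZE
            if line in cache:                  # cache hit
                hits += 1
                cache.move_to_end(line)        # mark as recently used
            else:                              # cache miss
                cache[line] = None
                if len(cache) > CACHE_LINES:
                    cache.popitem(last=False)  # evict the least recently used line
        return hits / len(addresses)

    sequential = list(range(4096))                                # high locality
    scattered = [random.randrange(1 << 20) for _ in range(4096)]  # low locality
    print(hit_rate(sequential))   # about 0.98: almost every access hits
    print(hit_rate(scattered))    # near 0: almost every access misses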

Levels 1, 2 and 3 (L1, L2, L3)
Today's CPU chips contain two or three levels of cache, with L1 being the fastest and smallest. Each subsequent level is slower and larger than the one before it, and instructions and data are staged from main memory to L3 to L2 to L1 to the processor. On multicore chips, the L3 cache is generally shared among all the processing cores. See write-back cache and write-through cache.
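The staging order can be sketched as a fall-through lookup; the levels are plain tables here, and real caches add fixed sizes and eviction policies:

    # A sketch of a multi-level lookup: a load falls through L1 -> L2 -> L3
    # -> main memory, and the value is staged into each level on the way back.
    MAIN_MEMORY = {addr: addr * 2 for addr in range(1024)}   # stand-in contents

    l1: dict[int, int] = {}
    l2: dict[int, int] = {}
    l3: dict[int, int] = {}

    def load(addr: int) -> int:
        for level in (l1, l2, l3):
            if addr in level:           # hit: stop at the fastest level that has it
                value = level[addr]
                break
        else:
            value = MAIN_MEMORY[addr]   # miss everywhere: fetch from main memory
        for level in (l1, l2, l3):
            level[addr] = value         # stage the value into every level
        return value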


Memory Cache Hierarchy
The whole idea is to keep staging more instructions and data in a memory that is closer to the speed of the processor. The caches are generally built into the CPU chip. See L2 cache.

Disk Caches


A disk cache is a dedicated block of memory (RAM) in the computer or in the drive controller that bridges storage and the CPU. When the disk or SSD is read, a larger block of data is copied into the cache than is immediately required. If subsequent reads find the data already in the cache, there is no need to retrieve it from storage, which is slower to access.
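A read-side sketch of that behavior, with an illustrative block size and reads kept within a single block for brevity:

    # A sketch of disk read caching: each miss copies a whole aligned
    # block into RAM, so nearby reads are served without touching storage.
    BLOCK = 4096                        # bytes copied per miss (illustrative)

    block_cache: dict[int, bytes] = {}  # block number -> cached bytes

    def read_at(path: str, offset: int, length: int) -> bytes:
        block_no = offset // BLOCK
        if block_no not in block_cache:         # miss: read the whole block from disk
            with open(path, "rb") as f:
                f.seek(block_no * BLOCK)
                block_cache[block_no] = f.read(BLOCK)
        start = offset % BLOCK
        return block_cache[block_no][start:start + length]   # hit: served from RAM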

If the cache is used for writing, data are queued up at high speed and then written to storage during idle machine cycles by the caching program or the drive controller. See cache coherency, write-back cache, write-through cache, pipeline burst cache, lookaside cache, inline cache, backside cache and NV cache.
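A write-side sketch, where an explicit flush call stands in for the idle-cycle write that the caching program or drive controller would perform:

    # A sketch of write-behind caching: writes are queued in RAM at full
    # speed and written to storage later, in one batch.
    pending_writes: list[tuple[int, bytes]] = []   # queued (offset, data) pairs

    def cached_write(offset: int, data: bytes) -> None:
        pending_writes.append((offset, data))      # fast: no storage I/O yet

    def flush(path: str) -> None:                  # run when the machine is idle
        with open(path, "r+b") as f:               # the file must already exist
            for offset, data in pending_writes:
                f.seek(offset)
                f.write(data)
        pending_writes.clear()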


Disk Cache
Disk caches are usually part of main memory and are made of ordinary dynamic RAM (DRAM) chips, whereas memory caches (CPU caches) use higher-speed static RAM (SRAM) chips.