cache miss



cache miss

(storage)
A request to read from memory that cannot be satisfied from the cache, so that main memory must be consulted.

Opposite: cache hit.

cache miss

A failure to find the required instruction or data in the cache. In such a case, the item is read from main memory. Contrast with cache hit. See cache.
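
As a rough illustration of the hit/miss distinction described above, the following minimal C sketch models a direct-mapped cache lookup. The geometry (64 lines of 16 bytes), the lookup() function, and the load_from_main_memory() stand-in are all hypothetical choices for the example, not any particular processor's design.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical geometry: 64 lines of 16 bytes (a four-word block of 4-byte words). */
    #define LINE_SIZE   16
    #define NUM_LINES   64

    struct cache_line {
        bool     valid;
        uint32_t tag;
    };

    static struct cache_line cache[NUM_LINES];
    static unsigned hits, misses;

    /* Stand-in for the slow path; in hardware this is the off-chip access. */
    static void load_from_main_memory(uint32_t addr) { (void)addr; }

    /* Returns true on a cache hit, false on a cache miss. */
    static bool lookup(uint32_t addr)
    {
        uint32_t block = addr / LINE_SIZE;
        uint32_t index = block % NUM_LINES;
        uint32_t tag   = block / NUM_LINES;

        if (cache[index].valid && cache[index].tag == tag) {
            hits++;
            return true;            /* cache hit: satisfied from the cache */
        }
        misses++;                   /* cache miss: main memory must be consulted */
        load_from_main_memory(addr);
        cache[index].valid = true;  /* fill the line so later accesses can hit */
        cache[index].tag   = tag;
        return false;
    }

    int main(void)
    {
        /* Sequentially read 256 4-byte words: one miss per 16-byte block. */
        for (uint32_t addr = 0; addr < 256 * 4; addr += 4)
            lookup(addr);
        printf("hits=%u misses=%u\n", hits, misses);  /* expect 192 hits, 64 misses */
        return 0;
    }
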
References in periodicals archive
Since there is no memory access in this attack, there is no cache miss, which makes detection by the previous technique almost impossible.
By reformulating the problem as the simultaneous processing of a data stream and a control stream, cache miss penalties could be significantly reduced.
On a cache miss, the UIR strategy broadcasts the requested data items only after the next IR, whereas our strategy also broadcasts the requested data items after every UIR (as part of RR).
To hide instruction cache miss latency more effectively in modern microprocessors, we propose and evaluate a new fully automatic instruction prefetching scheme whereby the compiler and the hardware cooperate to launch prefetches earlier (thereby hiding more latency), while maintaining high coverage and reducing the impact of useless prefetches relative to today's schemes.
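
The excerpt concerns instruction prefetching, which is driven by the compiler and hardware rather than by source code; as a loose, hedged analogy for the same idea (launching a prefetch early so the miss latency overlaps with useful work), GCC and Clang expose __builtin_prefetch for data accesses. The prefetch distance of 16 elements below is an arbitrary illustrative choice, not a value from the cited work.

    #include <stddef.h>

    /* Sketch only: prefetch a[i + DIST] while working on a[i], so the miss
     * latency of the later block overlaps with the current iterations. */
    #define DIST 16

    double sum_with_prefetch(const double *a, size_t n)
    {
        double temp = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + DIST < n)
                __builtin_prefetch(&a[i + DIST], 0 /* read */, 3 /* high locality */);
            temp += a[i];
        }
        return temp;
    }
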
If we assume a four-word cache block, this code segment will cause a cache miss every fourth iteration.
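
A hedged C rendering of that situation: with 4-byte elements and a hypothetical four-word (16-byte) cache block, a unit-stride scan touches a new block once every four elements, so roughly one iteration in four takes a miss.

    #include <stdint.h>

    /* Assuming 4-byte ints and 16-byte (four-word) cache blocks:
     * a[0..3] share one block, a[4..7] the next, and so on, so the
     * read of a[i] misses only when i is a multiple of 4. */
    int32_t sum(const int32_t *a, int n)
    {
        int32_t temp = 0;
        for (int i = 0; i < n; i++)
            temp += a[i];   /* miss on i = 0, 4, 8, ...; hits otherwise */
        return temp;
    }
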
shows that our approach is better at reducing the L1 cache miss rate than Hashemi et al.'s approach simply because our approach uses more detailed profile information.
If the loop is written as

    do i = 1, n, 4
       temp = temp + a(i)
    enddo

then each read of a(i) causes a cache miss.
On a cache miss, the client sends a page request message to the server.
In this work, the PLR is expressed as the root mean square error P_e,rms of the L1 cache miss, L2 cache miss, and deadline miss errors of the system.
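
The excerpt does not spell out how the individual error terms are combined; as a hedged sketch only, with hypothetical error terms e_L1, e_L2, and e_DL for the L1 miss, L2 miss, and deadline miss errors, a root-mean-square combination would read:

    P_{e,\mathrm{rms}} = \sqrt{\frac{e_{L1}^{2} + e_{L2}^{2} + e_{DL}^{2}}{3}}
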
The researchers also found that Dense Footprint Cache led to a significant improvement in "last-level cache miss ratios." Last-level cache misses occur when the processor tries to retrieve data from the cache, but the data aren't there, forcing the processor to retrieve the data from off-chip main memory.
If a cache miss occurs, the original server services the request.
In the MLP-aware replacement policy, the algorithm computes an MLP-based cost for each cache miss and uses the MLP cost together with recency to determine which cache block to replace.
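
A minimal sketch of that selection step, assuming per-block recency ranks and precomputed MLP-based miss costs are already available; the field names, the combined score, and the weight of 4 on the MLP cost are hypothetical illustrative choices, not the tuning used in the cited policy.

    #include <stddef.h>

    /* Hypothetical per-block metadata kept by the replacement policy. */
    struct block_meta {
        int recency;   /* 0 = most recently used; larger means older           */
        int mlp_cost;  /* estimated cost of re-fetching the block on a miss;
                          lower when that miss would overlap with other misses */
    };

    /* Choose a victim within one set: prefer blocks that are both old and
     * cheap to re-fetch, i.e. minimize a combined score of MLP cost and
     * recency. */
    size_t choose_victim(const struct block_meta *set, size_t ways)
    {
        size_t victim = 0;
        int best = set[0].mlp_cost * 4 - set[0].recency;
        for (size_t w = 1; w < ways; w++) {
            int score = set[w].mlp_cost * 4 - set[w].recency;
            if (score < best) {
                best = score;
                victim = w;
            }
        }
        return victim;
    }
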