direct mapped cache

(architecture)
A cache where the cache location for a given address is determined from the middle address bits. If the cache line size is 2^n then the bottom n address bits correspond to an offset within a cache entry. If the cache can hold 2^m entries then the next m address bits give the cache location. The remaining top address bits are stored as a "tag" along with the entry.

In this scheme there is no choice of which block to flush on a cache miss, since there is only one place for any block to go. The disadvantage of this simple scheme is that if the program alternately accesses different addresses that map to the same cache location, it will suffer a cache miss on every access to those locations. This kind of cache conflict is quite likely on a multi-processor. See also fully associative cache, set associative cache.
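A second hedged sketch, continuing the same assumed parameters, shows why there is no replacement choice: each address maps to exactly one entry, so a tag mismatch simply overwrites that entry, and two addresses sharing an index evict each other on alternating accesses.

    #include <stdbool.h>
    #include <stdint.h>

    #define OFFSET_BITS 6
    #define INDEX_BITS  10
    #define NUM_ENTRIES (1u << INDEX_BITS)

    /* One entry per index: a valid bit and the stored tag (data omitted). */
    struct cache_entry {
        bool     valid;
        uint32_t tag;
    };

    static struct cache_entry cache[NUM_ENTRIES];

    /* Returns true on a hit; on a miss the single candidate entry is
       overwritten, because there is nowhere else for the block to go. */
    static bool cache_access(uint32_t addr) {
        uint32_t index = (addr >> OFFSET_BITS) & (NUM_ENTRIES - 1);
        uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
        struct cache_entry *e = &cache[index];

        if (e->valid && e->tag == tag)
            return true;            /* hit */

        e->valid = true;            /* miss: replace the only candidate */
        e->tag   = tag;
        return false;
    }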
References in periodicals archive
Also, we admit that we use a direct mapped cache, and we adopt write-through as the write policy.
A direct mapped cache specifies that if the information is in cache, there is one location where you can find that information.
The cache model simulated a direct mapped cache similar to the L1 cache of our Sun machine.