direct mapped cache
(architecture) A cache where the cache location for a given address is determined from the middle address bits. If the cache line size is 2^n then the bottom n address bits correspond to an offset within a cache entry. If the cache can hold 2^m entries then the next m address bits give the cache location. The remaining top address bits are stored as a "tag" along with the entry.
In this scheme, there is no choice of which block to flush on a cache miss since there is only one place for any block to go. This simple scheme has the disadvantage that if the program alternately accesses different addresses which map to the same cache location then it will suffer a cache miss on every access to these locations. This kind of cache conflict is quite likely on a multi-processor. See also fully associative cache, set associative cache.
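
The conflict described above can be sketched with a toy model of the tag store, using the same assumed geometry as the example above (n = 5, m = 10). The two addresses chosen differ only in their tag bits, so each access evicts the other and every access misses.

/* Sketch: alternating accesses to two addresses with the same index but
 * different tags miss every time in a direct-mapped cache (assumed geometry). */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 5
#define INDEX_BITS  10

static uint32_t tags[1u << INDEX_BITS];     /* one tag per cache entry      */
static int      valid[1u << INDEX_BITS];    /* does the entry hold data?    */

static int access_cache(uint32_t addr)      /* returns 1 on hit, 0 on miss  */
{
    uint32_t index = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
    if (valid[index] && tags[index] == tag)
        return 1;
    valid[index] = 1;                       /* miss: the old block is evicted */
    tags[index]  = tag;
    return 0;
}

int main(void)
{
    /* 0x00001040 and 0x00011040 share index bits 5..14 but differ in the tag. */
    uint32_t a = 0x00001040, b = 0x00011040;
    for (int i = 0; i < 4; i++)
        printf("%s %s\n", access_cache(a) ? "hit " : "miss",
                          access_cache(b) ? "hit " : "miss");
    return 0;
}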
This article is provided by FOLDOC - Free Online Dictionary of Computing (foldoc.org)