computer memory [kəm′pyüd·ər ′mem·rē]
(or storage), the devices and processes that provide for the recording, storage, and retrieval of data in an electronic computer. The memory is a basic part of any computing system or individual computer. Its realization involves both hardware, a set of interconnected memory units, and software. The capacity, or the maximum amount of information a computer memory can store, is determined by the total capacity of all the memory units. The speed of the memory depends on the speed of the individual memory units, on the principles by which the units are organized into a single memory system, and on the procedures for information exchange within this system. As computer memory capacity increases, the speed is ordinarily reduced, because both the time required to search for necessary information in large arrays of data and the time required for pulses to traverse the electrical circuits increase.
The memory of a modern computer is constructed as a multilevel hierarchical system, which provides an economically sound resolution of the conflicting requirements of large capacity and high speed. The usual components of the hierarchy are as follows:
(1) High-capacity external storage, which can contain hundreds of millions of words; data arrays are stored on magnetic tape.
(2) A level of external storage with smaller capacity and greater speed, where data are stored on magnetic drums and magnetic disks.
(3) Internal, or working, storage (in third-generation computers more often called the main memory), with capacities of up to hundreds of thousands or millions of words and a memory cycle time ranging from tenths of a microsecond to several microseconds. The speed of the working memory must be commensurate with the operating speed of the processor, because the performance of an arithmetic or logical operation depends heavily on the working memory: operands are fetched from it, and the results of the operation are written back to it.
(4) Fast-access storage, which consists of the most frequently used cells of the working storage; it has a capacity of several tens or hundreds of words and a memory cycle time ranging from hundredths to tenths of a microsecond.
(5) Registers: memory units with capacities of one word, located in the various units of the processor.
(6) Permanent, or read-only, storage, for storing tabular data, coefficients, subroutines, and microprograms.
(7) Buffer storage, an intermediate link in the exchange of information between memory units on different storage levels.
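The tradeoff the hierarchy resolves can be illustrated with a minimal sketch in which a read searches each level in turn, from the fastest and smallest to the slowest and largest, accumulating access time as it goes. All class names, capacities, and cycle times below are hypothetical figures chosen only to echo the orders of magnitude mentioned above, not a model of any particular machine.

```python
# Sketch of a multilevel memory hierarchy: each level trades capacity
# for access time. All names and figures are illustrative assumptions.

class MemoryLevel:
    def __init__(self, name, capacity_words, cycle_time_us):
        self.name = name
        self.capacity_words = capacity_words
        self.cycle_time_us = cycle_time_us
        self.data = {}  # address -> word

class Hierarchy:
    def __init__(self, levels):
        self.levels = levels  # ordered fastest/smallest first

    def read(self, address):
        """Search each level in turn; return (word, total access time in µs)."""
        elapsed = 0.0
        for level in self.levels:
            elapsed += level.cycle_time_us
            if address in level.data:
                return level.data[address], elapsed
        raise KeyError(address)

registers = MemoryLevel("registers", 16, 0.01)
fast = MemoryLevel("fast-access", 128, 0.05)
main = MemoryLevel("main", 1_000_000, 1.0)
tape = MemoryLevel("external (tape)", 100_000_000, 10_000.0)

mem = Hierarchy([registers, fast, main, tape])
main.data[0x2A] = "word"
word, t = mem.read(0x2A)  # found at the third level: 0.01 + 0.05 + 1.0 µs
```

The sketch shows why a miss at the fast levels is cheap relative to the level that finally satisfies the read: the cycle times grow by orders of magnitude from level to level.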
The computing process can be considerably accelerated by reducing the number of references to the main memory through the use of magazine (modular stack) memory, which is a set of individual one-word registers whose identically named positions are interconnected by a shift circuit. The use of magazine memory also reduces the space assigned in the main memory for the storage of programs and obviates the need to store the contents of the registers in the main memory upon transfers to subroutines or when the program is interrupted by outside signals.
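The shift-circuit behavior of magazine memory can be sketched as a fixed set of registers in which a push passes every word down one register and a pop passes them back up, so return addresses survive subroutine calls and interrupts without any traffic to the main memory. The depth and the stored values below are illustrative assumptions.

```python
# Minimal model of "magazine" (stack) memory: a fixed set of registers
# linked by a shift circuit. Depth and contents are illustrative.

class MagazineMemory:
    def __init__(self, depth):
        self.registers = [None] * depth

    def push(self, word):
        # shift circuit: every register passes its word to the one below;
        # the word in the deepest register is lost if the magazine is full
        self.registers = [word] + self.registers[:-1]

    def pop(self):
        # reverse shift: every register passes its word to the one above
        word = self.registers[0]
        self.registers = self.registers[1:] + [None]
        return word

m = MagazineMemory(depth=4)
m.push("return-address-1")  # saved on entry to a subroutine
m.push("return-address-2")  # nested call, or an interrupt
m.pop()                     # innermost return address comes back first
```

Because saving and restoring happen entirely inside the register set, no main-memory cycles are spent on them, which is the acceleration the text describes.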
Since high-capacity modern computers work in a multiprogramming mode, where they execute several programs simultaneously, the question of the organization of information exchange between the external and working memories is of exceptional importance. In systems with simple exchange, just one program or part thereof is in the working memory at each given moment, whereas in systems with allocation of the working memory, several target programs or parts of target programs can be in the working memory at the same time. In this case, it is not necessary to carry out an exchange each time the processing of a target program is completed, because the other target programs or parts of target programs are already in the computer memory and are ready for processing.
Storage allocation is the process of placing information (blocks of data or instructions) in memory units of different levels so as to achieve maximally efficient utilization of the entire capacity of the computer memory, rational organization of the computing process, and reduction of problem-solving time. Static storage allocation is done by the programmer in analyzing the problem and drawing up the program, that is, before solution of the problem begins. This kind of allocation, however, substantially complicates the programmer's work: in the process of programming, the programmer must constantly keep track of where the necessary information is located at a given stage, which memory cells and memory fields are occupied or free, and so on. In the case of multiprogramming, static memory allocation is impracticable, because the programmer cannot foresee all the situations that can arise when several problems are solved simultaneously. Storage allocation must therefore be done automatically by the computer during program execution; this method is called dynamic storage allocation. To prevent the program of one problem from accidentally intruding into areas of storage occupied by information related to another problem, provision is made for memory protection, which automatically interrupts a program that attempts to use prohibited areas of storage. In dynamic storage allocation, the internal exchange of information between working and external storage can be organized in such a way that the user, or programmer, appears to have at his disposal a single working memory of very large capacity, limited only by the length of the address field in the instruction. Such storage is said to be virtual, since at any given moment only a small part of the information contained in virtual storage physically resides in the working memory unit.
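Both mechanisms described above, virtual storage and memory protection, can be sketched together as a single address-translation step: a page table maps each program's virtual pages to physical frames, a missing entry means the page is still in external storage and must be exchanged in, and a mismatched owner triggers the protective interruption. The page size, table format, and owner tags are illustrative assumptions, not a description of any particular machine.

```python
# Sketch of virtual-address translation with memory protection.
# Page size, page-table layout, and owner tags are assumptions.

PAGE_SIZE = 1024  # words per page (illustrative)

class PageFault(Exception):
    """The page is only in external storage; an exchange is required."""

class ProtectionError(Exception):
    """A program attempted to use a prohibited area of storage."""

def translate(page_table, owner, virtual_address):
    """Map a virtual address to a physical one, enforcing protection."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None:
        raise PageFault(page)       # bring the page in from external storage
    frame, entry_owner = entry
    if entry_owner != owner:
        raise ProtectionError(page)  # interrupt the offending program
    return frame * PAGE_SIZE + offset

# program-A owns two resident pages mapped to frames 5 and 9
table = {0: (5, "program-A"), 1: (9, "program-A")}
physical = translate(table, "program-A", 1030)  # page 1, offset 6
```

The programmer addresses a single large virtual space; only the pages actually present in the table occupy working memory, matching the description above.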
Two ways of finding information in storage are used: addressed search, which is based on the memory-cell number, and associative search, which is based on the content of the information. The types of addressing include implicit address, where the command does not give the address of the operand since the address is implicit in the operation code of the command; immediate address, where the command contains the operand itself rather than the address of the operand; direct address, where the effective address is contained in the command; relative address, where the address is formed by summing the address part of the command and the content of an index register; and indirect address, where the command gives the address (number) of a memory cell that contains the address of the operand. Associative search methods are used in associative memory units. A further development of associative memory units is the multifunctional memory unit, in which not only comparisons, as in simple associative memory units, but also certain logical and arithmetic information-processing functions are performed.
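The addressing modes that take an address field can be summarized in a toy operand-fetch routine: the same address field yields different operands depending on the mode. The memory contents, the index-register value, and the mode names as strings are illustrative assumptions.

```python
# Toy operand fetch for the addressing modes described above.
# Memory layout and index-register value are illustrative.

memory = {10: 99, 20: 10, 30: 7}  # cell number -> word
index_register = 25

def fetch_operand(mode, field):
    if mode == "immediate":
        return field                            # the field IS the operand
    if mode == "direct":
        return memory[field]                    # field is the effective address
    if mode == "relative":
        return memory[field + index_register]   # field + index register
    if mode == "indirect":
        return memory[memory[field]]            # field addresses a cell that
                                                # holds the operand's address
    raise ValueError(mode)

a = fetch_operand("immediate", 42)  # operand is 42 itself
b = fetch_operand("direct", 10)     # operand is the word in cell 10
c = fetch_operand("relative", 5)    # effective address 5 + 25 = 30
d = fetch_operand("indirect", 20)   # cell 20 holds 10; operand is in cell 10
```

Implicit addressing is omitted from the sketch because, as the text notes, it carries no address field at all: the operand's location follows from the operation code.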
A. V. GUSEV and L. P. KRAIZMER