# digital computer


## digital computer

see **computer**, a device capable of performing a series of arithmetic or logical operations. A computer is distinguished from a calculating machine, such as an electronic calculator, by its ability to store a computer program (so that it can repeat its operations and make logical decisions).
The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.

## Digital Computer

a computer that transforms quantities expressed in the form of sets of digits—that is, in the form of numbers. The simplest transformations of numbers are the arithmetic operations of addition and subtraction, which have been known since antiquity. Arithmetic operations, however, are only a special case of the transformation of numbers given in digital form, and in a modern digital computer they constitute only a small part of the set of operations that the machine can perform on numbers.

The first device for simple computations was the abacus, which was used to carry out the arithmetic operations of addition and subtraction (see COMPUTER TECHNOLOGY). In ancient times paper (or its equivalent) and writing instruments were rare, and the abacus freed man from the need to remember the addition table and to write down intermediate results of computations.

The first calculating machines performed such elementary operations as addition, subtraction, carrying to the next higher place in addition, borrowing from the next higher place in subtraction, and shift of the carriage (by hand or, in electric machines, automatically). Multiplication and division were done by successive addition and subtraction, respectively. The functions of man and machine in the performance of computations were divided in the following manner: the machine did the arithmetic operations on numbers; the operator directed the course of the computational process, entered numbers in the machine, recorded intermediate and final results, and looked up in tables the values of various functions involved in the computations. With such a distribution of roles, increases in the speed at which the machine performed the arithmetic operations produced insignificant increases in the overall speed at which computations were carried out, since the procedures for which the operator was responsible constituted the larger part of the computational process. Therefore, although electric calculating machines could in principle perform up to 1,000 arithmetic operations per hour, in practice not more than 1,000 operations could be done in an eight-hour workday.

A step forward in the development of digital calculating machines was the invention of punch-card equipment. In punch-card calculating machines all “human” functions, except the finding of values of functions in tables, were essentially entrusted to the machine. An additional exception was the entry of data into the machine: the input data had to be recorded on punch cards. This operation was done by a person separately on a special device. The prepared punch cards were placed in the machine, which read the data contained on them and performed the necessary computational operations without human intervention. The intermediate results of computations were recorded in memory registers, and the final results were printed on paper or were entered on punch cards, from which a special device copied them on paper. Control of the computational process, that is, the programming of the machine, was handled through plugboard wiring. Thus, punch-card machines contained in embryonic form all the most important elements of an automatic digital computer, whose operation does not require human participation after the necessary preparation for the performance of the computations is completed.

Punch-card calculating machines had an arithmetic unit; a memory (in the form of decks of cards and registers for storing intermediate results); an input device, which read the data on the punch cards; and an output device. Their speed was limited, since they performed arithmetic operations in much the same way as adding machines—that is, through mechanical movements. The most serious bottleneck in punch-card machines, however, was the problem of control of the computational process. Since such control, that is, the specification of the sequence of elementary operations, was effected by using wires to connect different terminals on the plugboard in the appropriate way, only simple sequences of computational operations could be programmed. Because the operations could be repeated many times, punch-card machines were used particularly widely in cases where the solution of a problem reduced to the repetition of simple sets of operations, for example, in bookkeeping and in simple problems of statistical analysis (tabulating machines). The most complicated problems solved on punch-card machines were ordinary second-order linear differential equations.

By the 1970’s punch-card machines had in virtually every case been supplanted by the more efficient and versatile electronic digital computers. The historical importance of punch-card machines was that they provided experience in the processing of information by machine and made possible an understanding of the prerequisites for the construction of automatic digital computers.

Regardless of their physical structure, digital computers that operate automatically must be able to perform operations, including arithmetic operations, on quantities (“words”) given in digital form, and they must be able to store both the results of the computations and the input information, which consists of the input data and the program describing the computational algorithm to be used. Other functional capabilities required of such computers are control of the computational process—that is, automatic adjustment of the machine to carry out each successive operation mandated by the program—and “communication” with the operator—that is, acceptance from him of the input information and presentation to him of the computational results he needs. The functions of the computer are usually carried out by corresponding units (Figure 1), although a partial combination of functions in a single unit is sometimes encountered. In any case, an automatic digital computer must perform all functions listed.

Figure 1. Typical block diagram of a digital computer

Every digital computer has “digital elements” with a finite number of stable states; the number of such states must be equal to the base of the number system used in the given machine. For example, in mechanical desk calculators, such as adding machines, these elements are the number wheels, which may take ten definite positions, in accordance with the decimal number system. Since electronic digital elements are most easily constructed with two stable states, the binary number system, which has only the two digits 0 and 1, is preferred in electronic digital computers.

The adoption of the binary number system facilitated the representation of numbers and considerably simplified the performance of operations on them. For example, the digital element of an adder must in this case have the property of changing to the opposite state whenever it receives the signal 1, which corresponds to the addition of 1; if the digital element is already in the 1 state, the change in its state must be accompanied by the sending of a 1 signal to the adder’s digital element for the next higher place. The operation of multiplication reduces to the multiple addition of the multiplicand and to shifts, and division reduces to subtraction and shifts.
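The shift-and-add scheme described above can be sketched in Python. This is illustrative only: the function names are invented here, and real adders operate on fixed-width words in parallel hardware rather than in a software loop.

```python
def binary_add(a, b):
    """Add two non-negative integers using only bitwise operations,
    mimicking a chain of binary adder elements with carry propagation."""
    while b:
        carry = (a & b) << 1  # places where both bits are 1 generate a carry
        a = a ^ b             # sum without carry: each element flips on a 1 signal
        b = carry             # the carry is added in on the next pass
    return a

def binary_multiply(a, b):
    """Multiply by repeated addition of the multiplicand and shifts,
    as the article describes."""
    product = 0
    while b:
        if b & 1:                             # current multiplier bit is 1...
            product = binary_add(product, a)  # ...so add the shifted multiplicand
        a <<= 1                               # shift the multiplicand left
        b >>= 1                               # examine the next multiplier bit
    return product

print(binary_add(13, 9))     # -> 22
print(binary_multiply(7, 6)) # -> 42
```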

Logical operations are also considerably simplified in the binary system. Any complex computational algorithm usually contains a variety of instructions, involving, for example, branching of the computational process, repetition of computational procedures, and the placement of various conditions on the accuracy of the computations. The machine must “understand” these instructions and must “make decisions” regarding their timely execution. Such actions of the machine are not arithmetic; rather, they are used for logical analysis of situations. One of the most common machine procedures is to perform a certain step—that is, to execute a certain program instruction—if a certain situation exists and to transfer to the execution of some other instruction if the situation does not exist. Because electronic digital computers can perform both arithmetic and logical operations, their capabilities go far beyond their direct purpose of arithmetic calculations, and they can be regarded as general-purpose transformers of discrete data. Since continuous data can practically always be approximated by discrete data, modern electronic computers can be said to be general-purpose transformers of data of any type.

The first truly electronic digital computer was the Electronic Numerical Integrator and Computer (ENIAC), which was completed in 1945 and went into operation in 1946 in the USA. The construction of the first electronic digital computers did not require the invention of new elements, since such elements were already used in automatic control systems and, in particular, in radar installations. It was necessary merely to adapt the existing elements for use in a digital computer. The digital element of the first electronic machines was a flip-flop composed of two triodes. As a result of the use of such a digital element, the first electronic digital computer contained a large number of electron tubes and had low reliability. Nevertheless, the ENIAC marked the beginning of the history of electronic digital computers. The ENIAC showed that it was in principle possible to construct an automatic digital computer that operated according to a predetermined program; some technological improvements were all that was required. The appearance of the ENIAC gave rise to energetic work in many countries on the development of reliable electronic digital elements and the design of efficient structures for digital computers.

The period of basic research in the history of digital computers ended by the early 1950’s with the development of the typical first-generation digital computer. In such a computer the digital element of the main memory was a ring-shaped ferrite core, with a rectangular hysteresis loop, having two stable states of residual magnetization. The basic element of the control unit and the arithmetic unit was a flip-flop made from electron tubes. The reliability of first-generation digital computers was substantially higher than that of the first digital computers. This increase in reliability was due not only to the replacement of the flip-flops in the memory of the computer with ferrite cores but also to a number of technological advances.

Since for purely engineering reasons it was not possible to construct a large-capacity ferrite memory unit with a high access speed, digital computers have used, up to the present day, not only ferrite-core main memory units but also relatively slow-access peripheral, or auxiliary, memory units based on magnetic tapes, magnetic disks, and magnetic drums. The capacity of such units is limited, generally speaking, only by the amount of space they occupy.

The continuously growing complexity of the problems solved with digital computers required that the machines be made more complex and that the number of electronic elements be increased. These needs meant an increase in the size and power consumption of electron-tube machines. Despite engineering improvements, the electron tube remained the most unreliable element of the first-generation digital computer and slowed the further development of digital computer equipment.

In the mid-1950’s electron tubes were supplanted by semiconductor devices—that is, by diodes and transistors. Since the life of a semiconductor device is considerably longer than that of electron tubes, the adoption of the new basic elements substantially increased the reliability of digital computers and considerably reduced the size of the machines. The introduction of semiconductor digital elements marked the beginning of the second generation of electronic digital computers.

The evolution of computers has been accompanied by an increase in their speed. The speed of first-generation machines grew from several hundred operations per second to several tens of thousands of operations per second. The first transistorized machines had a speed of the order of 5,000 operations per second. Much higher speeds were subsequently obtained. For example, the CDC 7600 computer developed in the USA could perform 10–15 million operations per second.

Because of the way in which the computational process was organized in first-generation digital computers, however, further increases in the speed of such machines produced virtually no increase in productivity. A program for the solution of a given problem was read into the computer, and a new problem could not be introduced until the final solution was complete and the results of the computations had been output. Every computational process, however, involves not only such fast operations as arithmetic operations and certain logical operations but also slow operations performed by mechanical devices—for example, the introduction of the input information into the machine, the printing of the results of the computations, and the transfer of information from the auxiliary memory to the main memory. As the speed of computers grew, the slow operations occupied an increasingly large part of the overall operating time of the machine. While the slow operations were carried out, the fast units of the machine, for example, the arithmetic unit, were idle. Thus, improvements affecting only the electronic elements did not result in significant increases in the productivity of digital computers.

The 1960’s consequently saw a fundamental change in the structure of digital computers. Machines were constructed in which the various units were capable of working independently of one another according to different programs. This advance permitted the machine to handle several jobs simultaneously (see MULTIPROGRAMMING). For example, while a slow operation, which may last several seconds, is being performed, the arithmetic unit has time to solve not one but several problems. The most efficient modern digital computers can handle several dozen tasks at the same time. The work of the digital computer and the formation of the job stream are controlled by a special program known as the operating system. Multiprogramming operation does not increase the speed at which a specific problem is solved but substantially raises the overall productivity of the computer.

The next stage in the development of multiprogramming operation was the construction of shared-access computers (see COMPUTER NETWORK). The input of jobs to such a machine need not be from a single input device; there may be many such devices, and they need not be located in the same place as the computer. Each user of the machine has his own terminal, which includes an input device and an output device. The terminal may be at a considerable distance from the computer. Jobs are transmitted from the terminals along communication lines, usually telephone lines, to the computer, which determines their order, that is, the time at which they are to be executed. The results of the computer’s work are sent back along the communication lines to the user’s terminal, whose output device may be a printer or a display unit (see DATA DISPLAY).

The development of multiprogramming led to the creation of systems that joined together several shared-access digital computers having different levels of productivity. Such a system could serve tens or hundreds of users located in different cities and even in different countries. For digital computers to be used in this way, their functional capabilities had to be expanded, and, consequently, their structure had to be made more complex. Semiconductor elements no longer satisfied the requirements of digital computer development regarding size, energy consumption, cost, and reliability.

Second-generation digital computers were succeeded in the 1960’s by third-generation machines, which used integrated circuitry. The elementary building blocks of second-generation computers were constructed from individual components (such as diodes, transistors, capacitors, and resistors), which were connected through soldering. Although these building blocks were considerably smaller than the banks of electron tubes used in first-generation machines, their dimensions were substantial, reaching tens and even hundreds of cubic centimeters. Moreover, the soldering points were sources of frequent failure. The introduction of integrated circuitry permitted an increase in complexity of the building blocks of the digital computer without an increase in size. The first integrated circuits (ICs) each replaced a single block of a second-generation digital computer. The subsequent large-scale integrated circuits (LSI circuits) replaced several dozen integrated circuits. The packing density of microcircuitry has continued to grow.

Machines constructed with large-scale integrated circuits are sometimes classified as fourth-generation digital computers. Such a classification is not well-founded, since there is no clear boundary between small-scale and medium-scale integrated circuits, between medium-scale and large-scale integrated circuits, or between large-scale and ultralarge-scale integrated circuits. A considerably more important factor in the development of electronic digital computers has been the nature of the basic elements of the main memory. Instead of the ferrite core memory units of first-, second-, and third-generation computers, fourth-generation computers use as memory elements semiconductor devices fabricated by a technology similar to that for integrated circuits. Small-capacity models of such a memory were constructed and used in the early 1970’s as ultrahigh-access memory. The mid-1970’s saw a tendency to combine a semiconductor main memory with slower supplementary ferrite memory units.

The 1970’s were characterized by a polarization in digital computer technology. On the one hand, the use of shared-access computer systems led to the development of superpowerful machines with a speed of the order of several tens of millions of operations per second and with a large main memory. On the other hand, small machines known as minicomputers were developed for individual use, for the control of industrial processes, and for the processing of experimental data in research laboratories; these machines included table models and were of medium speed. Minicomputers can be connected through communication lines with powerful shared-access computer systems and thus can be used as terminals. The prefix “mini-” refers primarily to the size of the machine; the productivity of minicomputers is often greater than that of the most powerful first-generation machines. Since minicomputers can meet most of the needs of the individual user, and since complex jobs can be handled better by shared-access computer systems, the production of medium-capacity digital computers has dropped.

In the late 1960’s and early 1970’s superpowerful multiprocessors were developed. In such a machine several processors operate simultaneously, that is, in parallel. The advantage of multiprocessing systems for the simultaneous execution of multiple jobs is obvious. The presence in one computer system of several processors also permits, in principle, the execution of a single job to be broken up, since any real computational algorithm contains a number of branches that may be executed independently of one another. Multiprocessing thus considerably reduces the time required for the execution of a job. Multiprocessing systems using large-scale integrated circuitry should evidently be classified as fourth-generation machines.

Digital computers are finding increasing application in a variety of areas of human activity. In addition to the performance of complicated scientific and engineering calculations, their uses in the late 1970’s included the automated design and the automatic control of industrial processes, transportation systems, and complex experimental apparatus. Digital computers were used to perform economic computations in economic-statistical analysis, the compilation of demographic statistics, the construction of plans, operations research, bookkeeping, and inventory control. Other important uses included mathematical modeling in the “descriptive” sciences—for example, biology, medicine, geology, and sociology—and information storage for ready reference in science and technology, in traffic-control systems, and in libraries.

### REFERENCES

Kitov, A. I., and N. A. Krinitskii. Elektronnye tsifrovye mashiny i programmirovanie, 2nd ed. Moscow, 1961.
Mul’tiprotsessornye vychislitel’nye sistemy. Edited by Ia. A. Khetagurov. Moscow, 1971.
Kagan, B. M., and M. M. Kanevskii. Tsifrovye vychislitel’nye mashiny i sistemy, 2nd ed. Moscow, 1973.
Bardizh, V. V. Magnitnye elementy tsifrovykh vychislitel’nykh mashin, 2nd ed. Moscow, 1974.
Apokin, I. A., and L. E. Maistrov. Razvitie vychislitel’nykh mashin. Moscow, 1974.
Presnukhin, L. N., and P. V. Nesterov. Tsifrovye vychislitel’nye mashiny. Moscow, 1974.
Korolev, L. N. Struktury EVM i ikh matematicheskoe obespechenie. Moscow, 1974.

A. A. DORODNITSYN

## digital computer

[′dij·əd·əl kəm′pyüd·ər]
(computer science)
A computer operating on discrete data by performing arithmetic and logic processes on these data.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.

## Digital computer

A device that processes numerical information; more generally, any device that manipulates symbolic information according to specified computational procedures. The term digital computer—or simply, computer—embraces calculators, computer workstations, control computers (controllers) for applications such as domestic appliances and industrial processes, data-processing systems, microcomputers, microcontrollers, multiprocessors, parallel computers, personal computers, network servers, and supercomputers. See Digital control, Microcomputer, Programmable controllers, Supercomputer

A digital computer is an electronic computing machine that uses the binary digits (bits) 0 and 1 to represent all forms of information internally in digital form. Every computer has a set of instructions that define the basic functions it can perform. Sequences of these instructions constitute machine-language programs that can be stored in the computer and used to tailor it to an essentially unlimited number of specialized applications. Calculators are small computers specialized for mathematical computations. General-purpose computers range from pocket-sized personal digital assistants (notepad computers), to medium-sized desktop computers (personal computers and workstations), to large, powerful computers that are shared by many users via a computer network. The vast majority of digital computers now in use are inexpensive, special-purpose microcontrollers that are embedded, often invisibly, in such devices as toys, consumer electronic equipment, and automobiles. See Bit, Computer programming, Embedded systems

The main data-processing elements of a computer reside in a small number of electronic integrated circuits (ICs) that form a microprocessor or central processing unit (CPU). Electronic technology allows a basic instruction such as “add two numbers” to be executed many millions of times per second. Other electronic devices are used for program and data storage (memory circuits) and for communication with external devices and human users (input-output circuits). Nonelectronic (magnetic, optical, and mechanical) devices also appear in computers. They are used to construct input-output devices such as keyboards, monitors (video screens), secondary memories, printers, sensors, and mechanical actuators.

Information is stored and processed by computers in fixed-sized units called words. Common word sizes are 8, 16, 32, and 64 bits. A four-bit word can encode 16 distinct values, such as the integers 0 through 15. By increasing the word size, the number of different items that can be represented and their precision can be made as large as desired. A common word size in personal computers is 32 bits, which allows 2³² = 4,294,967,296 distinct numbers to be represented.
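The relationship between word size and the number of representable values can be checked directly; this is a minimal sketch, and `distinct_values` is a name chosen here for illustration.

```python
def distinct_values(word_size_bits):
    """An n-bit word can take 2**n distinct bit patterns."""
    return 2 ** word_size_bits

for n in (4, 8, 16, 32, 64):
    print(f"{n}-bit word: {distinct_values(n):,} distinct values")
# the 32-bit case reproduces the figure in the text:
print(distinct_values(32))  # -> 4294967296
```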

Computer words can represent many different forms of information, not just numbers. For example, 8-bit words called characters or bytes are used to encode text symbols (the 10 decimal digits, the 52 upper- and lowercase letters of the English alphabet, and punctuation marks). A widely used code of this type is ASCII (American Standard Code for Information Interchange). Visual information can be reduced to black and white dots (pixels) corresponding to 0's and 1's. Audio information can be digitized by mapping a small element of sound into a binary word; for example, a compact disk (CD) uses several million 16-bit words to store an audio recording. Logical quantities encountered in reasoning or decision making can be captured by associating 1 with true and 0 with false. Hence, most forms of information are readily reduced to a common, numberlike binary format suitable for processing by computer.
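The reduction of text and logical values to binary words can be demonstrated with Python's built-in `ord` and `format`, which expose the ASCII codes and their 8-bit binary forms.

```python
text = "Hi!"
# Under ASCII, each character maps to one 8-bit byte.
codes = [ord(c) for c in text]            # decimal code points
bits = [format(c, "08b") for c in codes]  # the same values as 8-bit words
print(codes)  # -> [72, 105, 33]
print(bits)   # -> ['01001000', '01101001', '00100001']
# Logical quantities reduce to binary the same way: true -> 1, false -> 0.
print(int(True), int(False))  # -> 1 0
```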

#### Logic components

The operation of a digital computer can be viewed at various levels of abstraction, which are characterized by components of different complexity. These levels range from the low, transistor level seen by an electronic circuit designer to the high, system level seen by a computer user. A useful intermediate level is the logic level, where the basic components process individual bits. Using basic components called gates, logic circuits can be constructed to perform many useful operations. See Logic circuits
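Gate-level construction can be illustrated by deriving the common gates from a single NAND primitive, a standard exercise; the function names here are illustrative, and each function models one logic gate acting on single bits.

```python
def nand(a, b):
    """The NAND gate: 0 only when both inputs are 1."""
    return 1 - (a & b)

# Every other gate can be built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """One-bit half adder built from the gates above:
    returns (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # -> (0, 1): 1 + 1 = binary 10
print(half_adder(1, 0))  # -> (1, 0)
```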

#### System organization

An accumulator is a digital system that constitutes a simple processor capable of executing a few instructions. By introducing more data-processing circuits and registers, as well as control circuits for a larger set of instructions, a practical, general-purpose processor can be constructed. Such a processor forms the “brain” of every computer, and is referred to as its central processing unit. A CPU implemented on a single integrated-circuit chip is called a microprocessor.

A typical computer program is too large to store in the CPU, so another component called the main memory is used to store a program's instructions and associated data while they are being executed (Fig. 1). Main memory consists of high-speed integrated circuits designed to allow storage and retrieval of information one word at a time. All words in main memory can be accessed with equal ease; hence this is also called a random-access memory (RAM).

A computer program is processed by loading it into main memory and then transferring its instructions and data one word (or a few words) at a time to the CPU for processing. Hence, there is a continual flow of instructions and data words between the CPU and its main memory. As millions of words must be transferred per second, a high-speed communication link is needed between the CPU and main memory. The system bus (Fig. 1) fills this role.

General organization of a computer

A computer has input-output (I/O) control circuits and buses to connect it to external input-output devices (also called peripherals). Typical input-output devices are a keyboard, which is an input device, and a printer, which is an output device. Because most computers need more storage space than main memory can supply, they also employ secondary memory units which form part of the computer's input-output subsystem. Common secondary memory devices are hard disk drives, flexible (floppy) disk drives, and magnetic tape units. Compared to main memory, secondary memories employ storage media (magnetic disks and tapes) that have higher capacity and lower cost. However, secondary memories are also significantly slower than main memory. See Computer peripheral devices, Computer storage technology

No explicit instructions are needed for input-output operations if input-output devices share with main memory the available memory addresses. This is known as memory-mapped input-output, and allows load and store instructions to be used to transfer data between the CPU and input-output devices. In general, a computer's instruction set should include a selection of instructions of the following three types: (1) Data-transfer instructions that move data unchanged between the CPU, main memory, and input-output devices. (2) Data-processing instructions that perform numerical operations such as add, subtract, multiply, and divide, as well as nonnumerical (logical) operations, such as *not*, *and*, *exclusive-or*, and *shift*. (3) Program-control instructions that can change the order in which instructions are executed, for example branch, branch-on-zero, call procedure, and return from procedure.
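Memory-mapped input-output can be sketched as follows. The address `0xFF00` and the `store` helper are hypothetical choices for illustration, not any real machine's layout: an ordinary store instruction either writes RAM or, at the reserved address, drives the output device.

```python
OUTPUT_ADDR = 0xFF00       # hypothetical address wired to an output device

memory = [0] * 0x10000     # 64K words of ordinary RAM
output_log = []            # stands in for the device's printed output

def store(addr, value):
    """Model of a 'store' data-transfer instruction: a write to
    OUTPUT_ADDR reaches the device instead of ordinary RAM."""
    if addr == OUTPUT_ADDR:
        output_log.append(chr(value))  # the device emits the byte
    else:
        memory[addr] = value

# The same instruction serves both purposes:
for ch in "ok":
    store(OUTPUT_ADDR, ord(ch))  # I/O, with no special I/O instruction
store(0x0010, 42)                # an ordinary memory write
print("".join(output_log))       # -> ok
print(memory[0x0010])            # -> 42
```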

The instruction unit (I unit) of a CPU (Fig. 2), also called the program control unit, is responsible for fetching instructions from main memory, using the program counter as the instruction address register. The opcode of a newly fetched instruction I is placed in the instruction register. The opcode is then decoded to determine the sequence of actions required to execute I. These may include the loading or storing of data assigned to main memory, in which case the I unit computes all needed addresses and issues all needed control signals to the CPU and the system bus. Data are processed in the CPU's execution unit (E unit), also called the datapath, which contains a set of registers used for temporary storage of data operands, and an arithmetic logic unit (ALU), which contains the main data-processing circuits.
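The fetch-decode-execute cycle can be modeled with a toy one-address machine. The opcodes and memory layout are invented for illustration (a real CPU does this in hardware, not software): the program counter addresses the fetch, the fetched pair plays the role of the instruction register, and a single accumulator stands in for the E unit's registers.

```python
# Opcodes for a hypothetical one-address machine.
LOAD, ADD, STORE, JUMP_IF_ZERO, HALT = range(5)

def run(memory):
    """Fetch-decode-execute loop: fetch via the program counter,
    decode the opcode, execute, repeat until HALT."""
    pc, acc = 0, 0
    while True:
        opcode, operand = memory[pc]  # fetch into the 'instruction register'
        pc += 1                       # point at the next instruction
        if opcode == LOAD:
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == JUMP_IF_ZERO and acc == 0:
            pc = operand              # a program-control instruction
        elif opcode == HALT:
            return memory

# Program: memory[7] = memory[5] + memory[6]; data follows the code.
program = [(LOAD, 5), (ADD, 6), (STORE, 7), (HALT, 0), (HALT, 0), 30, 12, 0]
print(run(program)[7])  # -> 42
```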

Internal organization of a CPU

#### Performance measures

A simple indicator of a CPU's performance is the frequency f of its central timing signal (clock), measured in millions of clock signals issued per second or megahertz (MHz). The clock frequency depends on the integrated-circuit technology used; frequencies of several hundred megahertz are achievable with current technology. Each clock signal triggers execution of a basic instruction such as a fixed-point addition; hence, the time required to execute such an instruction (the clock cycle time) is 1/f microseconds. Complex instructions like multiplication or operations on floating-point numbers require several clock cycles to complete their execution. Another measure of CPU performance is the (average) instruction execution rate, measured in millions of instructions per second (MIPS).
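The arithmetic relating clock frequency, cycle time, and instruction rate is simple enough to verify. The 200-MHz figure below is an arbitrary example, not a claim about any particular CPU.

```python
def clock_cycle_time_us(f_mhz):
    """Cycle time in microseconds for a clock of f megahertz."""
    return 1.0 / f_mhz

def mips(f_mhz, cycles_per_instruction):
    """Average instruction rate in millions of instructions per second."""
    return f_mhz / cycles_per_instruction

# A hypothetical 200-MHz CPU averaging 2 clock cycles per instruction:
print(clock_cycle_time_us(200))  # -> 0.005 (microseconds, i.e. 5 ns)
print(mips(200, 2))              # -> 100.0 MIPS
```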

Instruction execution time is strongly affected by the time to move instructions or data between the CPU and main memory. The time required by the CPU to access a word in main memory is typically about five times longer than the CPU's clock cycle time. This disparity in speed has existed since the earliest computers despite efforts to develop memory circuits that would be fast enough to keep up with the fastest CPUs. Maximum performance requires the CPU to be supplied with a steady flow of instructions that need to be executed. This flow is disrupted by branch instructions, which account for 20% or more of the instructions in a typical program.

To deal with the foregoing issues, various performance-enhancing features have been incorporated into the design of computers. The communication bottleneck between the CPU and main memory is reduced by means of a cache, which is a special memory unit inserted between the two units. The cache is smaller than main memory but can be accessed more rapidly, and is often placed on the same integrated-circuit chip as the CPU. Its effect is to reduce the average time required by the CPU to send information to or receive information from the memory subsystem. Special logic circuits support the complex flow of information among main memory, the cache, and the registers of the CPU. However, the cache is largely invisible to the programs being executed.
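The effect of a cache can be illustrated with a toy direct-mapped model that counts hits and misses. Real caches transfer multi-word lines and store tag fields; this sketch maps one whole address per line for simplicity.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each address maps to exactly one of
    n_lines slots; an access hits only if the slot holds that address."""
    def __init__(self, n_lines):
        self.n_lines = n_lines
        self.tags = [None] * n_lines  # which address each line holds
        self.hits = self.misses = 0

    def access(self, addr):
        line = addr % self.n_lines    # the one slot this address may occupy
        if self.tags[line] == addr:
            self.hits += 1            # served from the cache, no memory trip
        else:
            self.misses += 1
            self.tags[line] = addr    # fetch from main memory, fill the line

cache = DirectMappedCache(8)
# Repeated accesses hit; address 16 maps to line 0 and evicts address 0.
for addr in [0, 1, 2, 0, 1, 2, 16, 0]:
    cache.access(addr)
print(cache.hits, cache.misses)  # -> 3 5
```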

The instruction execution rate can be increased by executing several instructions concurrently. One approach is to employ several E units that are tailored to different instruction types. Examples are an integer unit designed to execute fixed-point instructions and a floating-point unit designed for floating-point instructions. The CPU can then execute a fixed-point instruction and a floating-point instruction at the same time. Processors that execute several instructions in parallel in this way are called superscalar. See Concurrent processing.
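The benefit of issuing to separate E units can be sketched with a highly simplified model; it assumes, purely for illustration, that each unit retires one instruction per cycle and that instructions can be freely paired.

```python
# Idealized sketch of superscalar issue to two E units: an integer unit
# and a floating-point unit, each retiring one instruction per cycle.
# This ignores dependencies and issue restrictions of real CPUs.

def cycles_superscalar(stream):
    """Cycles to retire a stream of 'int'/'fp' instruction tags when the
    two units work in parallel: the busier unit sets the pace."""
    return max(stream.count("int"), stream.count("fp"))

stream = ["int", "fp", "int", "fp", "int", "fp"]
print(cycles_superscalar(stream))  # 3 cycles instead of 6 on a scalar CPU
```

A stream of all-integer instructions would gain nothing in this model, which reflects the point made below: the speedup depends on the right mix of instruction types being available.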

Another speedup technique called pipelining allows several instructions to be processed simultaneously in special circuits called pipelines. Execution of an instruction is broken into several consecutive steps, each of which can be assigned to a separate stage of the pipeline. This makes it possible for an n-stage E unit to overlap the execution of up to n different instructions. A pipeline processing circuit resembles an assembly line on which many products are in various stages of manufacture at the same time. The ability of a CPU to execute several instructions at the same time by using multiple or pipelined E units is highly dependent on the availability of instructions of the right type at the right time in the program being executed. A useful measure of the performance of a CPU that employs internal parallelism is the average number of clock cycles per instruction (CPI) needed to execute a representative set of programs.
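The cycle counts for an idealized pipeline follow directly from the assembly-line picture above: once the first instruction has filled the n stages, one instruction completes per cycle. This is a sketch of the ideal case, with no stalls or branches.

```python
# Cycle counts for an ideal n-stage pipeline with no stalls.
# In the unpipelined case each instruction occupies the E unit for
# all n stages before the next can begin.

def unpipelined_cycles(n_stages, n_instr):
    return n_stages * n_instr

def pipelined_cycles(n_stages, n_instr):
    # The first instruction fills the pipeline (n_stages cycles);
    # each subsequent instruction completes one cycle later.
    return n_stages + n_instr - 1

n, k = 5, 100
print(unpipelined_cycles(n, k))  # 500 cycles
print(pipelined_cycles(n, k))    # 104 cycles
print(pipelined_cycles(n, k) / k)  # CPI approaches 1 for long streams
```

For long instruction streams the pipelined CPI approaches 1, which is why branch instructions, which drain and refill the pipeline, are so costly to this scheme.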

#### CISCs and RISCs

A software implementation of a complex operation like multiply is slower than the corresponding hardware implementation. Consequently, as advances in IC technology lowered the cost of hardware circuits, instruction sets tended to increase in size and complexity. By the mid-1980s, many microprocessors had instructions of several hundred different types, characterized by diverse formats, memory addressing modes, and execution times. The heterogeneous instruction sets of these complex instruction set computers (CISCs) have some disadvantages. Complex instructions require more processing circuits, which tend to make CISCs large and expensive. Moreover, the decoding and execution of complex instructions can slow down the processing of simple instructions.

To address the defects of CISCs, a new class of fast computers referred to as reduced instruction set computers (RISCs) was introduced. RISCs are characterized by fast, efficient—but not necessarily small—instruction sets. The following features are common to most RISCs: (1) All instructions are of fixed length and have just a few opcode formats and addressing modes. (2) The only instructions that address memory are load and store instructions; all other instructions require their operands to be placed in CPU registers. (3) The fetching and processing of most instructions are overlapped in pipelined fashion.
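The load/store discipline in feature (2) can be illustrated with a toy register machine; the instruction names and register names here are hypothetical, chosen only to show the pattern, and the interpreter is a sketch rather than a model of any real RISC.

```python
# Toy register machine illustrating the load/store discipline:
# only LOAD and STORE touch memory; ADD works purely on registers.
# Instruction and register names are hypothetical.

def run(program, memory):
    regs = {}
    for op, *args in program:
        if op == "LOAD":        # reg <- memory[addr]
            r, addr = args
            regs[r] = memory[addr]
        elif op == "STORE":     # memory[addr] <- reg
            r, addr = args
            memory[addr] = regs[r]
        elif op == "ADD":       # register-to-register only
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
    return memory

# memory[2] = memory[0] + memory[1] takes four RISC-style instructions:
mem = run([("LOAD", "r1", 0), ("LOAD", "r2", 1),
           ("ADD", "r3", "r1", "r2"), ("STORE", "r3", 2)],
          {0: 7, 1: 5, 2: 0})
print(mem[2])  # 12
```

A CISC might express the same computation as a single memory-to-memory add; the RISC approach trades instruction count for instructions that are uniform, simple to decode, and easy to pipeline.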

McGraw-Hill Concise Encyclopedia of Engineering. © 2002 by The McGraw-Hill Companies, Inc.

## digital computer

an electronic computer in which the input is discrete rather than continuous, consisting of combinations of numbers, letters, and other characters written in an appropriate programming language and represented internally in binary notation
Collins Discovery Encyclopedia, 1st edition © HarperCollins Publishers 2005

## digital computer

(computer)
A computer that represents numbers and other data using discrete internal states, in contrast to the continuously varying quantities used in an analog computer.

Some of the fundamental ideas behind the digital computer were proposed by Alan Turing between 1936 and 1938. The design of the Atanasoff-Berry Computer (1937-1942) included some of the important implementation details but the first digital computer to successfully run real programs was the Z3 (1941). ENIAC (1943-1946) was the first electronic digital computer but was only programmable by manual rewiring or switches.

## digital computer

A computer that accepts and processes data that have been converted into binary numbers. Most computers are digital. Contrast with analog computer. See binary.
Copyright © 1981-2019 by The Computer Language Company Inc. All rights reserved.