computer, device capable of performing a series of arithmetic or logical operations. A computer is distinguished from a calculating machine, such as an electronic calculator, by being able to store a computer program (so that it can repeat its operations and make logical decisions), by the number and complexity of the operations it can perform, and by its ability to process, store, and retrieve data without human intervention. Computers developed along two separate engineering paths, producing two distinct types of computer—analog and digital. An analog computer operates on continuously varying data; a digital computer performs operations on discrete data.
Computers are categorized by both size and the number of people who can use them concurrently. Supercomputers are sophisticated machines designed to perform complex calculations at maximum speed; they are used to model very large dynamic systems, such as weather patterns. Mainframes, the largest and most powerful general-purpose systems, are designed to meet the computing needs of a large organization by serving hundreds of computer terminals at the same time. Minicomputers, though somewhat smaller, also are multiuser computers, intended to meet the needs of a small company by serving up to a hundred terminals. Microcomputers, computers powered by a microprocessor, are subdivided into personal computers and workstations, the latter typically incorporating RISC processors. Although microcomputers were originally single-user computers, the distinction between them and minicomputers has blurred as microprocessors have become more powerful. Linking multiple microcomputers together through a local area network or joining multiple microprocessors together in a parallel-processing system has enabled smaller systems to perform tasks once reserved for mainframes, and the techniques of grid computing have enabled computer scientists to utilize the unused processing power of computers connected over a network or the Internet.
Advances in the technology of integrated circuits have spurred the development of smaller and more powerful general-purpose digital computers. Not only has this reduced the size of the large, multi-user mainframe computers—which in their early years were large enough to walk through—to that of pieces of furniture, but it has also made possible powerful, single-user personal computers and workstations that can sit on a desktop or be easily carried. These, because of their relatively low cost and versatility, have replaced typewriters in the workplace and rendered the analog computer obsolete. The reduced size of computer components has also led to the development of thin, lightweight notebook computers and even smaller computer tablets and smartphones that have much more computing and storage capacity than that of the desktop computers that were available in the early 1990s.
Processing of Data
The operations of a digital computer are carried out by its central processing unit (CPU), which fetches instructions and data from memory, decodes and executes the instructions, and holds intermediate results in small, very fast storage locations called registers.
Storage and Retrieval of Data
Associated with the CPU is the main storage, or memory, where results or other data are stored for periods of time ranging from a small fraction of a second to days or weeks before being retrieved for further processing. Once made up of vacuum tubes and later of small doughnut-shaped ferromagnetic cores strung on a wire matrix, main storage now consists of integrated circuits, each of which may contain billions of semiconductor devices. Where each vacuum tube or core represented one bit and the total memory of the computer was measured in thousands of bytes (or kilobytes, KB), modern computer memory chips represent hundreds of millions of bytes (or megabytes, MB) and the total memory of both personal and mainframe computers is measured in billions of bytes (gigabytes, GB) or more. Read-only memory (ROM), which cannot be written to, maintains its content at all times and is used to store the computer's control information. Random-access memory (RAM), which can be both read from and written to, loses its contents each time the computer is turned off. Modern computers also include cache memory, which the CPU can access faster than RAM but slower than its registers; data in cache memory likewise is lost when the computer is turned off.
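As a rough sketch of the hierarchy just described, the following Python fragment models a tiny direct-mapped cache sitting between a processor and a slower main memory; the sizes, contents, and function names are invented for illustration and do not describe any particular machine.

```python
# A minimal sketch of the memory hierarchy described above: a tiny
# direct-mapped cache in front of a slower main memory. Sizes and the
# miss-handling policy are illustrative assumptions.

CACHE_LINES = 8                      # hypothetical, very small cache

main_memory = {addr: addr * 2 for addr in range(64)}   # stand-in for RAM
cache = {}                           # line index -> (address, value)

def read(addr):
    """Return the value at addr, filling the cache on a miss."""
    line = addr % CACHE_LINES        # direct mapping: address -> line
    entry = cache.get(line)
    if entry is not None and entry[0] == addr:
        return entry[1]              # cache hit: fast path
    value = main_memory[addr]        # cache miss: fetch from slower RAM
    cache[line] = (addr, value)      # keep a copy for future accesses
    return value

print(read(5))   # miss: fetched from main memory
print(read(5))   # hit: served from the cache
```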
Programs and data that are not currently being used in main storage can be saved on auxiliary or secondary storage. Although punched paper tape and punched cards once served this purpose, the principal media used today are magnetic tape, magnetic disks, and flash memory devices, all of which can be read from and written to, and two types of optical disks, the compact disc (CD) and its successor the digital versatile disc (DVD). When compared to RAM, these are less expensive (though flash memory is more expensive than the other two), are not volatile (i.e., data is not lost when the power to the computer is shut off), and can provide a convenient way to transfer data from one computer to another. Thus operating instructions or data output from one computer can be stored and later used either by the same computer or by another.
In a system using magnetic tape the information is stored by a specially designed tape recorder somewhat similar to one used for recording sound. Magnetic tape is now largely used for offsite storage of large volumes of data or major systems backups. In magnetic and optical disk systems the principle is the same; the magnetic or optical medium lies in a path, or track, on the surface of a disk. The disk drive also contains a motor to spin the disk and a magnetic or optical head or heads to read and write the data to the disk. Drives take several forms, the most significant difference being whether the disk can be removed from the drive assembly. Flash memory devices, such as USB flash drives, flash memory cards, and solid-state drives, use nonvolatile memory that can be erased and reprogrammed in blocks.
Removable magnetic disks made of Mylar enclosed in a plastic holder (older versions had paper holders) are now largely outdated. These floppy disks have varying capacities, with very high density disks holding 250 MB—more than enough to contain a dozen books the size of Tolstoy's Anna Karenina. Internal and external magnetic hard disks, or hard drives, are made of metal and arranged in spaced layers. They can hold vastly more data than floppies or optical disks, and can read and write data much faster than floppies. As hard disks dropped in price, they became increasingly included as a component of personal computers and replaced floppy disks as the standard media for the storage of operating systems, programs, and data.
Compact discs can hold hundreds of megabytes, and have been used, for example, to store the information contained in an entire multivolume encyclopedia or set of reference works. DVD is an improved optical storage technology capable of storing as much as ten times the data that CD technology can store. CD–Read-Only Memory (CD-ROM) and DVD–Read-Only Memory (DVD-ROM) disks can only be read—the disks are impressed with data at the factory but once written cannot be erased and rewritten with new data. The latter part of the 1990s saw the introduction of new optical storage technologies: CD-Recordable (CD-R) and DVD-Recordable (DVD-R, DVD+R), optical disks that can be written to by the computer to create a CD-ROM or DVD-ROM, but can be written to only once; and CD-ReWritable (CD-RW), DVD-ReWritable (DVD-RW and DVD+RW), and DVD–Random Access Memory (DVD-RAM), disks that can be written to multiple times.
Flash memory devices, a still more recent development, are an outgrowth of electrically erasable programmable read-only memory. Although more expensive than magnetic and optical storage technologies, flash memory can be read and written to much faster, permitting shorter boot times and quicker data access and storage. Because flash memory also is resistant to mechanical shock and has become increasingly compact, a USB flash drive allows for the easy, portable external storage of large quantities of data. Solid-state drives are more easily accessed and written to than magnetic hard drives and use less power, and have become common in high-end, lightweight notebook computers and in high-performance computers. Flash memory is also used in computer tablets and smartphones. Hybrid drives, which combine a smaller amount of flash memory with a large magnetic hard drive, permit the economical storage of large amounts of data while benefiting from more responsive access to frequently used but only occasionally changed operating system and program files.
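The hybrid-drive arrangement just described is essentially a caching scheme. The Python sketch below imitates it with a small "flash" tier in front of a large "magnetic" store, evicting the least recently used block when the fast tier fills; the block counts and names are illustrative assumptions.

```python
# A rough model of a hybrid drive: a few fast flash blocks in front of a
# large, slower magnetic store, managed with least-recently-used eviction.
from collections import OrderedDict

FLASH_BLOCKS = 4                     # small, fast, expensive tier

magnetic_store = {n: f"block-{n}" for n in range(100)}  # large, slow tier
flash_cache = OrderedDict()          # block number -> data, in LRU order

def read_block(n):
    if n in flash_cache:
        flash_cache.move_to_end(n)   # mark as most recently used
        return flash_cache[n]        # fast path: served from flash
    data = magnetic_store[n]         # slow path: read from magnetic disk
    flash_cache[n] = data
    if len(flash_cache) > FLASH_BLOCKS:
        flash_cache.popitem(last=False)   # evict least recently used
    return data

for n in (1, 2, 1, 3, 4, 5, 1):     # block 1 stays "hot"
    read_block(n)
print(list(flash_cache))             # recently used blocks remain in flash
```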
Data are entered into the computer and the processed data made available via input/output devices, also called peripherals. All auxiliary storage devices are used as input/output devices. For many years, the most popular input/output medium was the punched card. The most popular input devices are the computer terminal and internal magnetic hard drives, and the most popular output devices are the computer display screen associated with a terminal (typically displaying output that has been processed by a graphics processing unit) and the printer. Human beings can directly communicate with the computer through computer terminals, entering instructions and data by means of keyboards much like the ones on typewriters, by using a pointing device such as a mouse, trackball, or touchpad, or by speaking into a microphone that is connected to a computer running voice-recognition software. The result of the input may be displayed on a liquid-crystal, light-emitting diode, or cathode-ray tube screen or on printer output. Another important input/output device in modern computers is the network card, which allows the computer to connect to a computer network and the Internet using a wired or radio (wireless) connection. The CPU, main storage, auxiliary storage, and input/output devices collectively make up a computer system.
Sharing the Computer's Resources
Generally, the slowest operations that a computer must perform are those of transferring data, particularly when data is received from or delivered to a human being. The computer's central processor is idle for much of this period, and so two similar techniques are employed to use its power more fully.
Time sharing, used on large computers, allows several users at different terminals to use a single computer at the same time. The computer performs part of a task for one user, then suspends that task to do part of another for another user, and so on. Each user has use of the computer for only a fraction of the time, but the task switching is so rapid that most users are not aware of it. Most of the tens of millions of computers in the world are stand-alone, single-user devices known variously as personal computers or workstations. For them, multitasking involves the same type of switching, but for a single user. This permits a user, for example, to have one file printed and another uploaded to an Internet website while editing a third in a word-processing session and listening to a recording streamed over the Internet. Personal computers can also be linked together in a network, where each computer is connected to others, usually by network, coaxial, or fiber-optic cable or by radio signals (wireless), permitting all to share resources such as printers, hard-disk storage devices, and an Internet connection. Cloud computing is another form of resource sharing. Delivering access to both hardware and software over a network, most often the Internet, cloud computing is designed to allow many individuals and organizations using a wide range of devices both ease of access to computing resources and flexibility in changing the type and volume of the resources to which they have access.
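A toy Python model suggests the rapid switching that both time sharing and multitasking rely on: each task runs for one short slice, is suspended, and rejoins the queue until it finishes. The task names and slice mechanics are illustrative only, not a description of any real scheduler.

```python
# A toy round-robin scheduler: the "CPU" runs each task for one time
# slice, suspends it, and moves on, so every task makes steady progress.
# Python generators stand in for suspendable tasks.

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        yield                        # the task is suspended here

def round_robin(tasks):
    """Give each task one time slice in turn until all have finished."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)            # run one time slice
            tasks.append(current)    # not finished: back of the queue
        except StopIteration:
            pass                     # task completed, drop it

round_robin([task("user A", 3), task("user B", 2), task("user C", 3)])
```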
Computer Programs and Programming Languages
Before a computer can be used for a given purpose, it must first be programmed, that is, prepared for use by loading a set of instructions, or program. The various programs by which a computer controls aspects of its operations, such as those for translating data from one form to another, are known as software, as contrasted with hardware, which is the physical equipment comprising the installation. In most computers the moment-to-moment control of the machine resides in a special software program called an operating system, or supervisor. Other forms of software include assemblers and compilers for programming languages and applications for business and home use (see computer program). Software is of great importance; the usefulness of a highly sophisticated array of hardware can be limited by the lack of adequate software.
Each instruction in the program may be a simple, single step, telling the computer to perform some arithmetic operation, to read the data from some given location in the memory, to compare two numbers, or to take some other action. The program is entered into the computer's memory exactly as if it were data, and on activation, the machine is directed to treat this material in the memory as instructions. Other data may then be read in and the computer can carry out the program to complete the particular task.
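The stored-program idea in the paragraph above can be made concrete with a small sketch: the program sits in the same memory as the data, and the machine simply interprets part of that memory as instructions. The three-instruction machine below is invented for illustration and is far simpler than any real instruction set.

```python
# A toy stored-program machine: instructions and data share one memory,
# and the fetch-execute loop treats cells as instructions until HALT.

memory = [
    ("LOAD", 10),    # put the contents of cell 10 into the accumulator
    ("ADD", 11),     # add the contents of cell 11
    ("STORE", 12),   # write the accumulator back to cell 12
    ("HALT", None),
]
memory += [None] * (10 - len(memory))   # unused cells
memory += [7, 5, 0]                     # cells 10, 11, 12 hold data

accumulator = 0
pc = 0                               # program counter: next instruction
while True:
    op, operand = memory[pc]         # fetch: instructions are just data
    pc += 1
    if op == "LOAD":
        accumulator = memory[operand]
    elif op == "ADD":
        accumulator += memory[operand]
    elif op == "STORE":
        memory[operand] = accumulator
    elif op == "HALT":
        break

print(memory[12])                    # prints 12 (that is, 7 + 5)
```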
Since computers are designed to operate with binary numbers, all data and instructions must be represented in this form; the machine language, in which the computer operates internally, consists of the various binary codes that define instructions together with the formats in which the instructions are written. Since it is time-consuming and tedious for a programmer to work in actual machine language, a programming language, or high-level language, designed for the programmer's convenience, is used for the writing of most programs. The computer is programmed to translate this high-level language into machine language and then solve the original problem for which the program was written. Many high-level programming languages are now universal, varying little from machine to machine.
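One convenient way to watch this translation happen is with Python's standard dis module, which lists the lower-level bytecode instructions that the CPython interpreter compiles a high-level function into. Bytecode stands in here for true machine language, and the exact instruction names shown vary between Python versions.

```python
# High-level source is translated into lower-level instructions before
# execution; dis displays CPython's bytecode for a function.
import dis

def convert(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

dis.dis(convert)   # lists instructions such as LOAD_FAST and LOAD_CONST
```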
Development of Computers
Although the development of digital computers is rooted in the abacus and early mechanical calculating devices, Charles Babbage is credited with the design of the first modern computer, the “analytical engine,” during the 1830s. Vannevar Bush built a mechanically operated device, called a differential analyzer, in 1930; it was the first general-purpose analog computer. John Atanasoff constructed the first electronic digital computing device in 1939; a full-scale version of the prototype was completed in 1942 at Iowa State College (now Iowa State Univ.). In 1941 Konrad Zuse built the Z3, a fully operational electromechanical computer.
During World War II, the Colossus was developed for British codebreakers; it was the first programmable electronic digital computer. The Mark I, or Automatic Sequence Controlled Calculator, completed in 1944 at Harvard by Howard Aiken, was the first machine to execute long calculations automatically, while the first all-purpose electronic digital computer, ENIAC (Electronic Numerical Integrator And Calculator), which used thousands of vacuum tubes, was completed in 1946 at the Univ. of Pennsylvania. UNIVAC (UNIVersal Automatic Computer) became (1951) the first computer to handle both numeric and alphabetic data with equal facility; intended for business and government use, this was the first widely sold commercial computer.
First-generation computers were supplanted by the transistorized computers (see transistor) of the late 1950s and early 60s, second-generation machines that were smaller, used less power, and could perform a million operations per second. They, in turn, were replaced by the third-generation integrated-circuit machines of the mid-1960s and 1970s that were even smaller and were far more reliable. The 1970s, 80s, and 90s were characterized by the development of the microprocessor and the evolution of increasingly smaller but powerful computers, such as the personal computer and personal digital assistant (PDA), which ushered in a period of rapid growth in the computer industry.
The World Wide Web was unveiled in 1990, and with the development of graphical web browser programs in succeeding years the Web and the Internet spurred the growth of general purpose home computing and the use of computing devices as a means of social interaction. Smartphones, which integrate a range of computer software with a cellular telephone that now typically has a touchscreen interface, date to 2000 when a PDA was combined with a cellphone. Although computer tablets date to the 1990s, they only succeeded commercially in 2010 with the introduction of Apple's iPad, which built on software developed for smartphones. The increasing screen size on some smartphones has made them the equivalent of smaller computer tablets, leading some to call them phablets.
See S. G. Nash, A History of Scientific Computing (1990); D. I. A. Cohen, Introduction to Computer Theory (2d ed. 1996); P. Norton, Peter Norton's Introduction to Computers (2d ed. 1996); A. W. Biermann, Great Ideas in Computer Science: A Gentle Introduction (2d ed. 1997); R. L. Oakman, The Computer Triangle: Hardware, Software, People (2d ed. 1997); R. Maran, Computers Simplified (4th ed. 1998); A. S. Tanenbaum and J. R. Goodman, Structured Computer Organization (4th ed. 1998).
analog computer [′an·əl‚äg kəm′pyüd·ər]
A computer or computational device in which the problem variables are represented as continuous, varying physical quantities. An analog computer implements a model of the system being studied. The physical form of the analog may be functionally similar to that of the system, but more often the analogy is based solely upon the mathematical equivalence of the interdependence of the computer variables and the variables in the physical system. See Simulation
An analog computer is classified either in accordance with its use (general- or specific-purpose) or based on its construction (hydraulic, mechanical, or electronic). General-purpose implies programmability and adaptability to different applications or the ability to solve many kinds of problems. Most electronic analog computers were general-purpose systems, either real-time analog computers in which the results were obtained without any significant time-scale changes, or high-speed repetitive operation computers.
Since the 1970s, digital computer programs have been developed that essentially duplicate the functionality of the analog computer. Modern simulation languages, such as ACSL, GASP, GPSS, SLAM, and Simscript, have replaced electronic analog computers. They provide nearly the same highly interactive and parallel solution capabilities as electronic analog computers, but without the technical shortcomings of electronics: accuracy inherently limited to about 0.01%, effective bandwidths limited to roughly 1 MHz, and cumbersome, time-consuming programming. Simulation languages also avoid the large purchase investments and the continual maintenance dependencies of complex electronic systems.
Another type of analog computer is the digital multiprocessor analog system, in which the relatively slow speeds of sequential digital increment calculations have been radically boosted through parallel processing. In this type of analog computer it is possible to retain the programming convenience and data storage of the digital computer while approximating the speed, interaction potential, and parallel computations of the traditional electronic analogs.
The digital multiprocessor analog computer typically utilizes several specially designed high-speed processors for the numerical integration functions, the data (or variable) memory distributions, the arithmetic functions, and the decision (logic and control) functions. All variables remain as fixed or floating-point digital data, accessible at all times for computational and operational needs.
The typical modern general-purpose analog computer consists of a console containing a collection of operational amplifiers; computing elements, such as summing networks, integrator networks, attenuators, multipliers, and function generators; logic and interface units; control circuits; power supplies; a patch bay; and various meters and display devices. The patch bay is arranged to bring input and output terminals of all programmable devices to one location, where they can be conveniently interconnected by various patch cords and plugs to meet the requirements of a given problem. Prewired problem boards can be exchanged at the patch bay in a few seconds and new coefficients set up typically in less than a half hour. Extensive automatic electronic patching systems have been developed to permit fast setup, as well as remote and time-shared operation.
The analog computer basically represents an instrumentation of calculus, in that it is designed to solve ordinary differential equations. This capability lends itself to the implementation of simulated models of dynamic systems. The computer operates by generating voltages that behave like the physical or mathematical variables in the system under study. Each variable is represented as a continuously varying (or steady) voltage signal at the output of a programmed computational unit. Specific to the analog computer is the fact that individual circuits are used for each feature or equation being represented, so that all variables are generated simultaneously. Thus the analog computer is a parallel computer in which the configuration of the computational units allows direct interactions of the computed variables at all times during the solution of a problem.
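The simulation languages mentioned earlier imitate this behavior digitally. As a brief sketch, the Python fragment below steps a damped mass-spring equation, x'' = -(k/m)x - (c/m)x', forward in time, with each line playing the role of one analog computing element (a summing network and two integrators); the parameter values are arbitrary illustrations.

```python
# Digital imitation of an analog computer solving an ordinary
# differential equation: a damped mass-spring system stepped in time.

k, m, c = 4.0, 1.0, 0.5              # spring constant, mass, damping
x, v = 1.0, 0.0                      # initial displacement and velocity
dt = 0.01                            # integration time step, seconds

for step in range(1000):             # 10 simulated seconds
    a = -(k / m) * x - (c / m) * v   # summing network: compute x''
    v += a * dt                      # first integrator: x'' -> x'
    x += v * dt                      # second integrator: x' -> x

print(round(x, 4))                   # displacement after 10 s
```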
To solve a problem using an analog computer, the problem solver goes through a procedure of general analysis, data preparation, analog circuit development, and patchboard programming. Test runs of subprograms may also be made to examine partial-system dynamic responses before eventually running the full program to derive specific and final answers. The problem-solving procedure typically involves eight major steps, as follows (a small numerical illustration of the scaling step appears after the list):
- The problem under study is described with a set of mathematical equations or, when that is not possible, the system configuration and the interrelations of component influences are defined in block-diagram form, with each block described in terms of black-box input-output relationships.
- Where necessary, the description of the system (equations or system block diagram) is rearranged in a form that may better suit the capabilities of the computer, that is, avoiding duplications or excessive numbers of computational units, or avoiding algebraic (nonintegrational) loops.
- The assembled information is used to sketch out an analog circuit diagram which shows in detail how the computer could be programmed to handle the problem and achieve the objectives of the study.
- System variables and parameters are then scaled to fall within the operational ranges of the computer. This may require revisions of the analog circuit diagram and choice of computational units.
- The finalized circuit arrangement is patched on the computer problem board.
- Numerical values are set up on the attenuators, the initial conditions of the entire system model established, and test values checked.
- The computer is run to solve the equations or simulate the black boxes so that the resultant values or system responses can be obtained. This gives the initial answers and the “feel” for the system.
- Multiple runs are made to check the responses for specific sets of parameters and to explore the influences of problem (system) changes, as well as the behavior which results when the system configuration is driven with different forcing functions.
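Step 4, amplitude scaling, can be illustrated with a short calculation. Assuming a machine whose amplifiers operate over a ±10-volt range, a figure typical of electronic analog computers, a problem variable expected to reach ±500 units must be scaled down before it is patched; the variable and its expected maximum below are invented for the example.

```python
# Amplitude scaling: choose a scale factor so a problem variable fits
# the computer's operating range. The ±10 V range is a typical figure.

MACHINE_RANGE = 10.0                 # volts: amplifiers saturate beyond this

def scale_factor(expected_max):
    """Volts per problem unit, so the variable peaks at full scale."""
    return MACHINE_RANGE / expected_max

k = scale_factor(500.0)              # variable expected to reach ±500 units
print(k)                             # 0.02 V per problem unit
print(250.0 * k)                     # a value of 250 units appears as 5 V
```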
The accuracy of the calculations on a digital computer can often be increased through double precision techniques and more precise algorithms, but at the expense of extended solution time, due to the computer's serial nature of operation. Also, the more computational steps there are to be done, the longer the digital computer will take to do them. On the other hand, the basic solution speed is very rapid on the analog computer because of its parallel nature, but increasing problem complexity demands larger computer size. Thus, for the analog computer the time remains the same regardless of the complexity of the problem, but the size of the computer required grows with the problem.
Interaction between the user and the computer during the course of any calculation, with the ability to vary parameters during computer runs, is a highly desirable and insight-generating part of computer usage. This hands-on interaction with the computed responses is simple to achieve with analog computers. For digital computers, interaction usually takes place through a computer keyboard terminal, between runs, or in an on-line stop-go mode. An often-utilized system combines the speed and interaction possibilities of an analog computer with the accuracy and programming flexibility of a digital computer. This combination is specifically designed into the hybrid computer.
In a modern analog-hybrid console, the mode switches in the integrators are interfaced with the digital computer to permit fast iterations of dynamic runs under digital computer control. Data flow in many ways and formats between the analog computer with its fast, parallel circuits and the digital computer with its sequential, logic-controlled programs. Special high-speed analog-to-digital and digital-to-analog converters translate between the continuous signal representations of variables in the analog domain and the numerical representations of the digital computer. Control and logic signals are more directly compatible and require only level and timing compatibility. See Analog-to-digital converter
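The conversion step can be sketched numerically: an analog-to-digital converter maps a continuous voltage to the nearest of 2^n discrete codes, and a digital-to-analog converter maps a code back to a representative voltage. The 8-bit resolution and ±10-volt range below are illustrative choices, not a description of any particular converter.

```python
# A simple model of analog-to-digital and digital-to-analog conversion.

BITS = 8
V_MIN, V_MAX = -10.0, 10.0
LEVELS = 2 ** BITS                   # 256 discrete codes

def adc(voltage):
    """Map a voltage in [V_MIN, V_MAX] to an integer code."""
    clamped = max(V_MIN, min(V_MAX, voltage))
    fraction = (clamped - V_MIN) / (V_MAX - V_MIN)
    return min(LEVELS - 1, int(fraction * LEVELS))

def dac(code):
    """Map a code back to the center voltage it represents."""
    step = (V_MAX - V_MIN) / LEVELS
    return V_MIN + (code + 0.5) * step

print(adc(3.3))                      # quantized code for 3.3 V
print(round(dac(adc(3.3)), 3))       # recovered voltage, within one step
```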
The programming of hybrid models is a more complex challenge than described above, requiring the user to consider the parallel action of the analog computer interlaced with the step-by-step progression of computations in the digital computer. For example, in simulating the mission of a space vehicle, the capsule control dynamics will typically be handled on the analog computer in continuous form, but interfaced with the digital computer, where the navigational trajectory is calculated. See Computer
analog computer
A device that processes infinitely varying signals, such as voltage or frequencies. A thermometer is a simple analog computer. As the temperature varies, the mercury moves correspondingly. A slide rule is another example. Although special-purpose, complex analog computers are built, almost all computers are digital. Digital methods provide programming flexibility. See digital computer.
The Antikythera Analog Computer: Estimated to be at least 2,000 years old and discovered in 1901, this is the largest gear (5.5") in the analog device found in the Antikythera Roman-era shipwreck. It modeled the movement of celestial bodies in order to calculate the position of the sun and moon at various times. (Image courtesy of Creative Commons license 2.5 Generic, http://creativecommons.org/licenses/by/2.5.)