

a state-of-the-art, extremely powerful computer capable of manipulating massive amounts of data in a relatively short time. Supercomputers are very expensive and are employed for specialized scientific and engineering applications that must handle very large databases or do a great amount of computation, among them meteorology, animated graphics, fluid dynamic calculations, nuclear energy research and weapon simulation, and petroleum exploration. There are two approaches to the design of supercomputers. One, called massively parallel processing (MPP), is to chain together thousands of commercially available microprocessors utilizing parallel-processing techniques. A variant of this, called a Beowulf cluster, or cluster computing, employs large numbers of personal computers interconnected by a local area network and running programs written for parallel processing. The other approach, called vector processing, is to develop specialized hardware to solve complex calculations. This technique was employed (2002) in the Earth Simulator, a Japanese supercomputer with 640 nodes composed of 5104 specialized processors to execute 35.6 trillion mathematical operations per second; it is used to analyze earthquake and weather patterns and climate change, including global warming. Operating systems for supercomputers, formerly largely Unix-based, are now typically Linux-based.
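As a quick sanity check on the Earth Simulator figures quoted above, dividing the aggregate rate by the processor count gives the approximate per-processor rate (a back-of-the-envelope sketch, not a figure from the source):

```python
# Back-of-the-envelope check on the Earth Simulator figures quoted above:
# aggregate rate divided by processor count gives the per-processor rate.

total_ops_per_sec = 35.6e12   # 35.6 trillion operations per second
processors = 5104

per_processor = total_ops_per_sec / processors
print(round(per_processor / 1e9, 1))  # roughly 7.0 billion operations per second per processor
```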

Advances in supercomputing have regularly resulted in new supercomputers that significantly exceed the capabilities of those only a year older; by 2012 the fastest supercomputer was more than 250,000 times faster, in calculations per second, than the fastest in 1993. Although calculation speed is the standard for measuring supercomputer power, it is not an accurate indicator of everyday performance; most supercomputers are not fully utilized when running programs. Supercomputers can require significant amounts of electrical power, and many use water and refrigeration for cooling, but some are air-cooled and use no more power than the average home. In 2003 scientists at Virginia Tech assembled a relatively low-cost supercomputer using 1,100 dual-processor Apple Macintoshes; it was ranked at the time as the third fastest machine in the world.

The Columbia Electronic Encyclopedia™ Copyright © 2013, Columbia University Press. Licensed from Columbia University Press. All rights reserved.


(computer science)
A computer which is among those with the highest speed, largest functional size, biggest physical dimensions, or greatest monetary cost in any given period of time.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.


A computer which, among existing general-purpose computers at any given time, is superlative, often in several senses: highest computation rate, largest memory, or highest cost. Predominantly, the term refers to the fastest “number crunchers,” that is, machines designed to perform numerical calculations at the highest speed that the latest electronic device technology and the state of the art of computer architecture allow.

The demand for the ability to execute arithmetic operations at the highest possible rate originated in computer applications areas collectively referred to as scientific computing. Large-scale numerical simulations of physical processes are often needed in fields such as physics, structural mechanics, meteorology, and aerodynamics. A common technique is to compute an approximate numerical solution to a set of partial differential equations which mathematically describe the physical process of interest but are too complex to be solved by formal mathematical methods. This solution is obtained by first superimposing a grid on a region of space, with a set of numerical values attached to each grid point. Large-scale scientific computations of this type often require hundreds of thousands of grid points with 10 or more values attached to each point, with 10 to 500 arithmetic operations necessary to compute each updated value, and hundreds of thousands of time steps over which the computation must be repeated before a steady-state solution is reached. See Computational fluid dynamics, Numerical analysis, Simulation
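The grid-point update described above can be illustrated with a minimal sketch (illustrative, not from the source): one relaxation step for Laplace's equation on a tiny 2-D grid, where each interior point is recomputed from its neighbors and the step would be repeated over many iterations.

```python
# Illustrative sketch: one Jacobi relaxation step for Laplace's equation
# on a small 2-D grid. Each interior point is updated from its four
# neighbors (a 5-point stencil); a real simulation repeats this over
# many time steps until the solution settles.

def jacobi_step(grid):
    """Return a new grid where each interior point becomes the
    average of its four neighbors; boundary values stay fixed."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                grid[i][j-1] + grid[i][j+1])
    return new

# 4x4 grid with a "hot" top boundary fixed at 1.0, all else 0.0
grid = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]
grid = jacobi_step(grid)
print(grid[1][1])  # 0.25 -- interior point pulled toward the hot boundary
```

Each grid point's update here costs a handful of arithmetic operations; scaling to hundreds of thousands of points and time steps is what creates the demand for supercomputer arithmetic rates.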

Two lines of technological advancement have significantly contributed to what roughly amounts to a doubling of the fastest computers' speeds every year since the early 1950s—the steady improvement in electronic device technology and the accumulation of improvements in the architectural designs of digital computers.

Computers incorporate very large-scale integrated (VLSI) circuits with tens of millions of transistors per chip for both logic and memory components. A variety of types of integrated circuitry is used in contemporary supercomputers. Several use high-speed complementary metal-oxide semiconductor (CMOS) technology. Throughout most of the history of digital computing, supercomputers generally used the highest-performance switching circuitry available at the time—which was usually the most exotic and expensive. However, many supercomputers now use the conventional, inexpensive device technology of commodity microprocessors and rely on massive parallelism for their speed. See Computer storage technology, Concurrent processing, Integrated circuits, Logic circuits

Increases in computing speed which are purely due to the architectural structure of a computer can largely be attributed to the introduction of some form of parallelism into the machine's design: two or more operations which were performed one after the other in previous computers can now be performed simultaneously. See Computer systems architecture

Pipelining is a technique which allows several operations to be in progress in the central processing unit at once. The first form of pipelining used was instruction pipelining. Since each instruction must have the same basic sequence of steps performed, namely instruction fetch, instruction decode, operand fetch, and execution, it is feasible to construct an instruction pipeline, where each of these steps happens at a separate stage of the pipeline. The efficiency of the instruction pipeline depends on the likelihood that the program being executed allows a steady stream of instructions to be fetched from contiguous locations in memory.
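The benefit of overlapping those four stages can be sketched with simple cycle counting (an idealized model, assuming one stage per cycle and no stalls):

```python
# Idealized model of a 4-stage instruction pipeline (fetch, decode,
# operand fetch, execute), assuming one stage per clock cycle and no
# stalls. Once the pipe is full, one instruction completes per cycle.

STAGES = ["fetch", "decode", "operand fetch", "execute"]

def pipelined_cycles(n_instructions, n_stages=len(STAGES)):
    """Cycles to run n instructions: fill the pipe, then one per cycle."""
    return n_stages + (n_instructions - 1)

def serial_cycles(n_instructions, n_stages=len(STAGES)):
    """Cycles without pipelining: every instruction takes all stages."""
    return n_stages * n_instructions

print(pipelined_cycles(100))  # 103 cycles with pipelining
print(serial_cycles(100))     # 400 cycles without
```

The near-4x gain holds only while instructions stream in steadily; branches and non-contiguous fetches drain the pipeline, which is the dependence the text describes.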

The central processing unit nearly always has a much faster cycle time than the memory. This implies that the central processing unit is capable of processing data items faster than a memory unit can provide them. Interleaved memory is an organization of memory units which at least partially relieves this problem.
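Low-order interleaving, the simplest such organization, assigns consecutive addresses to different banks so that sequential accesses can overlap (a sketch of the addressing rule, not of any particular machine):

```python
# Sketch of low-order memory interleaving: consecutive addresses are
# assigned to different banks (bank = address mod number of banks),
# so a run of sequential accesses is spread across banks that can
# cycle in parallel instead of queuing on a single slow unit.

NUM_BANKS = 4

def bank_of(address):
    """Low-order interleaving: the bank is the address modulo the bank count."""
    return address % NUM_BANKS

# Eight consecutive addresses rotate through all four banks:
print([bank_of(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```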

Parallelism within arithmetic and logical circuitry has been introduced in several ways. Adders, multipliers, and dividers now operate in bit-parallel mode, while the earliest machines performed bit-serial arithmetic. Independently operating parallel functional units within the central processing unit can each perform an arithmetic operation such as add, multiply, or shift. Array processing is a form of parallelism in which the instruction execution portion of a central processing unit is replicated several times and connected to its own memory device as well as to a common instruction interpretation and control unit. In this way, a single instruction can be executed at the same time on each of several execution units, each on a different set of operands. This kind of architecture is often referred to as single-instruction stream, multiple-data stream (SIMD).
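The SIMD idea—one instruction, many data—can be mimicked in plain Python (an illustration of the execution model, with ordinary sequential code standing in for the replicated hardware lanes):

```python
# Illustration of the SIMD execution model: a single "instruction"
# (one operation) is applied across every lane of data, as if each
# replicated execution unit held one operand pair. Plain Python
# stands in for the lockstep hardware.

import operator

def simd_execute(op, lanes_a, lanes_b):
    """Apply one operation to all lanes; hardware would do this in lockstep."""
    return [op(a, b) for a, b in zip(lanes_a, lanes_b)]

print(simd_execute(operator.add, [1, 2, 3, 4], [10, 20, 30, 40]))
# [11, 22, 33, 44] -- one add instruction, four results
```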

Vector processing is the term applied to a form of pipelined arithmetic units which are specialized for performing arithmetic operations on vectors, which are uniform, linear arrays of data values. It can be thought of as a type of SIMD processing, since a single instruction invokes the execution of the same operation on every element of the array. See Computer programming, Programming languages
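A classic vector instruction is SAXPY (y = a·x + y), which a vector unit would stream through its pipelined arithmetic; a scalar sketch of the same operation (illustrative only) looks like this:

```python
# Sketch of a single vector instruction's worth of work: SAXPY
# (y = a*x + y) over whole vectors. A vector processor would stream
# the elements through a pipelined arithmetic unit; here a loop
# stands in for that pipeline.

def saxpy(a, x, y):
    """Elementwise a*x + y over two equal-length vectors."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# [12.0, 14.0, 16.0]
```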

A central processing unit can contain multiple sets of the instruction execution hardware for either scalar or vector instructions. The task of scheduling instructions which can correctly execute in parallel with one another is generally the responsibility of the compiler or special scheduling hardware in the central processing unit. Instruction-level parallelism is almost never visible to the application programmer.

Multiprocessing is a form of parallelism that has complete central processing units operating in parallel, each fetching and executing instructions independently from the others. This type of computer organization is called multiple-instruction stream, multiple-data stream (MIMD). See Multiprocessing
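MIMD execution can be sketched with Python's standard `multiprocessing` module, where each worker process fetches and runs its own instruction stream on its own slice of data (a toy decomposition, not a supercomputer workload):

```python
# Sketch of MIMD-style execution: independent processes, each running
# its own instruction stream on its own portion of the data, with the
# partial results combined at the end.

from multiprocessing import Pool

def worker(chunk):
    # Each process executes this independently of the others.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split the problem into four independent chunks, one per process.
    chunks = [range(0, 250), range(250, 500), range(500, 750), range(750, 1000)]
    with Pool(4) as pool:
        partials = pool.map(worker, chunks)
    print(sum(partials))  # 332833500 -- same result as one serial loop
```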

McGraw-Hill Concise Encyclopedia of Engineering. © 2002 by The McGraw-Hill Companies, Inc.


A broad term for one of the fastest computers currently available. Such computers are typically used for number crunching including scientific simulations, (animated) graphics, analysis of geological data (e.g. in petrochemical prospecting), structural analysis, computational fluid dynamics, physics, chemistry, electronic design, nuclear energy research and meteorology. Perhaps the best known supercomputer manufacturer is Cray Research.

A less serious definition, reported from about 1990 at the University of New South Wales, states that a supercomputer is any computer that can outperform IBM's current fastest, thus making it impossible for IBM to ever produce a supercomputer.
This article is provided by FOLDOC, the Free Online Dictionary of Computing.


The fastest computer currently available. Supercomputers are typically used for simulations in petroleum exploration and production, structural analysis, computational fluid dynamics, physics and chemistry, electronic design, nuclear energy research and meteorology.

From Two Million to Quadrillions
Introduced in 1964 and executing two million instructions per second (2 MIPS), Control Data's CDC 6600 is considered the first supercomputer. Years ago, it took a supercomputer to perform real-time animation, but that changed as high-performance desktop computers became more capable. Today, supercomputers can have millions of processing cores, and their floating point calculations are measured in quadrillions of operations per second (see petaFLOPS). See supercomputer sites and grid computing.
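The scale of that progression can be put in rough numbers (order-of-magnitude only, since MIPS and floating-point operations are not strictly comparable):

```python
# Rough scale comparison: the CDC 6600's 2 MIPS (1964) versus a
# petaflop-class machine's 1e15 floating-point operations per second.
# MIPS and FLOPS measure different things, so treat this as an
# order-of-magnitude illustration only.

cdc_6600 = 2e6   # instructions per second, 1964
petaflop = 1e15  # floating-point operations per second, modern machine

print(f"{petaflop / cdc_6600:.0e}")  # 5e+08 -- about half a billion times the rate
```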

Supercomputer vs. Quantum Computer Speed
Supercomputers may be the fastest machines available, but, although greatly enhanced, they essentially employ the same fundamental architecture as a desktop computer. For certain problems, the speed of quantum computers is expected to exceed that of supercomputers by orders of magnitude. See quantum supremacy and quantum computing.
Copyright © 1981-2019 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.