Amdahl's law



Amdahl's law

[′am‚dälz ‚lȯ]
(computer science)
A law stating that the speed-up that can be achieved by distributing a computer program over p processors cannot exceed 1/{f + [(1 - f)/p]}, where f is the fraction of the work of the program that must be done in serial mode.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
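
As a minimal illustration, the formula can be written as a short Python function (the name amdahl_speedup and the example values are ours, not from the dictionary entry):

def amdahl_speedup(f, p):
    # Maximum speedup on p processors when a fraction f of the
    # work must be done in serial mode (Amdahl's law).
    return 1.0 / (f + (1.0 - f) / p)

# Example: a program that is 5% serial gains at most about a
# 5.9x speedup on 8 processors.
print(amdahl_speedup(0.05, 8))  # ~5.93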

Amdahl's Law

(parallel)
(Named after Gene Amdahl) If F is the fraction of a calculation that is sequential, and (1-F) is the fraction that can be parallelised, then the maximum speedup that can be achieved by using P processors is 1/(F+(1-F)/P).

[Gene Amdahl, "Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities", AFIPS Conference Proceedings, (30), pp. 483-485, 1967].
This article is provided by FOLDOC - Free Online Dictionary of Computing (foldoc.org)
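
One consequence worth making explicit: as P grows without bound, the speedup approaches the ceiling 1/F set by the sequential fraction. A small Python sketch, assuming an illustrative F of 0.1:

F = 0.1  # illustrative sequential fraction
for P in (2, 8, 32, 128, 1024):
    print(P, round(1.0 / (F + (1.0 - F) / P), 2))
# Prints 1.82, 4.71, 7.8, 9.34, 9.91: the values climb toward
# the asymptote 1/F = 10, no matter how many processors are added.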

Amdahl's law

"Overall system speed is governed by the slowest component," coined by Gene Amdahl, chief architect of IBM's first mainframe series and founder of Amdahl Corporation and other companies. Amdahl's law applied to networking. The slowest device in the network will determine the maximum speed of the network. See laws.
Copyright © 1981-2019 by The Computer Language Company Inc. All rights reserved.
References in periodicals archive
Amdahl's Law [5] is one of the few fundamental laws of computing that bear on the enhancement of system performance.
RELATED ARTICLE: Promise and Limits of Amdahl's law and Moore's Law
This equation states Amdahl's law on the basis of a computing-centric system, which never takes into account the potential cost of data preparation.
One of these programmers even wrote a paper explaining why Amdahl's law is irrelevant.
We propose a load balancing technique that can overcome the performance limitations of Amdahl's law. In order to reduce the idle time caused by the data dependency in the second workload, we distribute the data-independent first workload unevenly across the cores and keep the data dependency in the second workload.
The reason for this lies in Amdahl's law. The most important inherently sequential part in our program is the quantization control.
We will start with Amdahl's Law [1], which in its simplest form says that
Distrust that a large speedup is achievable on massively parallel systems stems mainly from Amdahl's law. Amdahl's law indicates that the maximum speedup, even on a parallel system with an infinite number of processors, cannot exceed 1/k, where k is the fraction of operations that cannot be executed in parallel.
We found that we had ignored, at our peril, Amdahl's law. According to an argument presented by Gene Amdahl in 1967, the speedup possible with multiple processors is speedup = (s + p)/(s + p/N) where s is the time spent running the serial portion of the program, p is the time spent on the parallel portion, and N is the number of processors.
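Amdahl's 1967 formulation is equivalent to the 1/(F + (1 - F)/N) form given above: dividing numerator and denominator by (s + p) and letting F = s/(s + p) be the serial fraction yields the same expression. A quick numerical check in Python, with illustrative values of our choosing:

s, p, N = 2.0, 18.0, 10           # illustrative serial time, parallel time, processors
F = s / (s + p)                   # serial fraction, here 0.1
print((s + p) / (s + p / N))      # Amdahl's 1967 form: ~5.26
print(1.0 / (F + (1.0 - F) / N))  # normalized form: ~5.26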
In his technical note, "Reevaluating Amdahl's Law," in the May 1988 Communications (pp.
The harmonic mean is in accord with Amdahl's law, which, when applied to this example, asserts that making the second program infinitely fast will only halve the total time used by both programs.
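
To make that assertion concrete, assume (as the example implies) that the two programs take equal time; then eliminating the second program's runtime entirely only halves the total:

t1, t2 = 10.0, 10.0    # assumed equal runtimes for the two programs
print((t1 + t2) / t1)  # 2.0: even an infinitely fast second program
                       # only halves the total time used by both

This matches Amdahl's law with a "serial" fraction of one half: 1/(0.5 + 0.5/P) can never exceed 2.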