approximate computing

A computing technique in which the hardware does not produce a precise result. For example, adding 1 plus 1 may yield 2.01 or 1.98 rather than exactly 2. For many applications, including imaging and artificial intelligence, "almost correct" is good enough, and approximate chips use fewer circuits and much less energy.
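A minimal C sketch of one way hardware can trade accuracy for fewer circuits, assuming a "lower-part OR adder" design (one commonly studied approximate-adder technique; the constant K and the name approx_add are illustrative, not part of the definition above). The low K bits are combined with a single bitwise OR instead of a full carry chain, so the sum may be slightly low:

```c
#include <stdint.h>
#include <stdio.h>

#define K 4  /* number of low bits to approximate (illustrative choice) */

/* Lower-part-OR adder sketch: high bits are added exactly; the low K
 * bits use one level of OR gates instead of a ripple-carry chain. */
static uint32_t approx_add(uint32_t a, uint32_t b)
{
    uint32_t low_mask = (1u << K) - 1;
    uint32_t low  = (a | b) & low_mask;              /* OR in place of add */
    uint32_t high = (a & ~low_mask) + (b & ~low_mask); /* exact addition   */
    return high | low;
}

int main(void)
{
    printf("exact:  %u\n", (unsigned)(100u + 100u));
    printf("approx: %u\n", (unsigned)approx_add(100u, 100u));
    return 0;
}
```

With K = 4 this prints 200 for the exact sum and 196 for the approximate one. In this sketch the result equals the exact sum minus the AND of the low K bits of the operands, so the error is bounded by 2^K - 1; shrinking or growing K trades accuracy against circuit cost, which is the essence of the definition above.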
References in periodicals:
Deb Roy, a professor at the MIT Media Lab and Twitter's chief media scientist, says that approximate computing may find a readier audience than ever.
The third is approximate computing: tolerating errors by building in fault tolerance.