approximate computing

A technique in which a processor computes results that are close but not exact. For example, adding 1 plus 1 may yield 2.01 or 1.98 rather than exactly 2. For many applications, including imaging and artificial intelligence, "almost correct" is good enough, and approximate chips use fewer circuits and much less energy.
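One common hardware approach is to omit the lowest-order stages of an adder circuit, so low-order bits are simply ignored. A minimal software sketch of that idea (the function name and bit count are illustrative, not from any real chip):

```python
def approximate_add(a: int, b: int, drop_bits: int = 2) -> int:
    """Model an approximate adder that omits its lowest-order stages:
    the low `drop_bits` bits of each operand are discarded before the
    add, trading accuracy for (in real silicon) fewer circuits and
    lower energy use."""
    mask = ~((1 << drop_bits) - 1)  # clear the low-order bits
    return (a & mask) + (b & mask)

# Exact: 13 + 7 = 20; approximate (dropping 2 low bits): 12 + 4 = 16
result = approximate_add(13, 7)
```

The error is bounded by the discarded bits, which is why error-tolerant workloads such as image processing can often absorb it.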
References in periodicals archive:
But Deb Roy, a professor at the MIT Media Lab and Twitter's chief media scientist, says that approximate computing may find a readier audience than ever.
The third is approximate computing, which tolerates errors through fault tolerance.