16-bit computing

CPUs that process 16 bits as a single unit, compared to 8, 32 or 64. The first personal computers in the late 1970s used 8-bit CPUs but migrated to 16 bits with the IBM PC in 1981, which used the Intel 8088. In the mid-1980s, PCs jumped to 32 bits with the Intel 386, while the Mac had debuted in 1984 with the Motorola 68000, a hybrid CPU with 32-bit registers but a 16-bit ALU and data bus. See 8088, 386 and 68000.
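
As a rough illustration of what processing 16 bits as a single unit means, the C sketch below (a generic example, not tied to any particular CPU) shows the unsigned 16-bit value range and its modulo-2^16 wraparound using the standard uint16_t type:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t max = 0xFFFF;       /* largest unsigned 16-bit value: 65535 */
    uint16_t wrapped = max + 1;  /* 16-bit arithmetic wraps modulo 2^16, yielding 0 */

    printf("max     = %u\n", (unsigned)max);     /* prints 65535 */
    printf("max + 1 = %u\n", (unsigned)wrapped); /* prints 0 */
    /* The corresponding signed 16-bit (int16_t) range is -32768..32767. */
    return 0;
}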

16-bit CPUs are still used as embedded processors in myriad products that do not require greater processing power. However, over the years, considerable design effort has gone into 32-bit CPUs, making them faster, more efficient, smaller and less expensive, and thus competitive with 16-bit CPUs for numerous embedded applications. See 8-bit computing, 32-bit computing and bit specifications.

