32-bit computing

CPUs that process 32 bits as a single unit, compared to 8, 16 or 64. Although 32-bit CPUs were used in mainframes as early as the 1960s, personal computers began to migrate from 16 to 32 bits in the 1980s. Starting with the first 32-bit 386 chips in 1985, Intel x86 CPUs were built with a 16-bit mode for compatibility with 16-bit applications (see 386).

Running in 32-bit mode does not get twice as much real work done as 16-bit mode, because word size governs only one aspect of internal processing. The CPU's clock speed and the speed, size and architecture of the disks, memory and peripheral bus all play important roles in a computer's performance (see throughput). See 64-bit computing and bit specifications.


References in periodicals archive
Until recently, almost all consumer PCs used a 32-bit processor that has registers with an address space 32 bits wide.
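The register width mentioned here can be demonstrated in a few lines of C. This sketch (an illustration, not from the cited article) shows that a 32-bit value wraps at 2^32, which is why such a register can name at most 4 GB of byte addresses:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A 32-bit register holds values 0 .. 2^32 - 1, so the highest
           byte address it can name is the 4 GB boundary minus one. */
        uint32_t reg = UINT32_MAX;   /* 0xFFFFFFFF */
        printf("max 32-bit value: %" PRIu32 "\n", reg);
        reg += 1;                    /* unsigned wrap to 0 is well defined in C */
        printf("after increment:  %" PRIu32 "\n", reg);
        return 0;
    }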
- 32-bit arithmetic and logic operations (ALU)
- 32-bit multiply-accumulate operation (MAC): 32 x 32 + 32 = 64 bits (sketched in C below)
- 32-bit shifter: 32 bits -> 32 bits
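The MAC and shifter listed above can be sketched in C; widening to 64 bits before the multiply mirrors the 32 x 32 + 32 = 64-bit behavior described. The function names and shapes here are illustrative, not taken from any particular device:

    #include <stdint.h>
    #include <stdio.h>

    /* MAC: 32 x 32 + 32 = 64 bits. Widening before the multiply
       preserves the full 64-bit product. */
    static uint64_t mac32(uint32_t a, uint32_t b, uint32_t addend) {
        return (uint64_t)a * (uint64_t)b + (uint64_t)addend;
    }

    /* Shifter: 32 bits -> 32 bits (shift count masked into range). */
    static uint32_t shl32(uint32_t x, unsigned n) {
        return x << (n & 31);
    }

    int main(void) {
        /* Worst case, (2^32 - 1)^2 + (2^32 - 1), still fits in 64 bits. */
        printf("mac32 max: %llu\n",
               (unsigned long long)mac32(UINT32_MAX, UINT32_MAX, UINT32_MAX));
        printf("shl32:     %u\n", shl32(1u, 4));  /* prints 16 */
        return 0;
    }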
With this encoding, the default operand size is 32 bits and the default address size is 64 bits.
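This quote describes x86-64 instruction encoding. One way to observe the same 32-bit-operand / 64-bit-address split from C, assuming a typical LP64 x86-64 toolchain (the sizes are platform-dependent, so treat this as a sketch):

    #include <stdio.h>

    int main(void) {
        /* On a typical LP64 x86-64 build, the default integer operand
           is 32 bits while pointers (addresses) are 64 bits. */
        printf("sizeof(int)   = %zu bytes\n", sizeof(int));    /* usually 4 */
        printf("sizeof(void*) = %zu bytes\n", sizeof(void *)); /* usually 8 */
        return 0;
    }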
Although the new devices are single-chip MCUs, engineers can extend their capabilities via an external data bus up to 32 bits wide. The on-chip direct memory access (DMA) controller can then transfer large volumes of data at high speed without imposing a load on the CPU.
(Theoretically, the Itanium can address 18 exabytes of memory, or 18 billion gigabytes.) This benefit becomes obvious when processing very large problems that don't fit within the 4-GB limit that 32-bit addressing places on system memory.
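The arithmetic behind both figures is simple powers of two; this sketch prints the 2^32 and 2^64 limits the passage refers to:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 2^32 bytes = 4,294,967,296 bytes = 4 GiB: the 32-bit ceiling. */
        uint64_t limit32 = (uint64_t)1 << 32;
        /* 2^64 bytes = 18,446,744,073,709,551,616 bytes, about 18.4 exabytes.
           2^64 is a power of two, so it is exact as a double. */
        double limit64 = 18446744073709551616.0;
        printf("32-bit limit: %llu bytes\n", (unsigned long long)limit32);
        printf("64-bit limit: %.1f exabytes\n", limit64 / 1e18);
        return 0;
    }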