CUDA

The architecture of the NVIDIA graphics processing unit (GPU), starting with its GeForce 8 chips. The CUDA application programming interface (API) exposes the inherent parallel processing capabilities of the GPU to the developer and enables scientific and financial applications to run on the GPU rather than the CPU (see GPGPU).
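
As a minimal sketch of what that API looks like in practice (the file name vector_add.cu and all identifiers below are illustrative and not taken from this entry), a CUDA C program defines a kernel that the GPU executes across thousands of threads in parallel, while the CPU-side code allocates memory, copies data, and launches the kernel:

    // vector_add.cu -- illustrative sketch: each output element is computed by its own GPU thread.
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Kernel: runs on the GPU; one thread handles one array element.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main(void) {
        const int n = 1 << 20;                  // one million elements
        size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device (GPU) buffers.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);

        // Copy inputs to the GPU, launch the kernel across many threads,
        // then copy the result back to the CPU.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]);          // expect 3.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

Each thread computes one element of the result, which is how a data-parallel scientific or financial workload maps onto the many cores of the chip.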

CUDA also supports NVIDIA's PhysX physics simulation algorithms for game developers who want to create more realism in their video games (see PhysX). CUDA was originally an acronym for Compute Unified Device Architecture.

CUDA C/C++ and CUDA Fortran
CUDA programs are written in traditional programming languages. C/C++ and Fortran source code is compiled with NVIDIA's own CUDA compilers for each language. The CUDA Fortran compiler was developed by the Portland Group (PGI), which was acquired by NVIDIA. See GPU.
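
As a rough sketch of that workflow (reusing the illustrative vector_add.cu file from the example above), a CUDA C/C++ source file is compiled with NVIDIA's nvcc compiler, which separates the host (CPU) and device (GPU) portions of the code and builds a single executable:

    nvcc vector_add.cu -o vector_add    # compile host and device code in one step
    ./vector_add                        # launches the kernel on the GPU at run time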