graphics pipeline

In 3D graphics rendering, the stages required to transform a three-dimensional scene into a two-dimensional image on screen. The stages process information that is initially provided only as properties at the end points (vertices) or control points of the geometric primitives used to describe what is to be rendered. The typical primitives in 3D graphics are lines and triangles. The properties provided per vertex include x-y-z coordinates, RGB values, translucency, texture, reflectivity and other characteristics.
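
The following C++ sketch suggests what such a per-vertex record might contain; the structure and field names are illustrative, not any particular API's format.

```cpp
// Hypothetical per-vertex record, illustrating the kinds of
// attributes supplied at each vertex of a primitive.
struct Vertex {
    float x, y, z;        // position in 3D space
    float r, g, b, a;     // color and translucency (alpha)
    float u, v;           // texture coordinates
    float nx, ny, nz;     // surface normal, used for lighting/reflectivity
};

// A triangle primitive is simply three such vertices.
struct Triangle {
    Vertex v0, v1, v2;
};
```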

An Assembly Line
Graphics rendering is like a manufacturing assembly line, with each stage adding something to the output of the previous one. Within a graphics processor, all stages work in parallel. Because of this pipeline architecture, today's graphics processing units (GPUs) perform billions of geometry calculations per second. They are increasingly designed with more memory and more stages, so that more data can be worked on at the same time.

The Goal
For gamers, photorealistic rendering at full speed is the goal, and human skin and facial expressions are the most difficult. Although there are always faster adapters on the market with more memory and advanced circuitry that render 3D action more realistically, thus far, no game has fooled anyone into believing a real person is on screen, except perhaps for a few seconds.

The Pipeline
These are the various stages in the typical pipeline of a modern graphics processing unit (GPU).

Bus Interface/Front End
Interface to the system to send and receive data and commands.

Vertex Processing
Converts each vertex into a 2D screen position, and lighting may be applied to determine its color. A programmable vertex shader enables the application to perform custom transformations for effects such as warping or deformations of a shape.
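
A minimal C++ sketch of the transformation step, assuming a combined model-view-projection matrix; the matrix layout, names and viewport math are illustrative only.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;   // row-major 4x4 matrix

// Multiply a vertex position by the model-view-projection matrix,
// producing a clip-space position (x, y, z, w).
Vec4 transform(const Mat4& mvp, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += mvp[row][col] * v[col];
    return out;
}

// Convert clip space to pixel coordinates for a width x height screen.
void toScreen(Vec4& p, int width, int height) {
    p[0] /= p[3];  p[1] /= p[3];  p[2] /= p[3];        // perspective divide
    p[0] = (p[0] * 0.5f + 0.5f) * width;               // map -1..1 to 0..width
    p[1] = (1.0f - (p[1] * 0.5f + 0.5f)) * height;     // y flipped for screen
}
```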

Clipping
Removes the parts of the image that are not visible in the 2D screen view, such as the backsides of objects or areas that the application or window system covers.
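
A simplified C++ sketch of one part of this stage: trivial rejection of a triangle that lies entirely outside the view volume. Real clippers also split triangles that straddle a clip plane, which is omitted here.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;   // clip-space position (x, y, z, w)

// A triangle can be rejected outright if all three of its vertices lie
// outside the same frustum plane (the six half-spaces -w <= x,y,z <= w).
bool outsideSamePlane(const Vec4& a, const Vec4& b, const Vec4& c) {
    for (int axis = 0; axis < 3; ++axis) {
        if (a[axis] >  a[3] && b[axis] >  b[3] && c[axis] >  c[3]) return true;
        if (a[axis] < -a[3] && b[axis] < -b[3] && c[axis] < -c[3]) return true;
    }
    return false;   // triangle is at least partly inside the view volume
}
```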

Primitive Assembly, Triangle Setup
Vertices are collected and converted into triangles. Information is generated that will allow later stages to accurately generate the attributes of every pixel associated with the triangle.
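
A hedged C++ sketch of primitive assembly, assuming an indexed vertex list; the precomputed area term stands in for the setup data that later stages use to derive per-pixel attributes.

```cpp
#include <vector>
#include <cstddef>

struct Vertex { float x, y; /* ...other attributes... */ };
struct Triangle { Vertex v0, v1, v2; float area2; };

// Walk the index buffer three indices at a time, gathering vertices
// into triangles and precomputing twice the signed triangle area,
// which later stages can use to compute barycentric weights cheaply.
std::vector<Triangle> assemble(const std::vector<Vertex>& verts,
                               const std::vector<unsigned>& indices) {
    std::vector<Triangle> tris;
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        Triangle t{verts[indices[i]], verts[indices[i + 1]],
                   verts[indices[i + 2]], 0.0f};
        t.area2 = (t.v1.x - t.v0.x) * (t.v2.y - t.v0.y)
                - (t.v2.x - t.v0.x) * (t.v1.y - t.v0.y);
        tris.push_back(t);
    }
    return tris;
}
```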

Rasterization
The triangles are filled with pixels known as "fragments," which may or may not wind up in the frame buffer; a fragment is discarded if it makes no change to that pixel or if it winds up being hidden.
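
A minimal software-rasterizer sketch in C++, using the common bounding-box and edge-function approach; actual GPUs use far more elaborate parallel schemes.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Point { float x, y; };
struct Fragment { int px, py; };

// Signed test telling which side of edge (a -> b) the point (x, y) falls on.
float edge(const Point& a, const Point& b, float x, float y) {
    return (x - a.x) * (b.y - a.y) - (y - a.y) * (b.x - a.x);
}

// Scan the triangle's bounding box and emit a fragment for every pixel
// whose center lies inside all three edges.
std::vector<Fragment> rasterize(Point v0, Point v1, Point v2) {
    std::vector<Fragment> frags;
    int minX = (int)std::floor(std::min({v0.x, v1.x, v2.x}));
    int maxX = (int)std::ceil (std::max({v0.x, v1.x, v2.x}));
    int minY = (int)std::floor(std::min({v0.y, v1.y, v2.y}));
    int maxY = (int)std::ceil (std::max({v0.y, v1.y, v2.y}));
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            float cx = x + 0.5f, cy = y + 0.5f;          // pixel center
            float e0 = edge(v0, v1, cx, cy);
            float e1 = edge(v1, v2, cx, cy);
            float e2 = edge(v2, v0, cx, cy);
            if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||       // inside, either winding
                (e0 <= 0 && e1 <= 0 && e2 <= 0))
                frags.push_back({x, y});
        }
    return frags;
}
```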

Occlusion Culling
Removes pixels that are hidden (occluded) by other objects in the scene.
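
One common mechanism is a depth (Z) buffer test, sketched below in C++; hardware occlusion culling also works at coarser granularity, but the principle is the same.

```cpp
#include <vector>

// A fragment survives only if it is nearer to the viewer than whatever
// is already stored in the depth buffer at that pixel.
struct DepthBuffer {
    int width;
    std::vector<float> z;          // one depth value per pixel, 1.0 = far
    DepthBuffer(int w, int h) : width(w), z(w * h, 1.0f) {}

    // Returns true (and records the new depth) if the fragment is visible.
    bool testAndWrite(int x, int y, float depth) {
        float& stored = z[y * width + x];
        if (depth < stored) { stored = depth; return true; }
        return false;              // occluded by a closer surface
    }
};
```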

Parameter Interpolation
The values for each rasterized fragment, such as color, fog and texture coordinates, are computed by interpolating from the values defined at the triangle's vertices.
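
A C++ sketch of the idea, assuming barycentric weights have already been computed for the fragment; perspective correction is omitted for brevity.

```cpp
#include <array>

using Attr = std::array<float, 4>;   // e.g. RGBA color or (u, v, fog, unused)

// Blend a per-vertex attribute across the triangle using the fragment's
// barycentric weights (w0 + w1 + w2 == 1).
Attr interpolate(const Attr& a0, const Attr& a1, const Attr& a2,
                 float w0, float w1, float w2) {
    Attr out{};
    for (int i = 0; i < 4; ++i)
        out[i] = w0 * a0[i] + w1 * a1[i] + w2 * a2[i];
    return out;
}
```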

Pixel Shader
This stage adds textures and final colors to the fragments. Also called a "fragment shader," a programmable pixel shader enables the application to combine a pixel's attributes, such as color, depth and position on screen, with textures in a user-defined way to generate custom shading effects.
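
Real pixel shaders are written in a GPU shading language such as GLSL or HLSL; the C++ sketch below only illustrates the kind of computation involved, with hypothetical texture-sampling code.

```cpp
#include <array>
#include <vector>
#include <algorithm>

using Color = std::array<float, 4>;   // RGBA in 0..1

struct Texture {
    int width, height;
    std::vector<Color> texels;
    // Nearest-neighbor sample at normalized coordinates (u, v).
    Color sample(float u, float v) const {
        int x = std::clamp((int)(u * width),  0, width  - 1);
        int y = std::clamp((int)(v * height), 0, height - 1);
        return texels[y * width + x];
    }
};

// Combine the sampled texel with the fragment's interpolated vertex color.
Color shadeFragment(const Texture& tex, float u, float v, const Color& vertexColor) {
    Color t = tex.sample(u, v);
    return { t[0] * vertexColor[0], t[1] * vertexColor[1],
             t[2] * vertexColor[2], t[3] * vertexColor[3] };
}
```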

Pixel Engines
Mathematically combine the final fragment color, its coverage and degree of transparency with the existing data stored at the associated 2D location in the frame buffer to produce the final color for the pixel to be stored at that location. The output also includes a depth (Z) value for the pixel.
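
A C++ sketch of one such combination, standard "source over" alpha blending; hardware supports many other blend and raster operations.

```cpp
#include <array>

using Color = std::array<float, 4>;   // RGBA in 0..1

// Combine the incoming fragment color (src) with the color already in
// the frame buffer (dst) according to the fragment's transparency.
Color blend(const Color& src, const Color& dst) {
    float a = src[3];
    return { src[0] * a + dst[0] * (1.0f - a),
             src[1] * a + dst[1] * (1.0f - a),
             src[2] * a + dst[2] * (1.0f - a),
             a          + dst[3] * (1.0f - a) };
}
```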

Frame Buffer Controller
The frame buffer controller interfaces to the physical memory used to hold the actual pixel values displayed on screen. The frame buffer memory is also often used to store graphics commands and textures, as well as other attributes associated with each pixel.
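
A minimal C++ sketch of the addressing involved, assuming a simple linear layout; real frame buffer memory is typically tiled and managed by the controller in hardware.

```cpp
#include <vector>
#include <cstdint>

// The screen is stored as a linear block of memory; a pixel's (x, y)
// position is converted to an offset using the row width (pitch).
struct FrameBuffer {
    int width, height;
    std::vector<std::uint32_t> pixels;   // packed 8-bit RGBA per pixel
    FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

    std::uint32_t& at(int x, int y) { return pixels[y * width + x]; }
};
```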