The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures, and more.
Architecture
The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry, and rasterization.
Application stage
The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. In addition to handling user input, this stage may perform processing such as collision detection, speed-up techniques, animation, and force feedback. Collision detection uses algorithms to detect and respond to collisions between virtual objects; for example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage, including texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on the scene information and feeds those primitives into the geometry stage of the pipeline.
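Collision detection in the application stage is often performed on simplified bounding volumes rather than the full model geometry. As a minimal sketch (not from the article; the function name and sphere-based test are illustrative assumptions), a bounding-sphere overlap check can be written as:

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Return True if two bounding spheres overlap (illustrative sketch)."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Spheres overlap when the distance between centers is at most
    # the sum of their radii.
    return distance <= radius_a + radius_b

# Two unit spheres whose centers are 1.5 units apart overlap.
print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True
```

On detecting such an overlap, the application would then compute a response, such as new positions for the colliding objects.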
Geometry stage
The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it, and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.
Model and view transformation
Before the final model is shown on the output device, it is transformed into multiple spaces, or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific operations (translation, rotation, scaling, and shearing) that manipulate the shape or position of a point, line, or shape.
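In practice, such transformations are usually expressed as 4x4 matrices applied to vertices in homogeneous coordinates, so that several of them can be chained by matrix multiplication. A minimal sketch (the helper names are illustrative, not part of any particular API):

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-component vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    """4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rotation_z(angle):
    """4x4 rotation about the z-axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

# Rotate the vertex (1, 0, 0) by 90 degrees about z, then translate along z by 5.
v = [1.0, 0.0, 0.0, 1.0]                 # homogeneous coordinates (w = 1)
v = mat_vec(rotation_z(math.pi / 2), v)  # approximately (0, 1, 0)
v = mat_vec(translation(0, 0, 5), v)     # approximately (0, 1, 5)
```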
Lighting
In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.
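A common way to compute the contribution of a light source is Lambertian (diffuse) shading, where brightness falls off with the cosine of the angle between the surface normal and the direction toward the light. A sketch, assuming both vectors are already normalized (the function name is illustrative):

```python
import math

def lambert_diffuse(normal, light_dir, light_intensity):
    """Diffuse term: intensity scaled by the cosine of the angle between
    the surface normal and the direction toward the light, clamped at 0
    so surfaces facing away from the light receive nothing."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return light_intensity * max(dot, 0.0)

# Light shining straight down onto an upward-facing surface: full intensity.
print(lambert_diffuse((0, 1, 0), (0, 1, 0), 1.0))  # 1.0
# Light 60 degrees off the normal: roughly half intensity.
print(lambert_diffuse((0, 1, 0), (math.sin(math.pi / 3), math.cos(math.pi / 3), 0), 1.0))
```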
Projection
Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic (also called parallel) projection and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that a model appears smaller as its distance from the observer increases. Essentially, perspective projection mimics human sight.
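The shrinking-with-distance behavior of perspective projection comes from dividing by depth. A minimal sketch, using the view-space convention described above (camera at the origin, looking down the negative z-axis; the function name and projection-plane distance `d` are illustrative):

```python
def perspective_project(point, d):
    """Project a view-space point onto the plane z = -d
    (camera at the origin, looking down the negative z-axis)."""
    x, y, z = point
    # Perspective divide: points farther away (larger -z) shrink
    # toward the center of the image.
    return (d * x / -z, d * y / -z)

# A point twice as far away projects to half the size.
print(perspective_project((1.0, 1.0, -2.0), 1.0))  # (0.5, 0.5)
print(perspective_project((1.0, 1.0, -4.0), 1.0))  # (0.25, 0.25)
```

An orthographic projection, by contrast, would simply drop the z coordinate, so the result would not depend on depth at all.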
Clipping
Clipping is the process of removing primitives that lie outside of the view box in order to ease the work of the rasterizer stage. Primitives that partially intersect the view box are cut at its boundary, producing new triangles that are passed to the next stage.
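The core operation of clipping is finding where a primitive crosses a boundary plane and keeping only the inside part. As an illustrative sketch (not the full view-box algorithm), clipping a line segment against the near plane z = -near might look like:

```python
def clip_segment_near(p0, p1, near):
    """Clip a view-space line segment against the near plane z = -near.
    Returns the surviving segment, or None if it lies entirely behind the plane."""
    inside0 = p0[2] <= -near
    inside1 = p1[2] <= -near
    if inside0 and inside1:
        return p0, p1          # fully inside: keep as-is
    if not inside0 and not inside1:
        return None            # fully outside: discard
    # One endpoint is outside: interpolate to the intersection with the plane.
    t = (-near - p0[2]) / (p1[2] - p0[2])
    hit = tuple(a + t * (b - a) for a, b in zip(p0, p1))
    return (p0, hit) if inside0 else (hit, p1)

# A segment crossing the near plane (near = 1) is cut at z = -1.
print(clip_segment_near((0, 0, -5), (0, 0, 3), 1.0))
```

Clipping a triangle applies the same intersection step edge by edge, which is how the cut pieces end up as new triangles.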
Screen mapping
The purpose of screen mapping is to map the coordinates of the primitives that survive the clipping stage onto coordinates on the screen.
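Concretely, this means rescaling normalized coordinates in [-1, 1] to a window of a given pixel size. A sketch, assuming a window origin at the top-left corner (the convention and function name are illustrative; some APIs place the origin at the bottom-left instead):

```python
def screen_map(ndc_x, ndc_y, width, height):
    """Map normalized device coordinates in [-1, 1] to pixel coordinates,
    with (0, 0) at the top-left corner of a width x height window."""
    screen_x = (ndc_x + 1.0) * 0.5 * width
    screen_y = (1.0 - ndc_y) * 0.5 * height  # flip y: NDC +y is up, screen +y is down
    return screen_x, screen_y

print(screen_map(0.0, 0.0, 800, 600))   # (400.0, 300.0) -> window center
print(screen_map(-1.0, 1.0, 800, 600))  # (0.0, 0.0) -> top-left corner
```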
Rasterizer stage
The rasterizer stage applies color and converts the graphic elements into pixels, or picture elements.
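One common way to decide which pixels a triangle covers is the edge-function test: a pixel center is inside a counter-clockwise triangle when it lies on the left side of all three edges. A simplified sketch (a real rasterizer restricts the loop to the triangle's bounding box and interpolates colors; names here are illustrative):

```python
def edge(a, b, p):
    """Signed-area test: positive when p is to the left of the edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Return the set of pixels whose centers a counter-clockwise triangle covers."""
    covered = set()
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if (edge(tri[0], tri[1], p) >= 0 and
                    edge(tri[1], tri[2], p) >= 0 and
                    edge(tri[2], tri[0], p) >= 0):
                covered.add((x, y))
    return covered

# A right triangle covering the lower-left half of a 4x4 pixel grid.
pixels = rasterize([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)], 4, 4)
```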