The traditional use of shaders is to operate on data in the graphics pipeline to control the rendering of an image. Graphics shaders can be classified according to their position in the pipeline, the data being manipulated, and the graphics API being used.
=== Fragment shaders ===
Fragment shaders, also known as pixel shaders, compute color and other attributes of each "fragment": a unit of rendering work affecting at most a single output pixel. The simplest kinds of pixel shaders output one screen pixel as a color value; more complex shaders with multiple inputs/outputs are also possible. Pixel shaders range from simply always outputting the same color, to applying a lighting value, to doing bump mapping, shadows, specular highlights, translucency and other phenomena. They can alter the depth of the fragment (for Z-buffering), or output more than one color if multiple render targets are active.
In 3D graphics, a pixel shader alone cannot produce some kinds of complex effects because it operates only on a single fragment, without knowledge of a scene's geometry (i.e. vertex data). However, pixel shaders do have knowledge of the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of two-dimensional postprocessing effects such as blur, or edge detection/enhancement for cartoon/cel shaders. Pixel shaders may also be applied in intermediate stages to any two-dimensional images (sprites or textures) in the pipeline, whereas vertex shaders always require a 3D scene. For instance, a pixel shader is the only kind of shader that can act as a postprocessor or filter for a video stream after it has been rasterized.
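A postprocessing pass of this kind is a natural fragment shader example. The GLSL sketch below applies a 3x3 box blur to a scene that has been rendered into a texture; the uniform `sceneTex` and the input `texCoord` are illustrative names, not fixed by any API:

```glsl
#version 330 core

// Illustrative names: the full-screen scene texture and the interpolated
// texture coordinate passed in from the vertex stage.
uniform sampler2D sceneTex;
in vec2 texCoord;
out vec4 fragColor;

void main() {
    // Size of one texel, used to address neighbouring pixels.
    vec2 texel = 1.0 / vec2(textureSize(sceneTex, 0));

    // 3x3 box blur: average this fragment's pixel with its eight neighbours.
    vec3 sum = vec3(0.0);
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
            sum += texture(sceneTex, texCoord + vec2(x, y) * texel).rgb;

    fragColor = vec4(sum / 9.0, 1.0);
}
```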
=== Vertex shaders ===
Vertex shaders are run once for each 3D vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the rasterizer. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.
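A typical vertex shader is little more than this transform. In the GLSL sketch below, the matrix uniform names (`model`, `view`, `projection`) are illustrative; the shader maps each vertex to clip space, from which the fixed-function stages derive the screen coordinate and depth value:

```glsl
#version 330 core

// Per-vertex attributes supplied by the application.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

// Illustrative uniform names for the standard transform matrices.
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec2 texCoord;

void main() {
    // Transform the vertex from model space to clip space; the
    // perspective divide and viewport transform then yield the 2D
    // screen coordinate and a depth value for the Z-buffer.
    gl_Position = projection * view * model * vec4(inPosition, 1.0);
    texCoord = inTexCoord;
}
```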
=== Geometry shaders ===
Geometry shaders were introduced in Direct3D 10 and OpenGL 3.2, and were formerly available in OpenGL 2.0+ through extensions. This type of shader can generate new graphics primitives, such as points, lines, and triangles, from those primitives that were sent to the beginning of the graphics pipeline.
Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader. Typical uses of a geometry shader include point sprite generation, geometry tessellation, shadow volume extrusion, and single-pass rendering to a cube map. A typical real-world example of the benefits of geometry shaders would be automatic mesh complexity modification: a series of line strips representing control points for a curve is passed to the geometry shader, and depending on the complexity required the shader can automatically generate extra lines, each of which provides a better approximation of the curve.
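Point sprite generation, for instance, can be written as a short GLSL geometry shader that turns each incoming point into a screen-facing quad. A minimal sketch, in which the uniform `spriteSize` and the output `spriteCoord` are illustrative names:

```glsl
#version 330 core

// Point sprite generation: expand each incoming point into a
// screen-facing quad, emitted as a four-vertex triangle strip.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float spriteSize;   // illustrative: sprite half-width in clip space
out vec2 spriteCoord;       // 0..1 coordinate for texturing the sprite

void main() {
    vec4 center = gl_in[0].gl_Position;
    // Emit the four corners in triangle-strip order.
    for (int i = 0; i < 4; ++i) {
        vec2 corner = vec2((i & 1) == 0 ? -1.0 : 1.0,
                           (i & 2) == 0 ? -1.0 : 1.0);
        spriteCoord = corner * 0.5 + 0.5;
        gl_Position = center + vec4(corner * spriteSize, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}
```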
=== Tessellation shaders ===
As of OpenGL 4.0 and Direct3D 11, a new shader class called a tessellation shader has been added. It adds two new shader stages to the traditional model: tessellation control shaders (also known as hull shaders) and tessellation evaluation shaders (also known as domain shaders), which together allow simpler meshes to be subdivided into finer meshes at run-time according to a mathematical function. The function can be related to a variety of variables, most notably the distance from the viewing camera, to allow active level-of-detail scaling. This allows objects close to the camera to have fine detail, while those farther away can have coarser meshes yet seem comparable in quality. It can also drastically reduce the required mesh bandwidth by allowing meshes to be refined once inside the shader units instead of downsampling very complex ones from memory. Some algorithms can upsample any arbitrary mesh, while others allow for "hinting" in meshes to dictate the most characteristic vertices and edges.
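A tessellation control shader can express this distance-based level of detail directly. The GLSL sketch below assumes patch positions are in world space and uses an illustrative uniform `cameraPos` (the constants are arbitrary tuning values):

```glsl
#version 400 core

// Tessellation control shader: choose a subdivision level per triangle
// patch based on its distance from the camera (a simple LOD scheme).
layout(vertices = 3) out;

uniform vec3 cameraPos;   // illustrative: camera position in world space

void main() {
    // Pass each control point through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Let one invocation set the tessellation levels for the whole patch.
    if (gl_InvocationID == 0) {
        vec3 center = (gl_in[0].gl_Position.xyz +
                       gl_in[1].gl_Position.xyz +
                       gl_in[2].gl_Position.xyz) / 3.0;
        // Nearby patches get up to 16 subdivisions, distant ones as few as 1.
        float level = clamp(64.0 / distance(cameraPos, center), 1.0, 16.0);
        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
    }
}
```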
=== Primitive and mesh shaders ===
Circa 2017, the AMD Vega microarchitecture added support for a new shader stage, primitive shaders, somewhat akin to compute shaders with access to the data necessary to process geometry. Nvidia introduced mesh and task shaders with its Turing microarchitecture in 2018, which are likewise modelled after compute shaders; Turing was the first GPU microarchitecture to support mesh shading through the DirectX 12 Ultimate API, several months before the Ampere RTX 30 series was released. In 2020, AMD and Nvidia released the RDNA 2 and Ampere microarchitectures, which both support mesh shading through DirectX 12 Ultimate. Mesh shaders allow the GPU to handle more complex algorithms, offloading more work from the CPU to the GPU, and in algorithm-intensive rendering can increase the frame rate or the number of triangles in a scene by an order of magnitude. Intel announced that its Arc Alchemist GPUs shipping in Q1 2022 would support mesh shaders.
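As a sketch of the programming model, a minimal mesh shader in GLSL (here using the GL_EXT_mesh_shader extension) is dispatched in workgroups like a compute shader and writes its output vertices and triangles directly, rather than receiving them from fixed-function vertex fetch:

```glsl
#version 450
#extension GL_EXT_mesh_shader : require

// A mesh shader workgroup emits a small batch of geometry directly;
// here a single triangle, much like a compute shader writing a buffer.
layout(local_size_x = 1) in;
layout(triangles, max_vertices = 3, max_primitives = 1) out;

void main() {
    // Declare how many vertices and primitives this workgroup outputs.
    SetMeshOutputsEXT(3, 1);

    gl_MeshVerticesEXT[0].gl_Position = vec4(-0.5, -0.5, 0.0, 1.0);
    gl_MeshVerticesEXT[1].gl_Position = vec4( 0.5, -0.5, 0.0, 1.0);
    gl_MeshVerticesEXT[2].gl_Position = vec4( 0.0,  0.5, 0.0, 1.0);

    // One triangle referencing the three vertices above.
    gl_PrimitiveTriangleIndicesEXT[0] = uvec3(0, 1, 2);
}
```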
=== Ray-tracing shaders ===
Ray-tracing shaders are supported by Microsoft via DirectX Raytracing, by the Khronos Group via Vulkan, GLSL, and SPIR-V, and by Apple via Metal.
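In Vulkan GLSL (the GL_EXT_ray_tracing extension), for example, a ray generation shader casts one ray per pixel into an acceleration structure, while separate closest-hit and miss shaders (not shown) fill in the payload. A minimal sketch; the bindings, payload layout, and camera model are illustrative:

```glsl
#version 460
#extension GL_EXT_ray_tracing : require

// Illustrative bindings: the scene's acceleration structure and output image.
layout(binding = 0, set = 0) uniform accelerationStructureEXT topLevelAS;
layout(binding = 1, set = 0, rgba8) uniform image2D outImage;

// Written by the closest-hit or miss shader (not shown).
layout(location = 0) rayPayloadEXT vec3 hitColor;

void main() {
    // One ray per pixel, shot straight down the -z axis for simplicity.
    vec2 uv = (vec2(gl_LaunchIDEXT.xy) + 0.5) / vec2(gl_LaunchSizeEXT.xy);
    vec3 origin = vec3(uv * 2.0 - 1.0, 1.0);
    vec3 direction = vec3(0.0, 0.0, -1.0);

    traceRayEXT(topLevelAS, gl_RayFlagsOpaqueEXT, 0xFF,
                0, 0, 0,            // SBT offset, stride, miss shader index
                origin, 0.001, direction, 100.0,
                0);                 // payload location

    imageStore(outImage, ivec2(gl_LaunchIDEXT.xy), vec4(hitColor, 1.0));
}
```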
Nvidia and AMD called the parts of the hardware responsible for executing these shaders "ray tracing cores".

== Compute kernels ==