Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility, and performance.
Affine texture mapping
linearly interpolates texture coordinates across a surface, making it the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates in screen space between them. This may be done by incrementing
fixed-point UV coordinates or by an
incremental error algorithm akin to
Bresenham's line algorithm. For polygons that are not perpendicular to the viewer, this leads to noticeable distortion with perspective transformations (as shown in the figure: the checker box texture appears bent), especially for primitives near the camera. Such distortion can be reduced by subdividing polygons into smaller ones. Using quad primitives for rectangular objects can look less incorrect than splitting those rectangles into triangles, but since interpolating four points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the
forward texture mapping used by the Nvidia
NV1, offered efficient quad primitives. With perspective correction, triangles become equivalent to quad primitives and this advantage disappears. For rectangular objects that are at right angles to the viewer (like floors and walls), the perspective only needs to be corrected in one direction across the screen rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor. Affine linear interpolation across that horizontal span will look correct because every pixel along that line is the same distance from the viewer.
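The incremental fixed-point stepping described above can be sketched as follows. This is an illustrative Python sketch (the function name and the 16.16 fixed-point format are assumptions for the example), not any particular rasterizer's code:

```python
# Affine UV stepping across a scanline using 16.16 fixed-point increments,
# as early software rasterizers did: one division per span, only integer
# additions per pixel.
FP_SHIFT = 16
FP_ONE = 1 << FP_SHIFT

def affine_span_uv(u0, v0, u1, v1, num_pixels):
    """Yield integer (u, v) texel coordinates for each pixel in a span."""
    # Convert the endpoints to 16.16 fixed point.
    u = int(u0 * FP_ONE)
    v = int(v0 * FP_ONE)
    # Per-pixel increments, computed once for the whole span.
    steps = max(num_pixels - 1, 1)
    du = (int(u1 * FP_ONE) - u) // steps
    dv = (int(v1 * FP_ONE) - v) // steps
    for _ in range(num_pixels):
        yield (u >> FP_SHIFT, v >> FP_SHIFT)
        u += du
        v += dv
```

Because the inner loop contains only additions and shifts, this maps directly onto the integer registers of period hardware.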
Perspective correctness
accounts for the vertices' positions in 3D space rather than simply interpolating coordinates in 2D screen space. While achieving the correct visual effect, perspective correct texturing is more expensive to calculate. This correction makes the difference between texture coordinates from pixel to pixel smaller in parts of the polygon that are closer to the viewer (stretching the texture wider) and larger in parts that are farther away (compressing the texture). Affine texture mapping directly interpolates a texture coordinate u_{\alpha} between two endpoints u_0 and u_1: u_{\alpha} = (1 - \alpha) u_0 + \alpha u_1 where 0 \le \alpha \le 1. Perspective correct mapping interpolates after dividing by depth z, then uses its interpolated reciprocal to recover the correct coordinate: u_{\alpha} = \frac{ (1 - \alpha) \frac{ u_0 }{ z_0 } + \alpha \frac{ u_1 }{ z_1 } }{ (1 - \alpha) \frac{ 1 }{ z_0 } + \alpha \frac{ 1 }{ z_1 } }
3D graphics hardware typically supports perspective correct texturing. Classic software texture mappers generally performed only simple texture mapping with at most one lighting effect (typically applied through a
lookup table), and the perspective correctness was about 16 times more expensive.
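The two interpolation formulas can be compared directly in code. This is an illustrative sketch (function names are assumptions), not any renderer's implementation:

```python
def affine_u(u0, u1, a):
    """Affine mapping: interpolate u directly in screen space."""
    return (1 - a) * u0 + a * u1

def perspective_u(u0, z0, u1, z1, a):
    """Perspective correct mapping: interpolate u/z and 1/z separately,
    then divide to recover the correct texture coordinate."""
    num = (1 - a) * (u0 / z0) + a * (u1 / z1)
    den = (1 - a) * (1 / z0) + a * (1 / z1)
    return num / den
```

For a span whose near endpoint is at z = 1 and far endpoint at z = 3, the affine midpoint lands at u = 0.5, while the perspective-correct midpoint lands at u = 0.25: the texture is stretched near the viewer and compressed far away, exactly the per-pixel behaviour described above.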
Restricting camera rotation and ruling out ramped floors or slanted walls makes it possible to perform only one perspective correction per horizontal or vertical span rather than one per pixel. The
Doom engine restricted the world to vertical walls and horizontal floors and ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors and ceilings would have a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera
pitch with
shearing, which allowed the appearance of greater freedom while using the same rendering technique. Some engines were able to render texture mapped
heightmaps (e.g.
Nova Logic's
Voxel Space, and the engine for
Outcast) via
Bresenham-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives.
Subdivision for perspective correction
Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: keeping the arithmetic mill busy at all times, and producing faster arithmetic results.
World space subdivision
For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The
Sony PlayStation made extensive use of this because it only supported affine mapping in hardware and had a relatively high triangle throughput compared to its peers.
Screen space subdivision
Software renderers generally prefer screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation), thus lessening the overhead further. Another reason is that affine texture mapping does not fit into the low number of
CPU registers of the
x86 CPU; the
68000 and
RISC processors are much more suited for that approach. A different approach was taken for
Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor. As the polygons are rendered independently, it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the
polygon normal to achieve a more constant z, but the effort seems not to be worth it.
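Quake's scheme of correcting every 16 pixels and interpolating affinely in between can be sketched as follows. This is a simplified Python illustration with assumed names; the real renderer performs the divide on the floating-point unit in parallel with the integer interpolation work:

```python
def quake_style_span(u0, z0, u1, z1, width, step=16):
    """Perspective-correct u at every `step` pixels along a scanline,
    with affine (linear) interpolation between those correction points."""
    def correct(x):
        # Full perspective-correct coordinate at screen position x.
        a = x / (width - 1)
        num = (1 - a) * (u0 / z0) + a * (u1 / z1)
        den = (1 - a) / z0 + a / z1
        return num / den

    out = []
    x = 0
    u_left = correct(0)
    while x < width - 1:
        x_right = min(x + step, width - 1)
        u_right = correct(x_right)       # one divide per 16-pixel subspan
        n = x_right - x
        for i in range(n):               # cheap affine fill in between
            out.append(u_left + (u_right - u_left) * i / n)
        u_left = u_right
        x = x_right
    out.append(u_left)
    return out
```

Only one expensive divide is performed per 16-pixel subspan, while every correction point itself is exactly perspective correct; the error of the affine fill between points is small enough to be invisible in practice.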
Other techniques
One other technique is to approximate the perspective with a faster calculation, such as a polynomial. A second uses the \frac{1}{z_i} value of the last two drawn pixels to linearly extrapolate the next value. For the latter, the division is then done starting from those values so that all that has to be divided is a small remainder. However, the amount of bookkeeping needed makes this technique too slow on most systems. A third technique, used by the
Build Engine (used, most notably, in
Duke Nukem 3D), builds on the constant distance trick used by the
Doom engine by finding and rendering along the line of constant distance for arbitrary polygons.
Hardware implementations
Texture mapping hardware was originally developed for simulation (e.g. as implemented in the
Evans and Sutherland ESIG and Singer-Link Digital Image Generators DIG) and professional
graphics workstations (such as
Silicon Graphics) and broadcast
digital video effects machines such as the
Ampex ADO. Texture mapping hardware later appeared in
arcade cabinets, consumer
video game consoles, and PC
video cards in the mid-1990s. In
flight simulations, texture mapping provided important motion and altitude cues necessary for pilot training that were not available on untextured surfaces. Texture mapping hardware of this kind stored prefiltered texture patterns in memory so that the video processor could access them in real time. Modern
graphics processing units (GPUs) provide specialised
fixed function units called
texture samplers, or
texture mapping units, to perform texture mapping, usually with
trilinear filtering or better multi-tap
anisotropic filtering and hardware for decoding specific formats such as
DXTn. As of 2016, texture mapping hardware is ubiquitous as most
SoCs contain a suitable GPU. Some hardware implementations combine texture mapping with
hidden-surface determination in
tile-based deferred rendering or
scanline rendering; such systems only fetch the visible
texels at the expense of using greater workspace for transformed vertices. Most systems have settled on the
z-buffering approach, which can still reduce the texture mapping workload with front-to-back
sorting. On earlier graphics hardware, there were two competing paradigms of how to deliver a texture to the screen: • Forward texture mapping iterates through each texel on the texture and decides where to place it on the screen. • Inverse texture mapping instead iterates through pixels on the screen and decides what texel to use for each. Of these methods, inverse texture mapping has become standard in modern hardware.
Inverse texture mapping
With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a
rendering primitive is projected to a point on the screen, and each of these points is
mapped to a u,v texel coordinate on the texture. A rasterizer will interpolate between these points to fill in each pixel covered by the primitive. The primary advantage of this method is that each pixel covered by a primitive will be traversed exactly once. Once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen. The main disadvantage is that the
memory access pattern in the
texture space will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by
texture caching techniques, such as the
swizzled texture memory arrangement. The linear interpolation can be used directly for simple and efficient
affine texture mapping, but can also be adapted for
perspective correctness.
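A minimal sketch of inverse mapping along one screen span, using a nearest-neighbour texel fetch with wrapping (the function name and single-span scope are illustrative assumptions):

```python
def inverse_map_span(texture, u0, v0, u1, v1, width):
    """Inverse mapping: visit each screen pixel exactly once and fetch
    the texel it maps to (affine interpolation, nearest-neighbour fetch)."""
    h = len(texture)
    w = len(texture[0])
    out = []
    for x in range(width):
        a = x / max(width - 1, 1)
        u = (1 - a) * u0 + a * u1
        v = (1 - a) * v0 + a * v1
        # Wrap coordinates so out-of-range values repeat the texture.
        out.append(texture[int(v) % h][int(u) % w])
    return out
```

Note that the loop runs once per screen pixel regardless of the texture's size, which is exactly the property that makes the cost of inverse mapping scale with screen coverage.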
Forward texture mapping
Forward texture mapping maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture,
splatting each one onto a pixel of the
frame buffer. This was used by some hardware, such as the
3DO, the
Sega Saturn and the
NV1. The primary advantage is that the texture will be accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly. This method is also well suited for rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective correct texturing was not available in hardware, because the affine distortion of a quad looks less incorrect than the same quad split into two triangles. The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness.
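The overdraw behaviour can be illustrated with a toy forward-mapping loop (a hypothetical sketch, not any specific chip's pipeline): the texel loop always runs over the whole texture, even when the primitive covers only a few screen pixels.

```python
def forward_map(texture, screen_w, screen_h, dest_x, dest_y, scale):
    """Forward mapping: iterate over every texel and splat it onto the
    frame buffer. Returns the frame and the number of writes performed."""
    frame = [[None] * screen_w for _ in range(screen_h)]
    writes = 0
    for ty in range(len(texture)):
        for tx in range(len(texture[0])):
            # Where this texel lands on screen (simple scaled placement).
            sx = dest_x + int(tx * scale)
            sy = dest_y + int(ty * scale)
            if 0 <= sx < screen_w and 0 <= sy < screen_h:
                frame[sy][sx] = texture[ty][tx]
                writes += 1
    return frame, writes
```

Shrinking a 4x4 texture to a 2x2 screen area still performs 16 writes while only 4 distinct pixels end up covered, so most writes are redundant overdraw; the texture reads, however, proceed in perfectly linear order.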
UV mapping became an important technique for 3D modelling and assisted in
clipping the texture correctly when the primitive went past the edge of the screen, but existing hardware did not provide effective implementations of this. These shortcomings could have been addressed with further development, but GPU design has mostly shifted toward using the inverse mapping technique. ==Applications==