Path tracing has difficulty rendering
caustics – bright patches of light that appear when a small or highly directional light source (such as the sun) is
reflected or
refracted by a shiny surface or transparent object before illuminating another surface. The result is usually very noisy, with many "fireflies" (stray bright pixels), or caustics may simply be missing. Bidirectional path tracing can render caustics if light subpaths are allowed to contribute to any pixel in the image (breaking the pixel-by-pixel rendering model), but caustics may still be missing if viewed in a mirror. This type of difficult lighting can also cause noise elsewhere in the image. A different rendering technique called
photon mapping or
photon density estimation is able to render caustics more effectively. Photon mapping works like the "forward" or "particle tracing" portion of bidirectional path tracing, simulating the paths of many "packets" of light emitted by a light source. These are called
photons in the algorithm, but they usually do not actually represent physical photons. Photons are traced through reflections and refractions until they hit a
diffuse (
matte) surface or leave the scene. Information about photons that hit a surface is accumulated in a data structure called a
photon map and used later, rather than being used immediately for rendering a particular pixel. The photon map is implemented using a spatial structure such as a
k-d tree or hash grid. Photon mapping is
biased, because it uses a
density estimation filter to smooth out the data and allow estimating the intensity of light at an arbitrary point on a surface. If an insufficient number of light samples is used, the image may have a blurry or blotchy appearance caused by the filter. Photon mapping is usually combined with some form of
ray tracing, allowing precise direct illumination and reflection and refraction of camera rays, while photon maps are used only for caustics and/or diffuse
global illumination (often with separate maps for each, because caustics require denser sampling). One early variation of photon mapping used a photon map to "guide" rays for global illumination (which was rendered using distribution ray tracing), instead of using the radiance information in the map directly. This is similar to
path guiding in modern path tracing. Another method explored was the use of data from photon maps as
control variates for reducing the variance of path tracing. Both of these approaches can be unbiased.
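The density-estimation step at the core of photon mapping can be sketched in a few lines. The following is a minimal illustration, not a full implementation: it uses a flat list and a linear scan where a real photon map would use a k-d tree or hash grid, and it applies a simple uniform (disc) kernel; the photon data and names are hypothetical.

```python
import math

# Hypothetical photon records: a surface hit position and the flux
# (power) the photon carried. A real photon map stores these in a
# k-d tree or hash grid; a flat list keeps the sketch short.
photons = [
    ((0.1, 0.0, 0.0), 0.25),
    ((0.0, 0.2, 0.0), 0.25),
    ((0.9, 0.9, 0.0), 0.25),  # far from the query point below
]

def radiance_estimate(query, radius):
    """Density estimation with a uniform kernel: sum the flux of all
    photons within `radius` of the query point, then divide by the
    area of the gathering disc. A larger radius smooths (and blurs)
    the estimate; too few photons make it blotchy."""
    r2 = radius * radius
    total_flux = 0.0
    for position, flux in photons:
        d2 = sum((p - q) ** 2 for p, q in zip(position, query))
        if d2 <= r2:
            total_flux += flux
    return total_flux / (math.pi * r2)

estimate = radiance_estimate((0.0, 0.0, 0.0), radius=0.5)
```

In this toy data set only the first two photons fall inside the gathering disc, so the estimate is their combined flux divided by the disc area.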
==Progressive photon mapping==
An algorithm called
progressive photon mapping is optimized to allow tracing a larger number of photons, with the number not limited by available memory. Tracing more photons can reduce bias. Rather than beginning with photon mapping, it starts with a "camera pass", in which paths are traced from the camera until they hit a diffuse surface. Information about these subpaths (called
visible points) is stored in a spatial data structure. In the photon mapping pass, subpaths from the light source are traced until they hit a diffuse surface, and any nearby subpaths from the camera pass are then found. The camera subpaths determine how much light from the light subpath is added to each pixel in the image. Instead of consuming memory by storing photons (light subpaths) in a
k-d tree (or other spatial structure), progressive photon mapping stores visible points (camera subpaths), which is still a problem because a path-traced image usually requires a large number of camera rays for each pixel. A variation called
stochastic progressive photon mapping makes the algorithm more practical by breaking rendering into multiple pairs of camera and light passes, storing typically only a single camera subpath for each pixel at a time. The algorithm also tracks the number of nearby photons found for each pixel's visible points, and reduces the size of the filter kernel in subsequent passes, which allows rendering sharp caustics even though the photons are traced in smaller batches.
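The shrinking filter kernel described above follows a simple update rule: after each photon pass, only a fraction α of the newly found photons is counted, so the gather radius decreases while the local photon density still grows. A minimal Python sketch of this standard update (the function name and the conventional choice α = 0.7 are illustrative assumptions):

```python
import math

def update_radius(radius, n_accum, m_new, alpha=0.7):
    """One radius update after a photon pass.

    n_accum: photons accumulated at this visible point so far
    m_new:   photons found within `radius` in the latest pass
    alpha:   fraction of new photons to keep (0 < alpha < 1)

    The new squared radius is the old one scaled by
    (n_accum + alpha * m_new) / (n_accum + m_new), so the kernel
    shrinks every pass that finds photons, which is what lets the
    method render sharp caustics from many small photon batches.
    """
    if m_new == 0:
        return radius, n_accum  # nothing found; leave radius unchanged
    n_next = n_accum + alpha * m_new
    radius_next = radius * math.sqrt(n_next / (n_accum + m_new))
    return radius_next, n_next

# The radius shrinks steadily over many small photon batches:
radius, n = 1.0, 0.0
for _ in range(1000):
    radius, n = update_radius(radius, n, m_new=10)
```

Because each pass stores only the current batch of photons (and, in the stochastic variant, one visible point per pixel), memory stays bounded no matter how many total photons are traced.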
==Unified frameworks==
One problem with photon mapping approaches is that surfaces must usually be divided into specular (glossy) and diffuse surfaces, with different strategies used for each, and this classification may not always be optimal. Later techniques called
vertex connection and merging (VCM) and
unified path sampling (UPS) create frameworks that allow using both photon mapping and standard path tracing on any surface, and attempt to reduce bias further. The unified frameworks are forms of bidirectional path tracing (BDPT), based on combining paths generated by different strategies. The same path can potentially be generated by multiple strategies, and so paths must be blended carefully, using multiple importance sampling (MIS) weights. Photon mapping samples are represented as paths where a subpath from the camera and a subpath from a light source have been joined at nearby (but not exactly matching) points on a surface, or as paths where a vertex on a surface has been perturbed using a probability distribution corresponding to the filter kernel that was used to smooth photon density estimation. Because the sampling probabilities of all types of paths can be computed (whether or not they use the photon mapping strategy), they can be blended using MIS weights, as in standard BDPT. The algorithm is still biased but will converge to the correct result faster than other forms of photon mapping as the number of light subpaths increases. Reusing the same light subpath for multiple pixels (as is done in most forms of photon mapping) can cause correlations between nearby pixels, which may produce visible artifacts and cause problems for denoisers. The VCM method reduces this correlation by randomly modifying each light subpath before using it.

==See also==