
Path tracing

Path tracing is a rendering algorithm in computer graphics that simulates how light interacts with objects and participating media to generate realistic images. It builds on earlier, more limited ray tracing algorithms.

History
The rendering equation and its use in computer graphics was presented by James Kajiya in 1986. Path tracing was introduced then as an algorithm to find a numerical solution to the integral of the rendering equation. A decade later, Lafortune suggested many refinements, including bidirectional path tracing. Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.

More recently, CPUs and GPUs have become powerful enough to render images more quickly, causing more widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on a GPU in 2002. In February 2009, Austin Robison of Nvidia demonstrated the first commercial implementation of a path tracer running on a GPU, and other implementations have followed, such as that of Vladimir Koylazov in August 2009. This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL and GPU ray tracing SDKs such as OptiX.

Path tracing has played an important role in the film industry. Earlier films had relied on scanline rendering to produce CG visual effects and animation. In 1998, Blue Sky Studios rendered the Academy Award-winning short film Bunny with their proprietary CGI Studio path tracing renderer, featuring soft shadows and indirect illumination effects. Sony Pictures Imageworks' Monster House was, in 2006, the first animated feature film to be rendered entirely in a path tracer, using the commercial Arnold renderer. Walt Disney Animation Studios has been using its own optimized path tracer, known as Hyperion, since the production of Big Hero 6 in 2014, and Pixar Animation Studios has adopted path tracing for its commercial RenderMan renderer.
Description
Kajiya's rendering equation adheres to three particular principles of optics: the Principle of Global Illumination, the Principle of Equivalence (reflected light is equivalent to emitted light), and the Principle of Direction (reflected light and scattered light have a direction).

In the real world, objects and surfaces are visible because they reflect light. This reflected light then illuminates other objects in turn. From that simple observation, two principles follow.

I. For a given indoor scene, every object in the room must contribute illumination to every other object.

II. There is no distinction to be made between illumination emitted from a light source and illumination reflected from a surface.

Invented in 1984, a rather different method called radiosity was faithful to both principles. However, radiosity relates the total illuminance falling on a surface to a uniform luminance that leaves the surface. This forced all surfaces to be Lambertian, or "perfectly diffuse". While radiosity received a lot of attention at its introduction, perfectly diffuse surfaces do not exist in the real world. The realization that scattering from a surface depends on both incoming and outgoing directions is the key principle behind the bidirectional reflectance distribution function (BRDF). This direction dependence was a focus of research, resulting in the publication of important ideas throughout the 1990s, since accounting for direction always exacted a price of steep increases in calculation times on desktop computers. A third principle follows.

III. The illumination coming from surfaces must scatter in a particular direction that is some function of the incoming direction of the arriving illumination and the outgoing direction being sampled.

Kajiya's equation is a complete summary of these three principles, and path tracing, which approximates a solution to the equation, remains faithful to them in its implementation.
There are other principles of optics which are not the focus of Kajiya's equation, and which the algorithm therefore often simulates with difficulty or incorrectly. Path tracing is confounded by optical phenomena not contained in the three principles, for example:

• Bright, sharp caustics: radiance scales with the density of illuminance in space.
• Subsurface scattering: a violation of Principle III above.
• Chromatic aberration, fluorescence, and iridescence: light is a spectrum of frequencies.
Algorithm
The following pseudocode is a procedure for performing naive path tracing. The TracePath function calculates a single sample of a pixel, where only the gathering path is considered.

    Color TracePath(Ray ray, count depth) {
        if (depth >= MaxDepth) {
            return Black;  // Bounced enough times.
        }

        ray.FindNearestObject();
        if (ray.hitSomething == false) {
            return Black;  // Nothing was hit.
        }

        Material material = ray.thingHit->material;
        Color emittance = material.emittance;

        // Pick a random direction from here and keep going.
        Ray newRay;
        newRay.origin = ray.pointWhereObjWasHit;

        // This is NOT a cosine-weighted distribution!
        newRay.direction = RandomUnitVectorInHemisphereOf(ray.normalWhereObjWasHit);

        // Probability of the newRay (uniform hemisphere sampling).
        const float p = 1 / (2 * PI);

        // Compute the BRDF for this ray (assuming Lambertian reflection).
        float cos_theta = DotProduct(newRay.direction, ray.normalWhereObjWasHit);
        Color BRDF = material.reflectance / PI;

        // Recursively trace reflected light sources.
        Color incoming = TracePath(newRay, depth + 1);

        // Apply the rendering equation here.
        return emittance + (BRDF * incoming * cos_theta / p);
    }

    void Render(Image finalImage, count numSamples) {
        foreach (pixel in finalImage) {
            foreach (i in numSamples) {
                Ray r = camera.generateRay(pixel);
                pixel.color += TracePath(r, 0);
            }
            pixel.color /= numSamples;  // Average samples.
        }
    }

All the samples are then averaged to obtain the output color. Note that this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray – which is the only ray through which any radiance will be reflected – is zero.
In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1/(2π)). There are other considerations to take into account to ensure conservation of energy. In particular, in the naive case, the reflectance of a diffuse BRDF must not exceed 1/π or the object will reflect more light than it receives (this however depends on the sampling scheme used, and can be difficult to get right).
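The division by the sampling PDF can be illustrated with a standalone Monte Carlo estimate. The Python sketch below (helper names are illustrative, not from the pseudocode above) integrates a constant incoming radiance of 1 over the hemisphere weighted by cos θ; the exact value is π, and the uniform-hemisphere PDF of 1/(2π) appears as a divisor exactly as in the naive estimator:

```python
import math
import random

def sample_uniform_hemisphere():
    # Uniform direction on the unit hemisphere around the z axis (the normal).
    # By Archimedes' hat-box theorem, z = cos(theta) is uniform in [0, 1).
    z = random.random()
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(num_samples=200_000):
    # Monte Carlo estimate of the integral of L_i * cos(theta) over the
    # hemisphere, with constant incoming radiance L_i = 1 (exact answer: pi).
    pdf = 1.0 / (2.0 * math.pi)  # PDF of uniform hemisphere sampling
    total = 0.0
    for _ in range(num_samples):
        _, _, cos_theta = sample_uniform_hemisphere()
        total += 1.0 * cos_theta / pdf  # contribution divided by the PDF
    return total / num_samples
```

With enough samples the estimate converges to π; omitting the division by the PDF would instead converge to π/(2π) = 1/2, which is why the PDF term cannot be dropped.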
Bidirectional path tracing
Sampling the integral can be done by either of the following two distinct approaches:

• Backwards path tracing, where paths are generated starting from the camera and bouncing around the scene until they encounter a light source. This is referred to as "backwards" because starting paths from the camera and moving towards the light source is opposite the direction that the light is actually traveling. It still produces the same result because all optical systems are reversible.

• Light tracing (or forwards path tracing), where paths are generated starting from the light sources and bouncing around the scene until they encounter the camera.

In both cases, a technique called next event estimation can be used to reduce variance. This works by directly sampling an important feature (the camera in the case of light tracing, or a light source in the case of backwards path tracing) instead of waiting for a path to hit it by chance. This technique is usually effective, but becomes less useful when specular or near-specular BRDFs are present. For backwards path tracing, this creates high variance for caustic paths that interact with a diffuse surface, then bounce off a specular surface before hitting a light source. Next event estimation cannot be used to sample these paths directly from the diffuse surface, because the specular interaction is in the middle. Likewise, it cannot be used to sample paths from the specular surface because there is only one direction that the light can bounce. Light tracing has a similar issue when paths interact with a specular surface before hitting the camera. Because this situation is significantly more common, and noisy (or completely black) glass objects are very visually disruptive, backwards path tracing is the only method that is used for unidirectional path tracing in practice. Bidirectional path tracing provides an algorithm that combines the two approaches and can produce lower variance than either method alone.
For each sample, two paths are traced independently: one from the light source and one from the camera. This produces a set of possible sampling strategies, where every vertex of one path can be connected directly to every vertex of the other. The original light tracing and backwards path tracing algorithms are both special cases of these strategies. For light tracing, it is connecting the vertices of the camera path directly to the first vertex of the light path. For backwards path tracing, it is connecting the vertices of the light path to the first vertex of the camera path. In addition, there are several completely new sampling strategies, where intermediate vertices are connected. Weighting all of these sampling strategies using multiple importance sampling creates a new sampler that can converge faster than unidirectional path tracing, even though more work is required for each sample. This works particularly well for caustics or scenes that are lit primarily through indirect lighting.
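The multiple importance sampling combination can be sketched in isolation. The following Python snippet is a toy one-dimensional example (not a renderer; all names are illustrative): two sampling strategies for the same integral are combined with the balance heuristic, w_i(x) = p_i(x) / Σ_j p_j(x), the same weighting that bidirectional path tracing applies to its connection strategies:

```python
import math
import random

def mis_estimate(f, pdfs, samplers, n_per_strategy):
    # Balance heuristic: weight_i(x) = pdf_i(x) / sum_j pdf_j(x).
    # Each sample contributes f(x) * weight_i(x) / pdf_i(x), which
    # simplifies to f(x) / sum_j pdf_j(x) regardless of the strategy i.
    total = 0.0
    for sample in samplers:
        for _ in range(n_per_strategy):
            x = sample()
            total += f(x) / sum(pdf(x) for pdf in pdfs)
    return total / n_per_strategy

# Integrate f(x) = x on [0, 1] (exact value 0.5) with two strategies:
# a uniform sampler (pdf 1) and a linearly ramped sampler (pdf 2x,
# drawn by inverting its CDF: x = sqrt(u)).
f = lambda x: x
pdfs = [lambda x: 1.0, lambda x: 2.0 * x]
samplers = [random.random, lambda: math.sqrt(random.random())]
# mis_estimate(f, pdfs, samplers, 100_000) ≈ 0.5
```

The estimator stays unbiased however the strategies overlap, because the balance-heuristic weights of all strategies sum to one at every point, which is what lets BDPT blend connection strategies that can each generate the same path.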
Performance
A path tracer continuously samples pixels of an image. The image starts to become recognizable after only a few samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually takes around 5000 samples for most images, and many more for pathological cases. Noise is particularly a problem for animations, giving them a normally unwanted "film grain" quality of random speckling.

The central performance bottleneck in path tracing is the complex geometrical calculation of casting a ray. Importance sampling is a technique that aims to cast fewer rays through the scene while still converging correctly to the outgoing luminance at the surface point. This is done by casting more rays in directions in which the luminance would have been greater anyway. If the density of rays cast in certain directions matches the strength of contributions in those directions, the result is identical, but far fewer rays were actually cast. Importance sampling is used to match ray density to Lambert's cosine law, and also used to match BRDFs.

Metropolis light transport can result in a lower-noise image with fewer samples. This algorithm was created in order to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to reach the part of the scene that the camera is viewing. It has also shown promise in correctly rendering pathological situations with caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing ones. In this sense, the algorithm "remembers" the successful paths from light sources to the camera.
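Matching the sampling density to Lambert's cosine law can be shown in a few lines. In this Python sketch (helper names are illustrative), a cosine-weighted hemisphere sample makes the per-sample weight cos θ / pdf constant, so the estimator of ∫ cos θ dω = π has essentially zero variance, in contrast to uniform hemisphere sampling:

```python
import math
import random

def sample_cosine_hemisphere():
    # Cosine-weighted direction around the z axis (Malley's method:
    # lift a uniform disc sample onto the hemisphere).
    # Resulting pdf(direction) = cos(theta) / pi.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta)
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_with_cosine_sampling(num_samples):
    # Estimate the integral of cos(theta) over the hemisphere (exact: pi).
    total = 0.0
    for _ in range(num_samples):
        _, _, cos_theta = sample_cosine_hemisphere()
        pdf = cos_theta / math.pi
        total += cos_theta / pdf  # constant weight pi: zero-variance estimator
    return total / num_samples
```

Because each sample carries the same weight, even a handful of samples lands on π up to floating-point error; with a real (non-constant) integrand the benefit is a large variance reduction rather than exactness, which is why renderers routinely cosine-weight their diffuse bounces.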
Scattering distribution functions
The reflective properties (amount, direction, and color) of surfaces are modeled using BRDFs. The equivalent for transmitted light (light that goes through the object) are BSDFs. A path tracer can take full advantage of complex, carefully modeled or measured distribution functions, which control the appearance ("material", "texture", or "shading" in computer graphics terms) of an object.
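A distribution function plugs into the estimator as a per-direction weight. The Python sketch below (an illustrative toy, not a library API) evaluates a Lambertian BRDF, albedo/π, and numerically checks its directional albedo: the integral of BRDF × cos θ over the hemisphere, which must not exceed 1 for the surface to conserve energy, echoing the 1/π bound noted earlier:

```python
import math
import random

def lambertian_brdf(albedo):
    # A Lambertian BRDF is constant: albedo / pi for every pair of
    # incoming and outgoing directions (arguments are ignored here).
    return lambda w_in, w_out: albedo / math.pi

def directional_albedo(brdf, num_samples=200_000):
    # Monte Carlo integral of brdf * cos(theta) over the hemisphere,
    # using uniform hemisphere sampling (pdf = 1 / (2*pi)).
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(num_samples):
        cos_theta = random.random()  # uniform hemisphere: cos(theta) ~ U[0, 1)
        total += brdf(None, None) * cos_theta / pdf
    return total / num_samples

# directional_albedo(lambertian_brdf(0.8)) ≈ 0.8
```

The same check applies to any BSDF a renderer plugs in: if the directional albedo exceeds 1 in some direction, the material emits more energy than it receives and the render will not converge to a physical result.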
Photon mapping
Path tracing has difficulty rendering caustics – bright patches of light that appear when a small or highly directional light source (such as the sun) is reflected or refracted by a shiny surface or transparent object before illuminating another surface. The result is usually very noisy, with many "fireflies" (stray bright pixels), or caustics may simply be missing. Bidirectional path tracing can render caustics if light subpaths are allowed to contribute to any pixel in the image (breaking the pixel-by-pixel rendering model), but caustics may still be missing if viewed in a mirror. This type of difficult lighting can also cause noise elsewhere in the image.

A different rendering technique called photon mapping or photon density estimation is able to render caustics more effectively. Photon mapping works like the "forward" or "particle tracing" portion of bidirectional path tracing, simulating the paths of many "packets" of light emitted by a light source. These are called photons in the algorithm, but they usually do not actually represent physical photons. Photons are traced through reflections and refractions until they hit a diffuse (matte) surface or leave the scene. Information about photons that hit a surface is accumulated in a data structure called a photon map and used later, rather than being used immediately for rendering a particular pixel. The photon map is implemented using a spatial structure such as a k-d tree or hash grid. Photon mapping is biased, because it uses a density estimation filter to smooth out the data and allow estimating the intensity of light at an arbitrary point on a surface. If an insufficient number of light samples is used, the image may have a blurry or blotchy appearance caused by the filter.
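The density estimation step can be illustrated apart from any renderer. In this Python sketch (a hypothetical two-dimensional setup with illustrative names, not a full photon map), stored photon powers within a disc of radius r around a query point are summed and divided by the disc area πr²; the same estimate, and the same bias from the fixed smoothing radius, appear in photon mapping proper:

```python
import math
import random

def estimate_density(photons, point, radius):
    # Photon-map style kernel estimate: total photon power landing inside
    # the disc of the given radius, divided by the disc area (pi * r^2).
    px, py = point
    r2 = radius * radius
    power = sum(p for (x, y, p) in photons
                if (x - px) ** 2 + (y - py) ** 2 <= r2)
    return power / (math.pi * r2)

# Photons scattered uniformly over the unit square with total power 1
# should give an estimated density close to 1 at interior points.
random.seed(42)  # fixed seed for reproducibility
n = 200_000
photons = [(random.random(), random.random(), 1.0 / n) for _ in range(n)]
# estimate_density(photons, (0.5, 0.5), 0.1) ≈ 1.0
```

Shrinking the radius reduces the blur (bias) but raises the noise for a fixed photon count, which is the trade-off that progressive variants address by tracing ever more photons while tightening the kernel.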
Photon mapping is usually combined with some form of ray tracing, allowing precise direct illumination and reflection and refraction of camera rays, while photon maps are used only for caustics and/or diffuse global illumination (often with separate maps for each, because caustics require denser sampling). One early variation of photon mapping used a photon map to "guide" rays for global illumination (which was rendered using distribution ray tracing), instead of using the radiance information in the map directly. This is similar to path guiding in modern path tracing. Another method explored was the use of data from photon maps as control variates for reducing the variance of path tracing. Both of these approaches can be unbiased.

Progressive photon mapping

An algorithm called progressive photon mapping is optimized to allow tracing a larger number of photons, with the number not limited by available memory. Tracing more photons can reduce bias. Rather than beginning with photon mapping, it starts with a "camera pass", in which paths are traced from the camera until they hit a diffuse surface. Information about these subpaths (called visible points) is stored in a spatial data structure. In the photon mapping pass, subpaths from the light source are traced until they hit a diffuse surface, and any nearby subpaths from the camera pass are then found. The camera subpaths determine how much light from the light subpath is added to each pixel in the image. Instead of consuming memory by storing photons (light subpaths) in a k-d tree (or other spatial structure), progressive photon mapping stores visible points (camera subpaths); this is still memory-intensive, because a path traced image usually requires a large number of camera rays for each pixel.
A variation called stochastic progressive photon mapping makes the algorithm more practical by breaking rendering into multiple pairs of camera and light passes, typically storing only a single camera subpath for each pixel at a time. The algorithm also tracks the number of nearby photons found for each pixel's visible points, and reduces the size of the filter kernel in subsequent passes, which allows rendering sharp caustics even though the photons are traced in smaller batches.

Unified frameworks

One problem with photon mapping approaches is that surfaces must usually be divided into specular (glossy) and diffuse surfaces, with different strategies used for each, and this classification may not always be optimal. Later techniques called vertex connection and merging (VCM) and unified path sampling (UPS) create frameworks that allow using both photon mapping and standard path tracing on any surface, and attempt to reduce bias further. The unified frameworks are forms of bidirectional path tracing (BDPT), based on combining paths generated by different strategies. The same path can potentially be generated by multiple strategies, so paths must be blended carefully, using multiple importance sampling (MIS) weights. Photon mapping samples are represented as paths where a subpath from the camera and a subpath from a light source have been joined at nearby (but not exactly matching) points on a surface, or as paths where a vertex on a surface has been perturbed using a probability distribution corresponding to the filter kernel that was used to smooth photon density estimation. Because the sampling probabilities of all types of paths can be computed (whether or not they use the photon mapping strategy), they can be blended using MIS weights, as in standard BDPT. The algorithm is still biased but will converge to the correct result faster than other forms of photon mapping as the number of light subpaths increases.
Reusing the same light subpath for multiple pixels (as is done in most forms of photon mapping) can cause correlations between nearby pixels, which may produce visible artifacts and cause problems for denoisers. The VCM method reduces this correlation by randomly modifying each light subpath before using it.