Projecting a narrow band of light onto a three-dimensional surface produces a line of illumination that appears distorted when viewed from perspectives other than that of the projector. This distortion can be analyzed to reconstruct the geometry of the surface, a technique known as light sectioning. Projecting patterns composed of many stripes or arbitrary fringes at once enables the acquisition of numerous data points simultaneously, improving scanning speed. While various structured light projection techniques exist, parallel stripe patterns are among the most commonly used. By analyzing the displacement of these stripes, the three-dimensional coordinates of surface details can be accurately determined.
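Converting an observed stripe to a 3D point amounts to intersecting the camera's viewing ray with the known plane of light emitted by the projector. The following is a minimal sketch under an idealized pinhole model (camera at the origin, optical axis along +z, focal length in pixels); the function name and parameters are illustrative, not from any particular library:

```python
import numpy as np

def intersect_ray_with_light_plane(pixel, f, plane_point, plane_normal):
    """Back-project a camera pixel and intersect the resulting viewing
    ray with the projector's light plane. The camera sits at the origin
    with its optical axis along +z (idealized pinhole model)."""
    u, v = pixel
    ray = np.array([u / f, v / f, 1.0])      # direction of the viewing ray
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    t = np.dot(n, p0) / np.dot(n, ray)       # ray parameter at the intersection
    return t * ray                           # 3D point in camera coordinates
```

For example, a pixel at (50, 0) with f = 500 views along the ray (0.1, 0, 1); intersecting it with the vertical light plane x = 100 places the surface point at depth z = 1000.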
== Generation of light patterns ==

Two major methods of stripe pattern generation have been established: laser interference and projection.

The laser interference method works with two wide planar laser beam fronts. Their interference results in regular, equidistant line patterns. Different pattern sizes can be obtained by changing the angle between these beams. The method allows for the exact and easy generation of very fine patterns with unlimited depth of field. Disadvantages are the high cost of implementation, difficulties in providing the ideal beam geometry, and laser-typical effects such as speckle noise and possible self-interference with beam parts reflected from objects. Typically, there is also no means of modulating individual stripes, for example with Gray codes.

The projection method uses incoherent light and essentially works like a video projector. Patterns are usually generated by passing light through a digital spatial light modulator, typically based on one of the three currently most widespread digital projection technologies: transmissive liquid crystal, reflective liquid crystal on silicon (LCOS), or digital light processing (DLP; moving micromirror) modulators, each with comparative advantages and disadvantages for this application. Other methods of projection could be, and have been, used, however. Patterns generated by digital display projectors have small discontinuities due to the pixel boundaries in the displays. Sufficiently small boundaries can, however, practically be neglected, as they are evened out by the slightest defocus.

A typical measuring assembly consists of one projector and at least one camera. For many applications, two cameras on opposite sides of the projector have proven useful.
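For a DLP- or LCD-type projector, a stripe pattern is simply an image sent to the modulator. A sketch of generating such a pattern as a pixel array, assuming vertical stripes and 8-bit intensities (the function name and parameters are illustrative):

```python
import numpy as np

def stripe_pattern(width, height, pitch, phase=0.0, binary=False):
    """Generate a vertical stripe pattern as an 8-bit image array.
    pitch: stripe period in projector pixels; phase: shift in radians.
    binary=True yields 'rectangular' stripes, as a digital display
    naturally delivers; binary=False yields a sinusoidal profile."""
    x = np.arange(width)
    profile = 0.5 + 0.5 * np.cos(2 * np.pi * x / pitch + phase)
    if binary:
        profile = (profile >= 0.5).astype(float)
    row = (255 * profile).astype(np.uint8)
    return np.tile(row, (height, 1))        # repeat the row over all lines
```

The `phase` parameter makes the same generator reusable for the phase-shifted exposure series discussed below under precision.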
Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern would be confusing. Example methods include the use of infrared light, or of extremely high frame rates alternating between two exactly opposite patterns.
== Calibration ==

Geometric distortions caused by optics and perspective must be compensated for by a calibration of the measuring equipment, using special calibration patterns and surfaces. A mathematical model describes the imaging properties of the projector and cameras. Essentially based on the simple geometric properties of a pinhole camera, the model must also take into account the geometric distortions and optical aberration of the projector and camera lenses. The parameters of the camera, as well as its orientation in space, can be determined by a series of calibration measurements using photogrammetric bundle adjustment.
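The pinhole-plus-distortion model mentioned above can be sketched as follows, here using only the first two radial coefficients of the common Brown distortion model (a simplification; real calibration models usually include tangential terms and a full extrinsic pose as well):

```python
import numpy as np

def project_point(X, f, c, k1=0.0, k2=0.0):
    """Project a 3D point X (camera coordinates) to pixel coordinates
    using a pinhole model with radial lens distortion.
    f: focal length in pixels, c: principal point (cx, cy),
    k1, k2: first two radial distortion coefficients."""
    x, y = X[0] / X[2], X[1] / X[2]      # ideal normalized image coordinates
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    return np.array([f * d * x + c[0], f * d * y + c[1]])
```

Calibration estimates f, c, k1, k2 (and the pose) by minimizing the discrepancy between points projected by this model and their observed image positions over many views of a known pattern.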
== Analysis of stripe patterns ==

The observed stripe patterns contain several depth cues. The displacement of any single stripe can be converted directly into 3D coordinates. For this purpose, the individual stripe has to be identified, which can, for example, be accomplished by tracing or counting stripes (pattern recognition method). Another common method projects alternating stripe patterns, resulting in binary Gray code sequences that identify the number of each individual stripe hitting the object. An important depth cue also results from the varying stripe widths along the object surface: stripe width is a function of the steepness of a surface part, i.e. the first derivative of the elevation. Stripe frequency and phase deliver similar cues and can be analyzed by a Fourier transform. Finally, the wavelet transform has recently been discussed for the same purpose.

In many practical implementations, series of measurements combining pattern recognition, Gray codes and Fourier transform are obtained for a complete and unambiguous reconstruction of shapes. Another method, also belonging to the area of fringe projection, has been demonstrated, utilizing the depth of field of the camera. It is also possible to use projected patterns primarily as a means of inserting structure into scenes, for an essentially photogrammetric acquisition.
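The Gray code identification mentioned above works because consecutive Gray codewords differ in exactly one bit, so a misread at a stripe boundary shifts the decoded index by at most one stripe. A minimal sketch of encoding and decoding (names are illustrative):

```python
def to_gray(n):
    """Binary stripe index -> reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Reflected Gray code -> binary stripe index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def decode_stripe(bits):
    """Decode the bright/dark sequence observed at one camera pixel
    over the projected pattern series (most significant pattern first)
    into a stripe index."""
    g = 0
    for b in bits:
        g = (g << 1) | int(b)
    return from_gray(g)
```

With k projected patterns, 2^k stripes can be distinguished; each camera pixel accumulates one bit per exposure.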
== Precision and range ==

The optical resolution of fringe projection methods depends on the width of the stripes used and their optical quality. It is also limited by the wavelength of light. An extreme reduction of stripe width proves inefficient due to limitations in depth of field, camera resolution and display resolution. Therefore, the phase shift method has been widely established: a number of at least three, typically about ten, exposures are taken with slightly shifted stripes. The first theoretical deductions of this method relied on stripes with a sine-wave-shaped intensity modulation, but the method works with "rectangular" modulated stripes, as delivered from LCD or DLP displays, as well. By phase shifting, surface detail of e.g. 1/10 the stripe pitch can be resolved. Current optical stripe pattern profilometry hence allows for detail resolutions down to the wavelength of light, below 1 micrometer in practice, or, with larger stripe patterns, to approximately 1/10 of the stripe width. Concerning level accuracy, interpolating over several pixels of the acquired camera image can yield a reliable height resolution, and also accuracy, down to 1/50 pixel. Arbitrarily large objects can be measured with accordingly large stripe patterns and setups. Practical applications are documented involving objects several meters in size. Typical accuracy figures are:
• Planarity of a wide surface, to .
• Shape of a motor combustion chamber to (elevation), yielding a volume accuracy 10 times better than with volumetric dosing.
• Shape of an object large, to about .
• Radius of a blade edge of e.g. , to ±0.4 μm.
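The phase shift method can be sketched for the common case of N exposures with equally spaced shifts of 2π/N. At each pixel, the wrapped fringe phase follows in closed form from the sinusoidal intensity model I_n = A + B·cos(φ + 2πn/N); this is a simplified sketch, as real systems must additionally unwrap the phase, e.g. with Gray codes:

```python
import numpy as np

def phase_from_shifts(images):
    """Recover the wrapped fringe phase at every pixel from N exposures
    taken with equally spaced phase shifts of 2*pi/N (N >= 3).
    images: array-like of shape (N, H, W). Returns phase in [-pi, pi]."""
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    shifts = 2 * np.pi * np.arange(N) / N
    # Per-pixel weighted sums over the exposure axis:
    num = np.tensordot(np.sin(shifts), I, axes=1)   # ~ -sin(phase)
    den = np.tensordot(np.cos(shifts), I, axes=1)   # ~  cos(phase)
    return np.arctan2(-num, den)
```

For N = 4 this reduces to the classic four-step formula φ = atan2(I₄ − I₂, I₁ − I₃); the constant offset A and modulation B cancel out, which is what makes the method robust to varying surface reflectance.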
== Navigation ==

As the method can measure shapes from only one perspective at a time, complete 3D shapes have to be combined from different measurements taken at different angles. This can be accomplished by attaching marker points to the object and combining perspectives afterwards by matching these markers. The process can be automated by mounting the object on a motorized turntable, in a robotic inspection cell, or on a CNC positioning device. Markers can also be applied to a positioning device instead of to the object itself.

The 3D data gathered can be used to retrieve CAD (computer-aided design) data and models from existing components (reverse engineering), hand-formed samples or sculptures, natural objects or artifacts.
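Matching marker points between two perspectives yields a rigid transform that maps one measurement into the coordinate frame of the other. A common closed-form solution is the Kabsch algorithm; the following is a minimal sketch assuming already-matched, noise-free markers (real pipelines must also establish the correspondences and tolerate noise):

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimizing ||R p + t - q|| over
    matched marker points P, Q of shape (N, 3), via the Kabsch algorithm."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution:
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying the recovered (R, t) to one scan brings its points into the other scan's frame, so the partial surfaces can be merged into one model.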
== Challenges ==

As with all optical methods, reflective or transparent surfaces raise difficulties. Reflections cause light to be deflected either away from the camera or right into its optics; in both cases, the dynamic range of the camera can be exceeded. Transparent or semi-transparent surfaces also cause major difficulties. In these cases, coating the surfaces with a thin opaque lacquer just for measuring purposes is common practice. A recent method handles highly reflective and specular objects by inserting a one-dimensional diffuser between the light source (e.g., projector) and the object to be scanned. Alternative optical techniques have been proposed for handling perfectly transparent and specular objects. Double reflections and inter-reflections can cause the stripe pattern to be overlaid with unwanted light, entirely eliminating the chance of proper detection. Reflective cavities and concave objects are therefore difficult to handle. Translucent materials, such as skin, marble, wax, plants and human tissue, are also hard to handle because of the phenomenon of sub-surface scattering. Recently, the computer vision community has made efforts to handle such optically complex scenes by redesigning the illumination patterns. These methods have shown promising 3D scanning results for traditionally difficult objects, such as highly specular metal concavities and translucent wax candles.
== Speed ==

Although several patterns have to be taken per picture in most structured light variants, high-speed implementations are available for a number of applications, for example:
• Inline precision inspection of components during the production process.
• Health care applications, such as live measurement of human body shapes or the microstructures of human skin.
Motion picture applications have also been proposed, for example the acquisition of spatial scene data for three-dimensional television.

== Applications ==