The image stitching process can be divided into three main components: image registration, calibration, and blending.
===Image stitching algorithms===
In order to estimate image alignment, algorithms are needed to determine the appropriate mathematical model relating pixel coordinates in one image to pixel coordinates in another. Algorithms that combine direct pixel-to-pixel comparisons with gradient descent (and other optimization techniques) can be used to estimate these parameters. Distinctive features can be found in each image and then efficiently matched to rapidly establish correspondences between pairs of images. When multiple images exist in a panorama, techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another. A final compositing surface onto which to warp or projectively transform and place all of the aligned images is needed, as are algorithms to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences.
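As a minimal illustration of this end-to-end pipeline, OpenCV's high-level stitching API performs feature matching, alignment, warping, and blending internally. The sketch below assumes a set of overlapping photographs; the file names are placeholders.

```python
# Minimal end-to-end stitching sketch using OpenCV's high-level API.
# The input file names are placeholders; any overlapping set of photos works.
import cv2

images = [cv2.imread(name) for name in ("left.jpg", "middle.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```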
===Image stitching issues===
Since the illumination in two views cannot be guaranteed to be identical, stitching two images could create a visible seam. Seams can also arise when the background changes between two images of the same continuous foreground. Other major issues to deal with are the presence of parallax, lens distortion, scene motion, and exposure differences. In a non-ideal real-life case, the intensity varies across the whole scene, and so do the contrast and intensity across frames. Additionally, the aspect ratio of a panorama image needs to be taken into account to create a visually pleasing composite. For panoramic stitching, the ideal set of images will have a reasonable amount of overlap (at least 15–30%) to overcome lens distortion and have enough detectable features. The set of images should have consistent exposure between frames to minimize the probability of seams occurring.
===Keypoint detection===
Feature detection is necessary to automatically find correspondences between images. Robust correspondences are required in order to estimate the necessary transformation to align an image with the image it is being composited on. Corners, blobs, Harris corners, and differences of Gaussians of Harris corners are good features since they are repeatable and distinct. One of the first operators for interest point detection was developed by Hans Moravec in 1977 for his research involving the automatic navigation of a robot through a cluttered environment. Moravec also defined the concept of "points of interest" in an image and concluded these interest points could be used to find matching regions in different images. The Moravec operator is considered to be a corner detector because it defines interest points as points where there are large intensity variations in all directions. This is often the case at corners. However, Moravec was not specifically interested in finding corners, just distinct regions in an image that could be used to register consecutive image frames. Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly. They needed it as a processing step to build interpretations of a robot's environment based on image sequences. Like Moravec, they needed a method to match corresponding points in consecutive image frames, but were interested in tracking both corners and edges between frames.
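As a practical illustration of corner detection, the following sketch marks Harris corners with OpenCV; the window size, Sobel aperture, and threshold factor are illustrative choices, not values from the original papers.

```python
# Harris corner detection sketch; parameters are illustrative only.
import cv2
import numpy as np

image = cv2.imread("frame.jpg")          # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)

# blockSize: neighbourhood size, ksize: Sobel aperture, k: Harris free parameter.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep points whose corner response is a fraction of the strongest response.
corners = response > 0.01 * response.max()
image[corners] = (0, 0, 255)             # mark detected corners in red
cv2.imwrite("corners.jpg", image)
```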
SIFT and SURF are more recent keypoint or interest point detector algorithms, but a point to note is that SURF is patented and its commercial usage is restricted. Once features have been detected, a descriptor method such as the SIFT descriptor can be applied so that they can later be matched.
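A sketch of this detect-and-describe step, assuming an OpenCV build that includes SIFT and two overlapping placeholder images:

```python
# Detect SIFT keypoints and descriptors in two overlapping images,
# then match descriptors with a ratio test (Lowe's heuristic).
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des1, des2, k=2)

# Keep a match only if it is clearly better than the second-best candidate.
good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]
print(len(good), "putative correspondences")
```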
===Registration===
Image registration involves matching features in a set of images or using direct alignment methods to search for image alignments that minimize the sum of absolute differences between overlapping pixels. When using direct alignment methods one might first calibrate the images to get better results. Additionally, users may input a rough model of the panorama to help the feature matching stage, so that, for example, only neighboring images are searched for matching features. Since there is a smaller group of features to match, the result of the search is more accurate and execution of the comparison is faster. To estimate a robust model from the data, a common method is RANSAC (RANdom SAmple Consensus). It is an iterative method for robust parameter estimation used to fit mathematical models to sets of observed data points which may contain outliers. The algorithm is non-deterministic in the sense that it produces a reasonable result only with a certain probability, and this probability increases as more iterations are performed. Being a probabilistic method, it may produce a different result each time it is run. The RANSAC algorithm has found many applications in computer vision, including the simultaneous solving of the correspondence problem and the estimation of the fundamental matrix relating a pair of stereo cameras. The basic assumption of the method is that the data consists of "inliers", i.e., data whose distribution can be explained by some mathematical model, and "outliers", data that do not fit the model. Outliers are points which come from noise, erroneous measurements, or simply incorrect data. For the problem of homography estimation, RANSAC works by trying to fit several models using some of the point pairs and then checking whether the models are able to relate most of the points. The best model – the homography that produces the highest number of correct matches – is then chosen as the answer; thus, if the ratio of outliers to data points is low, RANSAC outputs a decent model fitting the data.
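A minimal sketch of RANSAC for homography estimation, assuming two float32 NumPy arrays of matched point coordinates (for example, coordinates extracted from the ratio-test matches above); the iteration count and inlier threshold are illustrative choices.

```python
# Hand-rolled RANSAC loop for homography estimation (illustrative only).
# pts1, pts2: float32 arrays of shape (N, 2) with matched coordinates.
import cv2
import numpy as np

def ransac_homography(pts1, pts2, iterations=1000, threshold=3.0):
    best_h, best_inliers = None, np.zeros(len(pts1), dtype=bool)
    for _ in range(iterations):
        # 1. Sample the minimal set of 4 correspondences.
        idx = np.random.choice(len(pts1), 4, replace=False)
        h = cv2.getPerspectiveTransform(pts1[idx], pts2[idx])
        # 2. Measure reprojection error for all correspondences.
        projected = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), h).reshape(-1, 2)
        errors = np.linalg.norm(projected - pts2, axis=1)
        inliers = errors < threshold
        # 3. Keep the model that explains the most points.
        if inliers.sum() > best_inliers.sum():
            best_h, best_inliers = h, inliers
    return best_h, best_inliers
```

In practice, OpenCV's cv2.findHomography(pts1, pts2, cv2.RANSAC, threshold) performs the same sample-fit-verify loop in a single call and also returns the inlier mask.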
===Calibration===
Image calibration aims to minimize differences between an ideal lens model and the camera-lens combination that was used, and to correct optical defects such as distortions, exposure differences between images, and vignetting.
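As an illustration, lens distortion can be removed before stitching when the camera matrix and distortion coefficients are known (for example, from a prior chessboard calibration); the intrinsic and distortion values below are placeholders.

```python
# Undistort an image using known (here: placeholder) calibration parameters.
import cv2
import numpy as np

image = cv2.imread("frame.jpg")                      # placeholder file name
h, w = image.shape[:2]

# Placeholder intrinsics: focal length ~ image width, principal point at center.
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
# Placeholder distortion coefficients (k1, k2, p1, p2, k3).
dist_coeffs = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("frame_undistorted.jpg", undistorted)
```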
===Alignment===
Alignment may be necessary to transform an image to match the viewpoint of the image it is being composited with. Alignment, in simple terms, is a change of coordinate system so that the image adopts a new coordinate system matching the required viewpoint. The types of transformations an image may go through are pure translation, pure rotation, a similarity transform (which includes translation, rotation and scaling of the image to be transformed), and affine or projective transforms. A projective transformation is the farthest an image can transform (in the set of two-dimensional planar transformations): the only visible features preserved in the transformed image are straight lines, whereas parallelism is additionally maintained in an affine transform. A projective transformation can be mathematically described as x' = H\cdot x, where x are points in the old coordinate system, x' are the corresponding points in the transformed image and H is the homography matrix. Expressing the points x and x' using the camera intrinsics (K and K') and the camera rotations (R and R') relative to the real-world coordinates X (assuming a purely rotating camera, so translation can be ignored), we get x = K\cdot R\cdot X and x' = K'\cdot R'\cdot X. Using these two equations and the homography relation between x and x', we can derive H = K'\cdot R'\cdot R^{-1}\cdot K^{-1}. The homography matrix H has 8 parameters or degrees of freedom. The homography can be computed using the Direct Linear Transform and singular value decomposition with A\cdot h = 0, where A is the matrix constructed from the coordinates of the correspondences and h is the one-dimensional vector of the 9 elements of the reshaped homography matrix. To get h we can simply apply the SVD, A = U\cdot S\cdot V^{\mathrm{T}}, and take h as the column of V corresponding to the smallest singular value. This is true since h lies in the null space of A. Since we have 8 degrees of freedom, the algorithm requires at least four point correspondences. When RANSAC is used to estimate the homography and multiple correspondences are available, the correct homography matrix is the one with the maximum number of inliers.
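A sketch of the Direct Linear Transform described above, assuming `pts1` and `pts2` are arrays of at least four matched (x, y) points; the coordinate normalization usually applied in practice is omitted for brevity.

```python
# Estimate a homography from point correspondences via the DLT and SVD.
import numpy as np

def dlt_homography(pts1, pts2):
    """pts1, pts2: arrays of shape (N, 2), N >= 4, with matched (x, y) points."""
    rows = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Each correspondence contributes two rows of A in A @ h = 0.
        rows.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    A = np.asarray(rows, dtype=np.float64)

    # h is the right singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so the bottom-right element is 1
```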
===Compositing===
Compositing is the process in which the rectified images are aligned in such a way that they appear as a single shot of a scene. Compositing can be done automatically, since the algorithm now knows which correspondences overlap.
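A rough sketch of compositing a pair of images, assuming an estimated homography H that maps the second image into the first image's coordinate frame; the function name and canvas size are illustrative.

```python
# Warp the second image into the first image's frame and paste both onto one canvas.
import cv2

def composite_pair(img1, img2, H):
    """img1, img2: BGR images; H: homography mapping img2 into img1's coordinates."""
    h1, w1 = img1.shape[:2]
    canvas_size = (w1 + img2.shape[1], h1)       # generous width for the overlap
    canvas = cv2.warpPerspective(img2, H, canvas_size)
    canvas[0:h1, 0:w1] = img1                    # overwrite with the reference image
    return canvas
```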
===Blending===
Image blending involves executing the adjustments figured out in the calibration stage, combined with remapping of the images to an output projection. Colors are adjusted between images to compensate for exposure differences. If applicable, high dynamic range merging is done along with motion compensation and deghosting. Images are blended together and seam line adjustment is done to minimize the visibility of seams between images. The seam can be reduced by a simple gain adjustment; this compensation essentially minimizes the intensity differences of overlapping pixels. The image blending algorithm allots more weight to pixels near the center of the image. Gain-compensated and multi-band blended images compare the best (Brown and Lowe, IJCV 2007). Straightening is another method to rectify the image. Matthew Brown and David G. Lowe, in their paper 'Automatic Panoramic Image Stitching using Invariant Features', describe a straightening method which applies a global rotation such that the up vector u is vertical (in the rendering frame), which effectively removes the wavy effect from output panoramas. This process is similar to image rectification, and more generally software correction of optical distortions in single photographs.
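A simple sketch of the center-weighted (feathered) blending mentioned above, assuming the images have already been warped onto a common canvas and are accompanied by masks of their valid pixels; the distance-transform weight is one common choice, not the exact scheme of any particular paper.

```python
# Feathered blending: each image contributes with a weight that grows towards
# its own center (approximated by the distance to the nearest invalid pixel).
import cv2
import numpy as np

def feather_blend(images, masks):
    """images: list of BGR canvases of equal size; masks: uint8 masks (255 = valid)."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros(images[0].shape[:2], dtype=np.float64)
    for img, mask in zip(images, masks):
        # Distance transform gives larger weights away from the image border/seam.
        weight = cv2.distanceTransform(mask, cv2.DIST_L2, 3).astype(np.float64)
        acc += img.astype(np.float64) * weight[..., None]
        weight_sum += weight
    weight_sum[weight_sum == 0] = 1.0            # avoid division by zero outside coverage
    return (acc / weight_sum[..., None]).astype(np.uint8)
```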
Even after gain compensation, some image edges are still visible due to a number of unmodelled effects, such as vignetting (intensity decreases towards the edge of the image), parallax effects due to unwanted motion of the optical centre, mis-registration errors due to mismodelling of the camera, radial distortion, and so on. For these reasons, Brown and Lowe propose a blending strategy called multi-band blending.

==Projective layouts==