
Bicubic interpolation

In mathematics, bicubic interpolation is an extension of cubic spline interpolation for interpolating data points on a two-dimensional regular grid. The interpolated surface is smoother than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.

Computation
Image:Interpolation-bicubic.svg|thumb|right|Bicubic interpolation on the square [0,4] \times [0,4] consisting of 25 unit squares patched together, as per Matplotlib's implementation. Colour indicates function value; the black dots are the locations of the prescribed data being interpolated. (Companion figures showing other interpolants on the same dataset, whose derivatives are not continuous over the square boundaries, are omitted here.)

Suppose the function values f and the derivatives f_x, f_y and f_{xy} are known at the four corners (0,0), (1,0), (0,1), and (1,1) of the unit square. The interpolated surface can then be written as

p(x,y) = \sum\limits_{i=0}^3 \sum\limits_{j=0}^3 a_{ij} x^i y^j.

The interpolation problem consists of determining the 16 coefficients a_{ij}. Matching p(x,y) with the function values yields four equations:

• f(0,0) = p(0,0) = a_{00},
• f(1,0) = p(1,0) = a_{00} + a_{10} + a_{20} + a_{30},
• f(0,1) = p(0,1) = a_{00} + a_{01} + a_{02} + a_{03},
• f(1,1) = p(1,1) = \textstyle \sum\limits_{i=0}^3 \sum\limits_{j=0}^3 a_{ij}.

Likewise, there are eight equations for the derivatives in the x and y directions:

• f_x(0,0) = p_x(0,0) = a_{10},
• f_x(1,0) = p_x(1,0) = a_{10} + 2a_{20} + 3a_{30},
• f_x(0,1) = p_x(0,1) = a_{10} + a_{11} + a_{12} + a_{13},
• f_x(1,1) = p_x(1,1) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=0}^3 a_{ij} i,
• f_y(0,0) = p_y(0,0) = a_{01},
• f_y(1,0) = p_y(1,0) = a_{01} + a_{11} + a_{21} + a_{31},
• f_y(0,1) = p_y(0,1) = a_{01} + 2a_{02} + 3a_{03},
• f_y(1,1) = p_y(1,1) = \textstyle \sum\limits_{i=0}^3 \sum\limits_{j=1}^3 a_{ij} j.

And four equations for the xy mixed partial derivative:

• f_{xy}(0,0) = p_{xy}(0,0) = a_{11},
• f_{xy}(1,0) = p_{xy}(1,0) = a_{11} + 2a_{21} + 3a_{31},
• f_{xy}(0,1) = p_{xy}(0,1) = a_{11} + 2a_{12} + 3a_{13},
• f_{xy}(1,1) = p_{xy}(1,1) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=1}^3 a_{ij} i j.
The expressions above have used the following identities:

p_x(x,y) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=0}^3 a_{ij} i x^{i-1} y^j,
p_y(x,y) = \textstyle \sum\limits_{i=0}^3 \sum\limits_{j=1}^3 a_{ij} x^i j y^{j-1},
p_{xy}(x,y) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=1}^3 a_{ij} i x^{i-1} j y^{j-1}.

This procedure yields a surface p(x,y) on the unit square [0,1] \times [0,1] that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.

Grouping the unknown parameters a_{ij} in a vector

\alpha=\left[\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}\right]^T

and letting

x=\left[\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_x(0,0)&f_x(1,0)&f_x(0,1)&f_x(1,1)&f_y(0,0)&f_y(1,0)&f_y(0,1)&f_y(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}\right]^T,

the above system of equations can be reformulated into the matrix linear equation A\alpha=x.
Inverting the matrix gives the more useful linear equation A^{-1}x=\alpha, where A^{-1}=\left[\begin{smallmatrix}\begin{array}{rrrrrrrrrrrrrrrr} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 \\ -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 \\ 9 & -9 & -9 & 9 & 6 & 3 & -6 & -3 & 6 & -6 & 3 & -3 & 4 & 2 & 2 & 1 \\ -6 & 6 & 6 & -6 & -3 & -3 & 3 & 3 & -4 & 4 & -2 & 2 & -2 & -2 & -1 & -1 \\ 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ -6 & 6 & 6 & -6 & -4 & -2 & 4 & 2 & -3 & 3 & -3 & 3 & -2 & -1 & -2 & -1 \\ 4 & -4 & -4 & 4 & 2 & 2 & -2 & -2 & 2 & -2 & 2 & -2 & 1 & 1 & 1 & 1 \end{array}\end{smallmatrix}\right], which allows \alpha to be calculated quickly and easily. 
There is another concise matrix form for the 16 coefficients:

\begin{bmatrix} f(0,0)&f(0,1)&f_y (0,0)&f_y (0,1)\\f(1,0)&f(1,1)&f_y (1,0)&f_y (1,1)\\f_x (0,0)&f_x (0,1)&f_{xy} (0,0)&f_{xy} (0,1)\\f_x (1,0)&f_x (1,1)&f_{xy} (1,0)&f_{xy} (1,1) \end{bmatrix} = \begin{bmatrix} 1&0&0&0\\1&1&1&1\\0&1&0&0\\0&1&2&3 \end{bmatrix} \begin{bmatrix} a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33} \end{bmatrix} \begin{bmatrix} 1&1&0&0\\0&1&1&1\\0&1&0&2\\0&1&0&3 \end{bmatrix},

or

\begin{bmatrix} a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33} \end{bmatrix} = \begin{bmatrix} 1&0&0&0\\0&0&1&0\\-3&3&-2&-1\\2&-2&1&1 \end{bmatrix} \begin{bmatrix} f(0,0)&f(0,1)&f_y (0,0)&f_y (0,1)\\f(1,0)&f(1,1)&f_y (1,0)&f_y (1,1)\\f_x (0,0)&f_x (0,1)&f_{xy} (0,0)&f_{xy} (0,1)\\f_x (1,0)&f_x (1,1)&f_{xy} (1,0)&f_{xy} (1,1) \end{bmatrix} \begin{bmatrix} 1&0&-3&2\\0&0&3&-2\\0&1&-2&1\\0&0&-1&1 \end{bmatrix},

where

p(x,y)=\begin{bmatrix}1 &x&x^2&x^3\end{bmatrix} \begin{bmatrix} a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33} \end{bmatrix} \begin{bmatrix}1\\y\\y^2\\y^3\end{bmatrix}.
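As a concrete check of the compact 4x4 matrix form, here is a minimal Python/NumPy sketch (the function names are illustrative, not from any particular library) that builds the coefficient matrix a_{ij} from the corner values and derivatives, then evaluates p(x, y) on the unit square:

```python
import numpy as np

# Left factor matrix of the compact 4x4 form; the right factor is its transpose.
L = np.array([[ 1,  0,  0,  0],
              [ 0,  0,  1,  0],
              [-3,  3, -2, -1],
              [ 2, -2,  1,  1]], dtype=float)

def bicubic_coefficients(F):
    """F is the 4x4 matrix
       [[f(0,0),  f(0,1),  fy(0,0),  fy(0,1)],
        [f(1,0),  f(1,1),  fy(1,0),  fy(1,1)],
        [fx(0,0), fx(0,1), fxy(0,0), fxy(0,1)],
        [fx(1,0), fx(1,1), fxy(1,0), fxy(1,1)]].
    Returns the 4x4 coefficient matrix a[i][j] (i = power of x, j = power of y)."""
    return L @ F @ L.T

def p(x, y, a):
    """Evaluate p(x,y) = [1 x x^2 x^3] a [1 y y^2 y^3]^T on the unit square."""
    xv = np.array([1.0, x, x**2, x**3])
    yv = np.array([1.0, y, y**2, y**3])
    return xv @ a @ yv
```

For data consistent with f(x,y) = xy (so f_x = y, f_y = x, f_{xy} = 1 at the corners), the solve reproduces the bilinear surface exactly, since the 16 constraints determine p uniquely.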
Extension to rectilinear grids
Often, applications call for bicubic interpolation using data on a rectilinear grid, rather than the unit square. In this case, the identities for p_x, p_y, and p_{xy} become

p_x(x,y) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=0}^3 \frac{a_{ij} i x^{i-1} y^j}{\Delta x},
p_y(x,y) = \textstyle \sum\limits_{i=0}^3 \sum\limits_{j=1}^3 \frac{a_{ij} x^i j y^{j-1}}{\Delta y},
p_{xy}(x,y) = \textstyle \sum\limits_{i=1}^3 \sum\limits_{j=1}^3 \frac{a_{ij} i x^{i-1} j y^{j-1}}{\Delta x \Delta y},

where \Delta x is the x spacing of the cell containing the point (x,y), and similarly for \Delta y. In this case, the most practical approach to computing the coefficients \alpha is to let

x=\left[\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&\Delta x f_x(0,0)&\Delta x f_x(1,0)&\Delta x f_x(0,1)&\Delta x f_x(1,1)&\Delta y f_y(0,0)&\Delta y f_y(1,0)&\Delta y f_y(0,1)&\Delta y f_y(1,1)&\Delta x \Delta y f_{xy}(0,0)&\Delta x \Delta y f_{xy}(1,0)&\Delta x \Delta y f_{xy}(0,1)&\Delta x \Delta y f_{xy}(1,1)\end{smallmatrix}\right]^T,

then to solve \alpha=A^{-1}x with A as before. Next, the normalized interpolating variables are computed as

\begin{align} \overline{x} &= \frac{x-x_0}{x_1-x_0}, \\ \overline{y} &= \frac{y-y_0}{y_1-y_0} \end{align}

where x_0, x_1, y_0, and y_1 are the x and y coordinates of the grid points surrounding the point (x,y). The interpolating surface then becomes

p(x,y) = \sum\limits_{i=0}^3 \sum\limits_{j=0}^3 a_{ij} {\overline{x}}^i {\overline{y}}^j.
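The rectilinear-grid recipe (scale the derivatives by the cell size, solve on the unit square, evaluate at the normalized coordinates) can be sketched in Python/NumPy as follows; the function name and argument layout are illustrative assumptions, and the compact 4x4 factorization from the previous section is used in place of the 16x16 system:

```python
import numpy as np

def interpolate_cell(x, y, x0, x1, y0, y1, f, fx, fy, fxy):
    """Bicubic interpolation inside one rectilinear grid cell [x0,x1] x [y0,y1].
    f, fx, fy, fxy are 2x2 nested lists of corner values/derivatives, indexed
    [i][j] with i along x and j along y.  Derivatives are with respect to the
    unnormalized coordinates, so they are scaled by the cell size first."""
    dx, dy = x1 - x0, y1 - y0
    xb, yb = (x - x0) / dx, (y - y0) / dy   # normalized coordinates in [0, 1]
    # Corner-data matrix of the compact form, with scaled derivatives.
    F = np.array([
        [f[0][0],     f[0][1],     dy*fy[0][0],     dy*fy[0][1]],
        [f[1][0],     f[1][1],     dy*fy[1][0],     dy*fy[1][1]],
        [dx*fx[0][0], dx*fx[0][1], dx*dy*fxy[0][0], dx*dy*fxy[0][1]],
        [dx*fx[1][0], dx*fx[1][1], dx*dy*fxy[1][0], dx*dy*fxy[1][1]],
    ])
    L = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                  [-3, 3, -2, -1], [2, -2, 1, 1]], dtype=float)
    a = L @ F @ L.T
    return np.array([1, xb, xb**2, xb**3]) @ a @ np.array([1, yb, yb**2, yb**3])
```

For data sampled from f(x,y) = x + y on the cell [0,2] x [0,4], the result at (1, 2) is exactly 3, since the plane satisfies all 16 constraints.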
Finding derivatives from function values
If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.

To find either of the single derivatives, f_x or f_y, using this method, find the slope between the two surrounding points along the appropriate axis. For example, to calculate f_x at a point, find f(x,y) at the points to its left and right and calculate the slope of the straight line between them; proceed similarly for f_y.

To find the cross derivative f_{xy}, take the derivative along both axes, one at a time. For example, one can first use the f_x procedure to find the x derivatives at the points above and below the target point, then apply the f_y procedure to those values (rather than, as usual, to the values of f at those points) to obtain f_{xy}(x,y) at the target point. (One can also work in the opposite direction, first calculating f_y and then f_x from those values; the two orders give equivalent results.)

At the edges of the dataset, where some of the surrounding points are missing, the missing points can be approximated by a number of methods. A simple and common method is to assume that the slope from the existing point to the target point continues without further change, and to use this to calculate a hypothetical value for the missing point.
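A minimal sketch of this finite-difference estimation, assuming unit grid spacing and using NumPy's np.gradient (central differences in the interior, one-sided differences at the edges, which matches the continue-the-slope edge treatment for linear data):

```python
import numpy as np

def estimate_derivatives(f):
    """Estimate f_x, f_y, f_xy on a 2D grid of function values by finite
    differences, assuming unit grid spacing.  f is indexed f[i, j] with
    i along x and j along y."""
    fx = np.gradient(f, axis=0)    # slope between left/right neighbours
    fy = np.gradient(f, axis=1)    # slope between lower/upper neighbours
    fxy = np.gradient(fx, axis=1)  # cross derivative: d/dy of f_x
    return fx, fy, fxy
```

On data sampled from f(x,y) = xy these estimates are exact (fx = y, fy = x, fxy = 1), because the differences of a bilinear function are exact.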
Bicubic convolution algorithm
Bicubic spline interpolation requires the solution of the linear system described above for each grid cell. An interpolator with similar properties can be obtained by applying a convolution with the following kernel in both dimensions:

W(x) = \begin{cases} (a+2)|x|^3-(a+3)|x|^2+1 & \text{for } |x| \leq 1, \\ a|x|^3-5a|x|^2+8a|x|-4a & \text{for } 1 < |x| < 2, \\ 0 & \text{otherwise,} \end{cases}

where a is usually set to -0.5 or -0.75. Note that W(0)=1 and W(n)=0 for all nonzero integers n.

This approach was proposed by Rifman of TRW Systems Group, targeting image data from ERTS (Earth Resources Technology Satellite, later Landsat 1). Initially, a was fixed at -1, but it was later parameterized by Simon of the same company, making -0.5, -0.75, and -1.0 meaningful values. However, these proposals did not fully present the derivation or formulas, so Keys later re-derived the method in complete form, demonstrating that with a=-0.5 the interpolant achieves third-order convergence with respect to the sampling interval of the original function.

The convolution kernel W(x) is derived as follows:

W(x) = \begin{cases} A_1|x|^3+B_1|x|^2+C_1|x|+D_1 & \text{for } |x| \leq 1, \\ A_2|x|^3+B_2|x|^2+C_2|x|+D_2 & \text{for } 1 < |x| < 2, \\ 0 & \text{otherwise.} \end{cases}

With the conditions W(0)=1 and W(1)=W(2)=0, and the first-derivative conditions W'(0)=W'(2)=0 and W'(1)=a, we get:

• W(0)=D_1=1,
• W(1^-)=A_1+B_1+C_1+D_1=0,
• W(1^+)=A_2+B_2+C_2+D_2=0,
• W(2^-)=8A_2+4B_2+2C_2+D_2=0,
• W'(0)=C_1=0,
• W'(1^-)=3A_1+2B_1+C_1=a,
• W'(1^+)=3A_2+2B_2+C_2=a,
• W'(2^-)=12A_2+4B_2+C_2=0.

Solving these simultaneous equations gives

A_1=a+2, \quad B_1=-(a+3), \quad C_1=0, \quad D_1=1,
A_2=a, \quad B_2=-5a, \quad C_2=8a, \quad D_2=-4a,

which yields the kernel above.
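The derived kernel can be written directly in Python (the function name is illustrative); it makes the properties W(0)=1 and W(n)=0 for nonzero integers n easy to check:

```python
def W(x, a=-0.5):
    """Cubic convolution kernel with sharpness parameter a
    (a = -0.5 is Keys' third-order choice)."""
    x = abs(x)
    if x <= 1:
        return (a + 2)*x**3 - (a + 3)*x**2 + 1
    if x < 2:
        return a*x**3 - 5*a*x**2 + 8*a*x - 4*a
    return 0.0
```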
Matrix notation

Using matrix notation, the one-dimensional interpolator can be expressed more conveniently as

p(t) = \begin{bmatrix} 1 & t & t^2 & t^3 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 & 0 \\ a & 0 & -a & 0 \\ -2a & -a-3 & 2a+3 & a \\ a & a+2 & -a-2 & -a \end{bmatrix} \begin{bmatrix} f_{-1} \\ f_0 \\ f_1 \\ f_2 \end{bmatrix}

for t between 0 and 1. Note that one-dimensional cubic convolution interpolation requires 4 sample points: for each query point, two samples lie to its left and two to its right. These points are indexed from -1 to 2 in this text, and the distance from the point indexed 0 to the query point is denoted by t. With a=-0.5, this coincides with the Catmull-Rom spline.

For two dimensions, the kernel is applied first in x and then in y:

\begin{align} b_{-1}&= p(t_x, f_{(-1,-1)}, f_{(0,-1)}, f_{(1,-1)}, f_{(2,-1)}), \\[1ex] b_{0} &= p(t_x, f_{(-1,0)}, f_{(0,0)}, f_{(1,0)}, f_{(2,0)}), \\[1ex] b_{1} &= p(t_x, f_{(-1,1)}, f_{(0,1)}, f_{(1,1)}, f_{(2,1)}), \\[1ex] b_{2} &= p(t_x, f_{(-1,2)}, f_{(0,2)}, f_{(1,2)}, f_{(2,2)}), \end{align}

p(x,y) = p(t_y, b_{-1}, b_{0}, b_{1}, b_{2}).

Differential continuity

The interpolant is continuous in the first derivative by construction, which can be verified using the matrix notation:

p_i'(1) = \begin{bmatrix} 0 & a & 0 & -a \end{bmatrix} \begin{bmatrix} f_{i-1} \\ f_i \\ f_{i+1} \\ f_{i+2} \end{bmatrix} = p_{i+1}'(0) = \begin{bmatrix} a & 0 & -a & 0 \end{bmatrix} \begin{bmatrix} f_i \\ f_{i+1} \\ f_{i+2} \\ f_{i+3} \end{bmatrix}

Hence, it is continuous in the first derivative. However,

p_i''(1) = \begin{bmatrix} 2a & 4a+6 & -2a-6 & -4a \end{bmatrix} \begin{bmatrix} f_{i-1} \\ f_i \\ f_{i+1} \\ f_{i+2} \end{bmatrix} \neq p_{i+1}''(0) = \begin{bmatrix} -4a & -2a-6 & 4a+6 & 2a \end{bmatrix} \begin{bmatrix} f_i \\ f_{i+1} \\ f_{i+2} \\ f_{i+3} \end{bmatrix}

Hence, it is not continuous in the second derivative.
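The one-dimensional matrix form and its two-pass 2-D application can be sketched in Python/NumPy as follows; the function names and the layout of the 4x4 sample patch are illustrative assumptions:

```python
import numpy as np

def p(t, fm1, f0, f1, f2, a=-0.5):
    """1-D cubic convolution: interpolate between samples f0 and f1,
    with fm1 and f2 as outer neighbours; t in [0, 1]."""
    B = np.array([[0.0,    1.0,    0.0,    0.0],
                  [a,      0.0,   -a,      0.0],
                  [-2*a,  -a - 3,  2*a + 3, a ],
                  [a,      a + 2, -a - 2,  -a ]])
    return np.array([1.0, t, t**2, t**3]) @ B @ np.array([fm1, f0, f1, f2])

def bicubic(tx, ty, f, a=-0.5):
    """2-D cubic convolution on a 4x4 sample patch f, where f[i+1, j+1]
    holds the sample at grid offset (i, j), i, j in {-1, 0, 1, 2};
    (tx, ty) is the query position relative to the sample at (0, 0)."""
    b = [p(tx, *f[:, j], a=a) for j in range(4)]  # pass 1: interpolate in x
    return p(ty, b[0], b[1], b[2], b[3], a=a)     # pass 2: interpolate in y
```

At t = 0 and t = 1 the interpolator reproduces f0 and f1 exactly, and a constant patch is reproduced because the four weights sum to 1.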
Comparison with other methods

(Figure comparing the kernel functions of each method, omitted here.)
Use in computer graphics
The bicubic algorithm is frequently used for scaling images and video for display (see bitmap resampling). It preserves fine detail better than the common bilinear algorithm. The bicubic algorithm is also used for image downscaling, but because resampling during downscaling differs from the effect of zooming out in real optics, other methods may be more suitable.

Effect of parameter a

Because of the negative lobes of the kernel, bicubic interpolation causes overshoot (haloing). This can cause clipping and is an artifact (see also ringing artifacts), but it increases acutance (apparent sharpness) and can be desirable. The effect is larger when the parameter a is -0.75 or -1.0 than when it is -0.5.

When a=-0.5, the interpolation result matches the Taylor approximation up to the second derivative (excluding the remainder term) of the original image (the continuous function before sampling). A clear illustration is an original image that is a linear function (a simple gradient image): the interpolated result is a linear function only when a=-0.5. For example, if the input is

\begin{bmatrix} f_{i-1} & f_i & f_{i+1} & f_{i+2} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 & 2 \end{bmatrix},

the interpolated result is

p(t) = -2(2a+1)t^3+3(2a+1)t^2-2at,

which is a linear function (the quadratic and cubic terms vanish) only when a=-0.5. Below are a simple gradient image enlarged 16x and a portion of the standard test image "cameraman" enlarged 8x. The vertical stripes in the former and the artificial mottled pattern around the temples in the latter are artifacts caused by this effect. Note that when a=-0.75, the second derivative of the convolution kernel is continuous at x=1.

Extrapolation

Two pixels of the input image are required on each side (left and right, or top and bottom) of the query point for interpolation. Since no samples exist beyond the outer edges of the image, extrapolation is required, and the extrapolation method varies depending on the implementation.
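The linearity claim is easy to verify numerically. This small Python sketch (the function name is illustrative) evaluates the closed-form p(t) above for several values of a and measures the deviation from the ideal linear result p(t) = t:

```python
def p_gradient(t, a):
    """Cubic convolution applied to the linear samples (-1, 0, 1, 2):
    the closed form p(t) = -2(2a+1)t^3 + 3(2a+1)t^2 - 2at."""
    return -2*(2*a + 1)*t**3 + 3*(2*a + 1)*t**2 - 2*a*t

# Deviation from the ideal linear result at t = 0.25 for common choices of a.
# The quadratic and cubic terms carry the factor (2a + 1), which vanishes
# only at a = -0.5; a = -0.75 and a = -1.0 bend the gradient visibly.
deviations = {a: p_gradient(0.25, a) - 0.25 for a in (-0.5, -0.75, -1.0)}
```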
One common example is to copy the outermost pixels outward and extrapolate from them.