Functioning as a post-processing filter on a labelled image This approach is very effective against small regions caused by noise, which are usually formed by one or only a few pixels. The most probable label in the surrounding area is assigned to these regions. However, the method has a drawback: small regions can also be correct regions rather than noise, and in that case the method actually makes the classification worse. This approach is widely used in remote sensing applications.
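A minimal sketch of such a post-processing filter is a majority (mode) filter: each pixel is relabelled to the most frequent label in its neighbourhood, which suppresses one-pixel noise regions. The window size and the use of a mode filter are illustrative choices, not prescribed by the text above.

```python
import numpy as np
from collections import Counter

def majority_filter(labels, size=3):
    """Relabel each pixel to the most frequent label in its size x size
    neighbourhood, suppressing regions formed by one or a few pixels."""
    pad = size // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = Counter(window).most_common(1)[0][0]
    return out

# A single noisy pixel (label 9) inside a homogeneous region of label 1:
img = np.ones((5, 5), dtype=int)
img[2, 2] = 9
print(majority_filter(img))  # the stray 9 is replaced by 1
```

Note the drawback mentioned above: a genuinely correct one-pixel region would be erased by this filter just as readily as noise.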
Improving the post-processing classification This is a two-stage classification process: • For each pixel, label the pixel and form a new feature vector for it. • Use the new feature vector, combined with the contextual information, to assign the final label to the pixel.
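The two-stage process can be sketched as follows. The choice of a nearest-class-mean classifier for stage 1 and a label histogram of the 3 × 3 neighbourhood as the new feature vector for stage 2 are illustrative assumptions, not methods specified by the text above.

```python
import numpy as np

def two_stage_classify(img, class_means):
    """Stage 1: label each pixel by the nearest class mean (spectral only).
    Stage 2: form a new feature vector per pixel (a histogram of stage-1
    labels in its 3x3 neighbourhood) and assign the final label from it."""
    # Stage 1: per-pixel spectral labelling
    dists = np.abs(img[..., None] - np.asarray(class_means))  # (H, W, R)
    stage1 = dists.argmin(axis=-1)

    # Stage 2: contextual feature = histogram of stage-1 labels around pixel
    R = len(class_means)
    pad = np.pad(stage1, 1, mode="edge")
    h, w = stage1.shape
    final = np.empty_like(stage1)
    for i in range(h):
        for j in range(w):
            hist = np.bincount(pad[i:i + 3, j:j + 3].ravel(), minlength=R)
            final[i, j] = hist.argmax()  # dominant class in the context
    return final
```

A pixel whose spectral value alone suggests one class can thus be overruled when every pixel around it belongs to another class.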
Merging the pixels in earlier stages Instead of using single pixels, neighbouring pixels can be merged into homogeneous regions that benefit from the contextual information. These regions are then provided to the classifier.
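One way to sketch this idea: form homogeneous regions as connected components of a simple threshold, then classify each region as a whole from a per-region feature (here its mean grey level). The thresholding criterion and nearest-mean classifier are illustrative assumptions.

```python
import numpy as np

def connected_regions(mask):
    """Label 4-connected components of a boolean mask by flood fill."""
    regions = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and regions[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and regions[y, x] == 0:
                        regions[y, x] = current
                        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return regions, current

def classify_regions(img, threshold, class_means):
    """Merge neighbouring pixels into homogeneous regions, then feed each
    region (represented by its mean grey level) to the classifier.
    Pixels below the threshold keep the default label 0."""
    regions, n = connected_regions(img > threshold)
    means = np.asarray(class_means)
    out = np.zeros(img.shape, dtype=int)
    for r in range(1, n + 1):
        mask = regions == r
        out[mask] = np.abs(means - img[mask].mean()).argmin()
    return out
```

Classifying regions rather than pixels makes the label map spatially coherent by construction.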
Acquiring pixel features from the neighbourhood The original spectral data can be enriched by adding the contextual information carried by neighbouring pixels, or on some occasions even replaced by it. This kind of pre-processing method is widely used in textured image recognition. Typical approaches include mean values, variances, texture descriptions, etc.
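A minimal sketch of this pre-processing: augment each pixel's grey level with the mean and variance of its neighbourhood, producing a three-component feature vector per pixel. The particular feature set is one of the typical choices named above.

```python
import numpy as np

def contextual_features(img, size=3):
    """Enrich each pixel's grey level with contextual information:
    the mean and variance of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    feats = np.empty((h, w, 3))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            feats[i, j] = (img[i, j], window.mean(), window.var())
    return feats
```

The resulting (H, W, 3) array can be fed to any per-pixel classifier in place of the raw grey levels; in a homogeneous area the variance component is near zero, while in a textured area it is large.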
Combining spectral and spatial information The classifier uses both the grey level of a pixel and its neighbourhood (the contextual information) to assign labels. In this case the information is a combination of spectral and spatial information.
Powered by the Bayes minimum error classifier Contextual classification of image data is based on the Bayes minimum error classifier (also known as a
naive Bayes classifier).
Presenting the pixel: • A pixel is denoted as x_0. • The neighbourhood of the pixel x_0 is denoted as N(x_0). • The grey-level value of a pixel x_i is denoted as f(x_i). • Each pixel is represented by the vector
:\xi = \left ( f(x_0), f(x_1), \ldots, f(x_k) \right ), \qquad x_i \in N(x_0); \quad i = 1, \ldots, k
• The labels (classification) of the pixels in the neighbourhood N(x_0) are represented as a vector
:\eta = \left ( \theta_0, \theta_1, \ldots, \theta_k \right ), \qquad \theta_i \in \left \{ \omega_1, \omega_2, \ldots, \omega_R \right \}
where \omega_s denotes the assigned class. • A vector represents the labels in the neighbourhood N(x_0) without the pixel x_0:
:\hat \eta = \left ( \theta_1, \theta_2, \ldots, \theta_k \right )
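The vector \xi can be built directly from these definitions. The sketch below uses the 4-connectivity neighbourhood and assumes, for brevity, that the pixel is not on the image border.

```python
import numpy as np

def xi_vector(f, y, x):
    """Build xi = (f(x0), f(x1), ..., f(xk)) for the pixel x0 at (y, x),
    where x1..xk form the 4-connectivity neighbourhood N(x0).
    Assumes (y, x) is not on the image border."""
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # x0 first, then N(x0)
    return np.array([f[y + dy, x + dx] for dy, dx in offsets])

f = np.arange(9).reshape(3, 3)  # toy image of grey levels f(x_i)
print(xi_vector(f, 1, 1))  # → [4 1 7 3 5]
```

With 4-connectivity, k = 4, so \xi has five components; with 8-connectivity it would have nine.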
The neighbourhood: There is no limit on the size of the neighbourhood, but it is considered to be relatively small for each pixel x_0. A reasonable size is a 3 \times 3 window with 4-connectivity or 8-connectivity, with x_0 placed in the centre.
(Figures: the 4-connectivity neighbourhood and the 8-connectivity neighbourhood of x_0.)
The calculation: Apply the minimum error classification to a pixel x_0: if the posterior probability of a class \omega_r given the pixel x_0 is the highest among all classes, assign \omega_r as its class.
: \theta_0 = \omega_r \quad\text{ if }\quad P(\omega_r\mid f(x_0)) = \max_{s=1,2,\ldots,R} P(\omega_s\mid f(x_0))
The contextual classification rule is described below; it uses the feature vector \xi rather than f(x_0) alone.
: \theta_0 = \omega_r \quad\text{ if }\quad P(\omega_r\mid\xi) = \max_{s=1,2,\ldots,R} P(\omega_s\mid\xi)
Bayes' formula is used to calculate the posterior probability P(\omega_s\mid\xi):
: P(\omega_s\mid\xi) = \frac{p(\xi\mid\omega_s)P(\omega_s)}{p \left ( \xi \right )}
The number of vectors equals the number of pixels in the image: the classifier uses one vector for each pixel x_i, and the vector is generated from that pixel's neighbourhood.
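The posterior computation can be sketched as follows. The class-conditional density p(\xi\mid\omega_s) is modelled here with a naive-Bayes assumption (conditionally independent Gaussian components of \xi); the Gaussian form and the parameter names are illustrative assumptions, not specified by the text above.

```python
import numpy as np

def posteriors(xi, means, variances, priors):
    """P(omega_s | xi) via Bayes' formula, assuming the components of xi
    are conditionally independent Gaussians per class.
    means, variances: shape (R, k+1); priors: shape (R,)."""
    means, variances, priors = map(np.asarray, (means, variances, priors))
    # log p(xi | omega_s), summed over the independent components of xi
    log_lik = -0.5 * np.sum(
        np.log(2 * np.pi * variances) + (xi - means) ** 2 / variances, axis=1)
    log_post = log_lik + np.log(priors)
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()                  # normalising by p(xi)

def contextual_label(xi, means, variances, priors):
    """theta_0 = omega_r where r maximises P(omega_s | xi)."""
    return posteriors(xi, means, variances, priors).argmax()
```

Since p(\xi) is the same for every class, the normalisation does not change which class attains the maximum; it is kept here so the returned values are true posterior probabilities.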
The basic steps of contextual image classification: • Calculate the feature vector \xi for each pixel. • Estimate the parameters of the probability distribution p(\xi\mid\omega_s) and the prior probabilities P(\omega_s). • Calculate the posterior probabilities P(\omega_s\mid\xi) and assign a label \theta_0 to each pixel, yielding the image classification result.

== Algorithms ==