I'm not an expert in image segmentation techniques, but since nobody has answered yet, here are my two cents:
One simple approach is to perform a Euclidean distance segmentation between the points. For example, RGB(10, 10, 10) is closer to RGB(9, 9, 9) than to RGB(1, 1, 10). But one key point to take into account is the correlation between your variables: gray pixels are correlated, because a gray value has all three RGB components very similar. That is why the Mahalanobis distance is often used in image segmentation; it differs from the Euclidean distance in that it takes the correlations of the data set into account.
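Here is a rough, untested sketch in Python/NumPy of what I mean; the covariance matrix is just something I made up to illustrate strongly correlated R, G, B channels, not a value estimated from your image:

```
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def mahalanobis(p, q, cov_inv):
    d = np.asarray(p, float) - np.asarray(q, float)
    return float(np.sqrt(d @ cov_inv @ d))

ref = [10, 10, 10]

# Plain Euclidean distances: (9, 9, 9) is the closer pixel.
print(euclidean(ref, [9, 9, 9]))    # ~1.73
print(euclidean(ref, [1, 1, 10]))   # ~12.73

# Illustrative covariance for "gray" pixels: the channels vary together
# (strong positive correlation), so a change that keeps R = G = B is "cheap",
# while a change that breaks the gray balance is penalized.
cov = np.array([[4.0, 3.5, 3.5],
                [3.5, 4.0, 3.5],
                [3.5, 3.5, 4.0]])
cov_inv = np.linalg.inv(cov)
print(mahalanobis(ref, [9, 9, 9], cov_inv))    # ~0.52
print(mahalanobis(ref, [1, 1, 10], cov_inv))   # ~10.85
```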
In order to use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates the covariance matrix of each class, usually based on samples known to belong to each class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to that class for which the Mahalanobis distance is minimal.
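In Python/NumPy that rule could look roughly like this; the class names and training pixels below are made-up values, just to show the idea:

```
import numpy as np

def fit_class(samples):
    """Estimate the mean and inverse covariance of one class from its training pixels."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    # Small regularization in case the samples are nearly collinear (e.g. pure grays).
    cov = np.cov(samples, rowvar=False) + np.eye(samples.shape[1]) * 1e-6
    return mean, np.linalg.inv(cov)

def mahalanobis_sq(x, mean, cov_inv):
    d = np.asarray(x, dtype=float) - mean
    return float(d @ cov_inv @ d)

def classify(x, classes):
    """Assign x to the class with the minimal Mahalanobis distance."""
    return min(classes, key=lambda name: mahalanobis_sq(x, *classes[name]))

# Hypothetical training pixels for two classes (the values are made up).
classes = {
    "gray_text": fit_class([[40, 40, 42], [55, 54, 55], [60, 61, 59], [48, 47, 50]]),
    "blue_background": fit_class([[10, 12, 60], [12, 15, 70], [8, 10, 55], [14, 16, 65]]),
}

print(classify([50, 50, 52], classes))   # -> gray_text
print(classify([11, 13, 62], classes))   # -> blue_background
```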
This is an example that performs image segmentation using the Mahalanobis distance:
Mahalanobis distance segmentation
First, you use the mouse to select pixels that act as the "training set" and are used to build the covariance matrix. Then, the Mahalanobis distance to that model is used to segment your images.
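Leaving the mouse interaction aside, a rough sketch of that pipeline could look like this (the "clicked" pixels are just a stand-in array here):

```
import numpy as np

def segment_by_mahalanobis(image, training_pixels, threshold=3.0):
    """Return a boolean mask of pixels close (in Mahalanobis distance)
    to the model built from the user-selected training pixels."""
    training_pixels = np.asarray(training_pixels, dtype=float)
    mean = training_pixels.mean(axis=0)
    # Small regularization in case the selected pixels are nearly collinear.
    cov = np.cov(training_pixels, rowvar=False) + np.eye(3) * 1e-6
    cov_inv = np.linalg.inv(cov)

    diff = image.reshape(-1, 3).astype(float) - mean          # (H*W, 3)
    dist_sq = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared distances
    return (np.sqrt(dist_sq) < threshold).reshape(image.shape[:2])

# Stand-ins for a real image and for the pixels selected with the mouse.
img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
clicked_pixels = img.reshape(-1, 3)[:50]        # pretend these were clicked
mask = segment_by_mahalanobis(img, clicked_pixels)
```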
The Mahalanobis distance is also used in background subtraction (discriminating between foreground and background pixels by building and maintaining a model of the background). You could also build a model of the background (the dark-blue background) and try to segment the foreground (the dark-gray text).
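Reusing the `segment_by_mahalanobis` sketch and the `img` array above, one way to try that could be to sample a few pixels that are presumably background (here I just take the image corners and edge midpoints, which is only a guess about where the background shows) and keep the complement of the mask as the foreground text:

```
# Guess: the corners and edge midpoints show the dark-blue background.
h, w = img.shape[:2]
background_samples = np.vstack([img[0, 0], img[0, w - 1],
                                img[h - 1, 0], img[h - 1, w - 1],
                                img[0, w // 2], img[h - 1, w // 2]])
# Pixels far from the background model are treated as foreground (text).
foreground_mask = ~segment_by_mahalanobis(img, background_samples, threshold=3.0)
```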