Technique to introduce normalisation/consistency to std dev comparison?
I am implementing a very simple segmentation algorithm for single-channel images. The algorithm works like so (a code sketch follows the steps below):
For a single-channel image:
- Calculate the standard deviation, i.e., measure how much the luminosity varies across the image.
If the stddev > 15 (aka threshold):
- Divide the image into 4 cells/images
- For each cell:
- Repeat steps 1 and 2 (recurse)
Else:
- Draw a rectangle on the source image to signify a segment lies in these bounds.
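
A minimal sketch of that recursion, assuming the C++ OpenCV API (the post's img.rows/img.cols suggests it); function and file names here are illustrative, not taken from the original code:

```cpp
#include <opencv2/opencv.hpp>

// Quadtree-style split: if a cell's stddev exceeds the threshold, split it
// into four quadrants and recurse; otherwise draw its bounds as a segment.
void segment(cv::Mat& display, const cv::Mat& gray, cv::Rect roi, double threshold)
{
    cv::Scalar mean, stddev;
    cv::meanStdDev(gray(roi), mean, stddev);               // step 1: measure variation

    if (stddev[0] > threshold && roi.width > 1 && roi.height > 1)
    {
        // step 2: divide into 4 cells and repeat
        int hw = roi.width / 2, hh = roi.height / 2;
        segment(display, gray, cv::Rect(roi.x,      roi.y,      hw,             hh),              threshold);
        segment(display, gray, cv::Rect(roi.x + hw, roi.y,      roi.width - hw, hh),              threshold);
        segment(display, gray, cv::Rect(roi.x,      roi.y + hh, hw,             roi.height - hh), threshold);
        segment(display, gray, cv::Rect(roi.x + hw, roi.y + hh, roi.width - hw, roi.height - hh), threshold);
    }
    else
    {
        // homogeneous enough: mark the segment bounds on the source image
        cv::rectangle(display, roi, cv::Scalar(0, 255, 0));
    }
}

int main()
{
    cv::Mat src = cv::imread("input.png");                  // hypothetical input file
    CV_Assert(!src.empty());
    cv::Mat gray;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);            // single-channel version
    segment(src, gray, cv::Rect(0, 0, gray.cols, gray.rows), 15.0);
    cv::imwrite("segments.png", src);
    return 0;
}
```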
My problem is that my threshold is constant, and once I recurse into smaller cells, 15 is no longer a good indicator of whether that cell is homogeneous or not. How can I introduce consistency/normalisation into my homogeneity check?
Should I resize each cell to the same size (100x100)? Should my threshold be a formula, say 15 / (img.rows * img.cols) or 15 / MAX_HISTOGRAM_PEAK?
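
For reference, this is roughly how those two candidate normalisations could be computed, assuming OpenCV; the function names are mine, and whether either scaling is actually a good homogeneity criterion is exactly the open question:

```cpp
#include <opencv2/opencv.hpp>

// Candidate 1: scale the base threshold by the cell's pixel count.
double areaScaledThreshold(const cv::Mat& cell, double base = 15.0)
{
    return base / (static_cast<double>(cell.rows) * cell.cols);
}

// Candidate 2: scale the base threshold by the tallest histogram bin.
double peakScaledThreshold(const cv::Mat& cell, double base = 15.0)
{
    int histSize = 256;
    float range[] = {0, 256};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::Mat hist;
    cv::calcHist(&cell, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    double maxPeak = 0.0;
    cv::minMaxLoc(hist, nullptr, &maxPeak);                 // MAX_HISTOGRAM_PEAK
    return base / maxPeak;
}
```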
Maybe the method itself is not a good one: read "GrabCut: Interactive Foreground Extraction Using Iterated Graph Cuts".
The 15 is measured over a window of N pixels, so it really means 15 +/- s/sqrt(N): the accuracy of the threshold comparison is roughly s/sqrt(number of pixels).
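
One way to read that comment (my interpretation, not the commenter's code) is to treat s/sqrt(N) as the margin of error on the measured stddev, and only split a cell when it exceeds the threshold by more than that margin:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Split only if the measured stddev exceeds the threshold by more than its
// own statistical uncertainty s/sqrt(N), where N is the cell's pixel count.
bool clearlyHeterogeneous(const cv::Mat& cell, double threshold = 15.0)
{
    cv::Scalar mean, stddev;
    cv::meanStdDev(cell, mean, stddev);
    double s = stddev[0];
    double n = static_cast<double>(cell.rows) * cell.cols;
    double margin = s / std::sqrt(n);       // accuracy of the stddev estimate
    return s - margin > threshold;
}
```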