How does the parameter scaleFactor in detectMultiScale affect face detection?
I am trying out a slight variation of the example from http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;
CascadeClassifier faceDetector = new CascadeClassifier("/haarcascade_frontalface_default.xml");
Mat image = Highgui.imread(originalFile.getAbsolutePath());
MatOfRect faceDetections = new MatOfRect();
double w = (double) originalCrop.getWidth();
double h = (double) originalCrop.getHeight();
// arguments: scaleFactor = 3, minNeighbors = 1, flags, minSize, maxSize
faceDetector.detectMultiScale(image, faceDetections, 3, 1,
        Objdetect.CASCADE_DO_CANNY_PRUNING, new Size(w / 16, h / 16), new Size(w / 2, h / 2));
From the API: scaleFactor – Parameter specifying how much the image size is reduced at each image scale.
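My current reading of that sentence (which may well be wrong, hence this question) is roughly the arithmetic below: the classifier has a fixed base window, and detectMultiScale shrinks the image by scaleFactor on each pass, so the face sizes it can respond to form a geometric series. This is not OpenCV's code, just an illustration; the 24 px base window and the image side length are assumed numbers.

// Rough illustration (not OpenCV's implementation): count how many window
// sizes a cascade would effectively try for a given scaleFactor.
public class ScaleFactorIntuition {
    public static void main(String[] args) {
        double baseWindow = 24;       // assumed cascade window size (frontal-face model)
        double imageShortSide = 395;  // assumed short side of the test image

        for (double scaleFactor : new double[] {3, 2, 1.01}) {
            int scales = 0;
            for (double window = baseWindow; window <= imageShortSide; window *= scaleFactor) {
                scales++;
            }
            System.out.println("scaleFactor " + scaleFactor + " searches "
                    + scales + " window sizes");
        }
        // scaleFactor 3:    3 sizes (24, 72, 216 px); a face between those sizes can be missed
        // scaleFactor 1.01: ~280 sizes; nearly every face size is covered, but detection is slow
    }
}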
Changing the scaleFactor changes what is detected. For example, for the following image: http://graphics8.nytimes.com/images/2013/04/02/world/MOSCOW/MOSCOW-articleLarge-v2.jpg
scaleFactor of 3 --> Gorbachev's face is not detected
scaleFactor of 2 --> Gorbachev's face is detected twice (one larger rectangle containing a smaller one)
scaleFactor of 1.01 --> Gorbachev's face is detected once
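A self-contained version of the comparison looks roughly like this (the classifier and image paths are placeholders, everything else follows the snippet above):

import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;

public class CompareScaleFactors {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        CascadeClassifier faceDetector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        Mat image = Highgui.imread("MOSCOW-articleLarge-v2.jpg");
        double w = image.width(), h = image.height();

        // Same call as above, varying only scaleFactor, and printing the count.
        for (double scaleFactor : new double[] {3, 2, 1.01}) {
            MatOfRect faces = new MatOfRect();
            faceDetector.detectMultiScale(image, faces, scaleFactor, 1,
                    Objdetect.CASCADE_DO_CANNY_PRUNING,
                    new Size(w / 16, h / 16), new Size(w / 2, h / 2));
            System.out.println("scaleFactor " + scaleFactor + ": "
                    + faces.toArray().length + " detection(s)");
        }
    }
}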
How exactly does this work?