These are my assumptions:
- the createsamples tool takes the specified images and, for each of them, scales and rotates it according to the provided parameters
- the traincascade tool uses these positive samples, placing them over negative samples (which thus serve as backgrounds) in many ways (rotating, scaling, and positioning them at different spots in the backgrounds)
- the training algorithm assumes that the positive samples are the objects to be recognized, and the backgrounds are used to distinguish figures from background
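If it helps, here is roughly how I picture the pipeline mapping to commands (just a sketch based on my assumptions above; the file names and parameter values are made up for illustration):

```shell
# Hypothetical invocation -- file names and parameter values are illustrative.
# Generate distorted positive samples from one object image,
# superimposing them on the listed background (negative) images:
opencv_createsamples -img object.png -bg negatives.txt \
    -num 1000 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 \
    -vec positives.vec

# Train the cascade using the generated positives and the raw negatives:
opencv_traincascade -data cascade_dir -vec positives.vec \
    -bg negatives.txt -numPos 900 -numNeg 450 -numStages 15 \
    -w 24 -h 24
```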
These are my questions (one answer could actually cover both):
- wouldn't it be much better to provide positive samples without any background (or with an easily recognizable background, say green, as with chroma keying in cinema)?
- Don't positive samples with non-empty backgrounds confuse the training algorithm?
Thanks guys for all your work, it's great!