Positive samples on a transparent background for traincascade
These are my assumptions:
- the createsamples tool takes the specified images and, for each of them, scales and rotates it according to the provided parameters
- the traincascade tool uses these positive samples by placing them over negative samples (which are thus used as backgrounds) in many ways (rotating, scaling and putting them in different positions on the backgrounds); a typical invocation of both tools is sketched after this list
- the training algorithm assumes that the positive samples are the objects to be recognized, while the backgrounds are used to distinguish the objects from their surroundings
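For reference, a typical pair of invocations might look like this (a sketch only: the file names car.png and negatives.txt, the window size and all the count/angle values are placeholders, not from a real project):

```
# createsamples: distort one object image (rotate/scale within the given limits)
# and paste it onto the background images listed in negatives.txt
opencv_createsamples -img car.png -bg negatives.txt -vec positives.vec \
                     -num 1000 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 \
                     -w 24 -h 24

# traincascade: learn the classifier from the generated positives,
# sampling negatives from the same background list
opencv_traincascade -data cascade/ -vec positives.vec -bg negatives.txt \
                    -numPos 900 -numNeg 500 -numStages 15 -w 24 -h 24
```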
These are my questions (actually one answer could be enough for both):
- wouldn't it be much better to provide positive samples without any background (or with an easily recognizable background, say green, as in chroma keying in cinema)?
- Don't positive samples with non-null backgrounds confuse the training algorithm?
Thanx guys for all your work, it's great!
Actually, if you want to make an object model that actually performs well in hard circumstances, then just quit using the sample generation process altogether. You will be training on the most artificial training samples possible, making your model detect rubbish. It is better to simply grab enough real training samples and use those to train your detector.
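As a minimal sketch of that workflow (assuming the stock OpenCV tools; annotations.txt and negatives.txt are hypothetical file names, where annotations.txt lists real bounding boxes in the usual `image count x y w h` format), you pack the real samples into a .vec file and train on those instead:

```
# pack real, hand-annotated object crops into a vec file,
# with no artificial rotation or distortion applied
opencv_createsamples -info annotations.txt -vec positives.vec \
                     -num 1000 -w 24 -h 24

# train the cascade on the real positives
opencv_traincascade -data cascade/ -vec positives.vec -bg negatives.txt \
                    -numPos 900 -numNeg 500 -numStages 15 -w 24 -h 24
```

If your OpenCV build includes the opencv_annotation utility, it can help produce such an annotation file by drawing the bounding boxes by hand.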