I'm not an expert in classification methods, but maybe I can offer some help, and someone can later complement my answer.
1 - Haar features are often used with AdaBoost because they can be seen as weak classifiers; what AdaBoost actually does is combine several weak classifiers into a single robust one.
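Just as a sketch of that "combine weak classifiers" idea (synthetic data, with decision stumps standing in for thresholded Haar responses; older scikit-learn versions call the `estimator` argument `base_estimator`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data standing in for feature responses.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

weak = DecisionTreeClassifier(max_depth=1)        # one weak classifier (a "stump")
strong = AdaBoostClassifier(estimator=weak,       # boosting reweights and combines many of them
                            n_estimators=100, random_state=0)
strong.fit(X, y)
print("training accuracy:", strong.score(X, y))
```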
2 - I think that if you concatenate the responses of several Haar filters and use this big descriptor, you can perform classification with an SVM (or other methods). However, this is not ideal, because AdaBoost tends to be quicker than an SVM (see 3).
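A rough sketch of that pipeline, assuming you already have a list `images` of equally sized grayscale patches and binary `labels` (the scikit-image feature types here are just an example choice):

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.svm import SVC

def haar_descriptor(img, feature_types=('type-2-x', 'type-2-y')):
    ii = integral_image(img)
    # Responses of all Haar-like filters of the given types over the patch,
    # concatenated into one long descriptor.
    return haar_like_feature(ii, 0, 0, ii.shape[1], ii.shape[0],
                             feature_type=list(feature_types))

# `images` and `labels` are assumed to exist: same-size patches and 0/1 labels.
X = np.array([haar_descriptor(img) for img in images])
clf = SVC(kernel='linear').fit(X, labels)
```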
3 - I think you can find several papers that compare classifiers (google it); different classifiers perform better on different problems, so it depends on your application. For instance, an SVM tries to find a hyperplane that separates your positive and negative samples by maximizing the margin between these two classes. AdaBoost is good when you have a large number of negative samples at testing time: the weak classifiers are chained so that as soon as one of them labels a window as negative, you don't need to evaluate the remaining ones (this is known as a cascade). In general, this cascade of features is much faster than other approaches (see the sketch after the reference below). A classical application of AdaBoost in Computer Vision is the Viola & Jones method for face and pedestrian detection:
Viola, Jones - Rapid Object Detection using a Boosted Cascade of Simple Features
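To make the early-rejection idea concrete, here is a toy sketch of how a cascade evaluates a window (the `stages` list is hypothetical; in the real Viola & Jones detector each stage is itself a small boosted set of Haar features). OpenCV ships this whole pipeline pretrained, as shown in the commented lines:

```python
def cascade_predict(window, stages):
    # Cheap stages run first; a single "negative" vote ends the evaluation,
    # so the vast majority of (negative) windows never reach the later,
    # more expensive stages.
    for stage in stages:
        if not stage(window):
            return False      # early rejection
    return True               # survived every stage -> positive

# With OpenCV the pretrained Viola-Jones face cascade is used like this
# (assuming `gray_image` is a grayscale numpy array):
# import cv2
# face_cascade = cv2.CascadeClassifier(
#     cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# boxes = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
```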
I hope this helps; if anyone sees any errors in the above, please tell us.