Color histogram for object recognition
Hello,
I would like to recognise fruit using a color histogram and an SVM, but whatever I do, the result of the SVM is wrong. Could someone give me a link on using color histograms for object recognition, or just some clues?
Currently, I divide the image into 12*12 blocks and extract an RGB color histogram from each block. I use this result as the input for my SVM.
When I test the program, all the results look bad, so I think I am doing something wrong.
12x12 = 144 blocks * 256 bins * 3 channels = 110,592 values ... I guess you'll end up with quite a large feature vector this way.
Hi again :) So it is not a color histogram problem? Could you just tell me if I am doing this right, please: I divide the image into a grid; in each grid cell I use calcHist() to extract the red, green and blue histograms, and I average them to construct my histogram ==> hist(1) = histRed.at(1) + histGreen.at(1) + histBlue.at(1). I repeat this until I have processed all the grid cells, and after that I use the resulting matrix for the SVM.
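For reference, here is a minimal sketch of the pipeline described above, assuming the OpenCV 2.x C++ API (cv::calcHist on a BGR image); the grid size, bin count and the per-bin summing of the three channels follow the description, while the function and variable names are just illustrative:

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Build one feature row per image: for each cell of a gridSize x gridSize grid,
    // compute a histogram per channel with calcHist and sum the B, G and R bins.
    cv::Mat gridColorHistogram(const cv::Mat& bgrImage, int gridSize = 12, int bins = 256)
    {
        std::vector<float> features;
        int cellW = bgrImage.cols / gridSize;
        int cellH = bgrImage.rows / gridSize;
        int histSize[] = { bins };
        float range[] = { 0.0f, 256.0f };
        const float* ranges[] = { range };

        for (int gy = 0; gy < gridSize; ++gy)
            for (int gx = 0; gx < gridSize; ++gx)
            {
                cv::Mat cell = bgrImage(cv::Rect(gx * cellW, gy * cellH, cellW, cellH));
                cv::Mat sum = cv::Mat::zeros(bins, 1, CV_32F);
                for (int ch = 0; ch < 3; ++ch)          // B, G, R
                {
                    cv::Mat hist;
                    int channels[] = { ch };
                    cv::calcHist(&cell, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
                    sum += hist;                        // hist(i) = histB(i) + histG(i) + histR(i)
                }
                for (int i = 0; i < bins; ++i)
                    features.push_back(sum.at<float>(i));
            }

        // one CV_32FC1 row per image, as expected by CvSVM::train
        return cv::Mat(features, true).reshape(1, 1);
    }

Each image then contributes one row to the training matrix, and its class id goes into the corresponding row of the response matrix.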
So, how much training data is there? How many classes? What are your params?
IMHO, your basic idea does not sound too bad. (Though I'd rather go for HSV and take histograms of H and S only, and maybe use fewer than 256 bins and fewer than 12x12 patches.)
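A rough sketch of that suggestion with the same API; the 4x4 grid and the 30/32 bin counts are arbitrary example values, not anything prescribed in this thread:

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Per-image H-S histogram on a coarser grid: convert to HSV, then histogram
    // only the hue and saturation channels of each cell with a reduced bin count.
    cv::Mat gridHSHistogram(const cv::Mat& bgrImage, int gridSize = 4,
                            int hueBins = 30, int satBins = 32)
    {
        cv::Mat hsv;
        cv::cvtColor(bgrImage, hsv, CV_BGR2HSV);

        std::vector<float> features;
        int cellW = hsv.cols / gridSize;
        int cellH = hsv.rows / gridSize;
        int histSize[] = { hueBins, satBins };
        float hueRange[] = { 0.0f, 180.0f };        // hue is 0..179 for 8-bit HSV
        float satRange[] = { 0.0f, 256.0f };
        const float* ranges[] = { hueRange, satRange };
        int channels[] = { 0, 1 };                  // H and S only, V is ignored

        for (int gy = 0; gy < gridSize; ++gy)
            for (int gx = 0; gx < gridSize; ++gx)
            {
                cv::Mat cell = hsv(cv::Rect(gx * cellW, gy * cellH, cellW, cellH));
                cv::Mat hist;
                cv::calcHist(&cell, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
                cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);   // make cells comparable
                for (int h = 0; h < hueBins; ++h)
                    for (int s = 0; s < satBins; ++s)
                        features.push_back(hist.at<float>(h, s));
            }
        return cv::Mat(features, true).reshape(1, 1);
    }

This gives 4*4*30*32 = 15,360 dimensions instead of 110,592, and dropping V makes the feature less sensitive to brightness changes.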
My params:
params.kernel_type = CvSVM::RBF;
params.svm_type = CvSVM::C_SVC;
params.gamma = 0.50625;
params.C = 312.5;
Data: 60 images
Classes: 4
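For context, plugging those values into the OpenCV 2.x CvSVM API looks roughly like this; trainData and labels stand for the histogram matrix and the class ids (0..3) described above:

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    // trainData: one CV_32FC1 row per image (the histogram features),
    // labels:    one row per image holding the class id 0..3.
    void trainRbfSvm(const cv::Mat& trainData, const cv::Mat& labels, CvSVM& svm)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;
        params.kernel_type = CvSVM::RBF;
        params.gamma       = 0.50625;     // values quoted in this thread
        params.C           = 312.5;
        params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 1e-6);

        svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);
    }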
4 classes with only 60 images? Forget it :D Let's multiply that by a factor of 10 at least!
60 images per class ^^ But yeah, I think this is not enough. About the SVM parameters, I read that cross-validation is a good way to find the best params, but what are the intervals for the parameters? Because if I have to test different parameters, I should have an interval, I think... Does OpenCV's train_auto function use the best parameters?
60 images per class is a good first start; I have seen studies with much less (especially in the field of medical image processing). Try a linear kernel first and vary the C parameter over 10^-3 - 10^7. And yes, cross-validation is the way to go; train_auto can do that for you in a nice way. If that works to some degree, you can start trying other kernels like the RBF kernel.
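A minimal sketch of that advice using CvSVM::train_auto: a linear kernel, 10-fold cross-validation and a C grid that roughly covers 10^-3 to 10^7 (the multiplicative step of 10 is my own choice; only C is effectively searched for a linear kernel):

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    // Cross-validated parameter search with a linear kernel:
    // train_auto tries the C values from the grid and keeps the best one.
    void trainLinearSvmAuto(const cv::Mat& trainData, const cv::Mat& labels, CvSVM& svm)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;
        params.kernel_type = CvSVM::LINEAR;

        // C from 1e-3 up to 1e7, multiplying by 10 at each step.
        CvParamGrid Cgrid(1e-3, 1e7, 10.0);

        svm.train_auto(trainData, labels, cv::Mat(), cv::Mat(), params,
                       10,                                      // k-fold cross-validation
                       Cgrid,
                       CvSVM::get_default_grid(CvSVM::GAMMA),   // unused by the linear kernel
                       CvSVM::get_default_grid(CvSVM::P),
                       CvSVM::get_default_grid(CvSVM::NU),
                       CvSVM::get_default_grid(CvSVM::COEF),
                       CvSVM::get_default_grid(CvSVM::DEGREE));

        // After training, svm.get_params().C holds the C value selected by the search.
    }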
So I tried train_auto to get the best parameters, but I ran into another issue. When I test my program with two classes, it works fine: for example, I train the SVM with banana and apple, and when I test with a picture of a banana, it works. But when I add a class to my training data and use the one-vs-all strategy, it can't detect that it is a banana.
These are my parameters for train_auto:
Your second problem (with adding a third class) is because your SVM has no idea what the third class should be. In order to do a one-vs-all strategy, you will need to train a separate binary SVM per class, relabelling that class as positive and everything else as negative, as sketched below.
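Roughly like the following sketch; the relabelling into +1 / -1 per class is the core of the one-vs-all idea, while the function names and the linear kernel are just illustrative choices:

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>
    #include <vector>

    // One vs all: for each class, relabel its samples as +1 and everything else
    // as -1, and train one binary SVM on that relabelled data.
    std::vector<CvSVM*> trainOneVsAll(const cv::Mat& trainData, const cv::Mat& labels, // labels: CV_32FC1, values 0..numClasses-1
                                      int numClasses)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;
        params.kernel_type = CvSVM::LINEAR;

        std::vector<CvSVM*> svms;
        for (int c = 0; c < numClasses; ++c)
        {
            cv::Mat binaryLabels(labels.rows, 1, CV_32FC1);
            for (int i = 0; i < labels.rows; ++i)
                binaryLabels.at<float>(i) = (labels.at<float>(i) == c) ? 1.0f : -1.0f;

            CvSVM* svm = new CvSVM();
            svm->train_auto(trainData, binaryLabels, cv::Mat(), cv::Mat(), params);
            svms.push_back(svm);
        }
        return svms;    // the caller owns and deletes the SVMs
    }

    // At test time, evaluate every SVM and pick the class with the best score.
    int predictOneVsAll(const cv::Mat& sample, const std::vector<CvSVM*>& svms)
    {
        int best = -1;
        float bestScore = 0.0f;
        for (size_t c = 0; c < svms.size(); ++c)
        {
            // predict(sample, true) returns the decision-function value of the
            // 2-class SVM; depending on the OpenCV 2.x version its sign may be
            // inverted with respect to the +1 label, so verify it on your data.
            float score = svms[c]->predict(sample, true);
            if (best < 0 || score > bestScore) { best = (int)c; bestScore = score; }
        }
        return best;
    }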
This will result in 3 SVMs that are able to separate your data using the one-vs-all strategy.