
training GPU HOGDescriptor for multi scale detection

asked 2012-11-19 15:23:21 -0600 by msamples

Hi, I'm trying to train and use the GPU-accelerated HOG detector, and am using the current git version. Something's not working and WAY too many positives are generated... maybe someone can point out my mistake, or point me to a tutorial for the current version (i.e., one using getDescriptors rather than "compute"). Here's what I've tried so far:

1) Separated my training images into positive & negative sets, resized them to 32x32, and converted them to BGRA
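For reference, the preprocessing is roughly this (a sketch only; the helper name and file handling are placeholders, not my actual code):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Load one training sample, resize it to the HOG window size, convert to BGRA.
cv::Mat loadSample(const std::string& path)
{
    cv::Mat img = cv::imread(path);              // imread gives BGR
    cv::resize(img, img, cv::Size(32, 32));      // match win_size below
    cv::cvtColor(img, img, CV_BGR2BGRA);         // gpu::HOGDescriptor takes 8UC4 (or 8UC1)
    return img;
}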

2) Created a gpu::HOGDescriptor with a 32x32 window and otherwise default parameters

HOGDescriptor(Size win_size=Size(32, 32), Size block_size=Size(16, 16),
              Size block_stride=Size(8, 8), Size cell_size=Size(8, 8),
              int nbins=9, double win_sigma=DEFAULT_WIN_SIGMA,
              double threshold_L2hys=0.2, bool gamma_correction=true,
              int nlevels=DEFAULT_NLEVELS);
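
In code that construction is just the following, plus a quick sanity check on the descriptor length. With a 32x32 window, 16x16 blocks, 8x8 block stride, 8x8 cells and 9 bins that is 3*3 blocks * 4 cells * 9 bins = 324 floats per window:

#include <opencv2/gpu/gpu.hpp>

cv::gpu::HOGDescriptor hog(cv::Size(32, 32));    // everything else left at its defaults
size_t descr_len = hog.getDescriptorSize();      // 324 for these parameters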

3) Each image is uploaded to the GPU, and its descriptor matrix is computed with getDescriptors

hog.getDescriptors(gpu_image, cv::Size(8,8), d_descriptors);
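
In context, that call sits in a per-image loop body along these lines (a sketch; cpu_image is the 32x32 BGRA Mat from step 1, and descr_format is left at its default so training and detection use the same ordering):

cv::gpu::GpuMat gpu_image, d_descriptors;
gpu_image.upload(cpu_image);                               // 8UC4 (or 8UC1) input
hog.getDescriptors(gpu_image, cv::Size(8, 8), d_descriptors);

cv::Mat descriptors;
d_descriptors.download(descriptors);                       // CV_32F, one row per detection window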

4) Since my window size is the same as my image size, I get only one row in my descriptor Mat, which I then write out to a file.

5) These get stored in the standard svmlight / libsvm way: each line begins with +1/-1, followed by idx:val pairs for the entries of the descriptor row
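
i.e. something along these lines (writeSample is just an illustrative helper, called per sample with descriptors.row(0) and the +/-1 label):

#include <fstream>

// Append one labelled descriptor row in svmlight/libsvm format:
// "<+1|-1> 1:v1 2:v2 ... N:vN"
void writeSample(std::ofstream& ofs, int label, const cv::Mat& row)
{
    ofs << (label > 0 ? "+1" : "-1");
    for (int i = 0; i < row.cols; ++i)
        ofs << ' ' << (i + 1) << ':' << row.at<float>(0, i);   // 1-based feature indices
    ofs << '\n';
}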

6) I've trained with both svmlight & libsvm using a linear kernel, testing both classification and regression just to see if I'm screwing things up. The SVM appears to be well trained: the loss looks reasonable, and when I run it on my original data I get really nice separation of the classes. I collapse the generated support vectors into a single vector (w[i] = sum_j SV_j[i] * a[j]) and then test that vector against the training input as a sanity check. The output for true positives is generally at/above +1, and for true negatives generally below -1 (great!)
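
The collapse step, sketched (the helper name is mine, and it assumes the support vectors and their signed coefficients a_j = alpha_j * y_j have already been parsed out of the model file into cv::Mat's; the parsing itself is model-format specific):

#include <vector>

// w[i] = sum_j a_j * SV_j[i], where a_j is the signed coefficient stored per
// support vector by svmlight/libsvm for a linear kernel.
// sv is (#SV x featureLen) CV_32F, coeffs is (#SV x 1) CV_32F.
std::vector<float> collapseLinearSVM(const cv::Mat& sv, const cv::Mat& coeffs)
{
    std::vector<float> w(sv.cols, 0.f);
    for (int j = 0; j < sv.rows; ++j)
        for (int i = 0; i < sv.cols; ++i)
            w[i] += coeffs.at<float>(j, 0) * sv.at<float>(j, i);
    return w;
}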

7) I then take the SVM weight vector w and the bias/rho term from my SVM, and push them into a std::vector<float> with the bias term last. The vector is now the descriptor length + 1 (for me, 324 + 1 = 325). I use

hog.setSVMDetector(detector);
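
where the detector vector was assembled roughly like this (a sketch; rho is the bias from the model file, and the sign of the appended term depends on libsvm's label ordering, so it's worth checking against the model's own decision values):

std::vector<float> detector = w;        // 324 weights for these HOG parameters
detector.push_back((float)-rho);        // libsvm's linear decision value is w.x - rho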

8) Now I feed in images from my test sequence: upload each image to the GPU and call detectMultiScale with reasonable parameters:

// hit_threshold = 1.4, win_stride = 8x8, padding = 0, scale = 1.05, group_threshold = 8
hog.detectMultiScale(gpu_img, found, 1.4, Size(8, 8), Size(0, 0), 1.05, 8);
for (size_t i = 0; i < found.size(); i++) {
  Rect r = found[i];
  rectangle(image, r.tl(), r.br(), CV_RGB(0, 255, 0), 3);
}

The problem now is that I get an unreasonable number of false positives... the whole cv::Mat is painted with green boxes. I expect some false positives (which I'd then resolve through retraining), but it seems that every sub-window of the correct size and stride gets tagged by detectMultiScale. It looks like I'm using something incorrectly, but I'm not sure where.

The biggest unknown for me is what format OpenCV expects for the SVM data passed to gpu::HOGDescriptor::setSVMDetector.



Comments

Could you expand on steps #6 and #7 and how you did that in code (meaning how you combined the svmlight model vectors into a single vector)?

suneelb (2015-07-17 13:07:43 -0600)

1 answer


answered 2012-11-25 10:54:44 -0600 by msamples

Out of desperation, I tried training & running on grayscale images only. For reasons currently unknown to me, this worked. As far as I understand, gpu::HOGDescriptor should work fine on BGRA images, so I'm not certain why it didn't in this case, but I also haven't explored it thoroughly yet.

So, for anyone who finds this via Google in the future, here's my best practice right now:

0) try grayscale images

1) getDescriptors works fine for generating a large +/- corpus; the default descr_format works

2) train the SVM with "libsvm -s 0 -t 0" (C-SVC, linear kernel), then combine the support vectors into a single 'w' vector plus the bias/rho term

3) passing that as a vector<float> to HOGDescriptor::setSVMDetector, with the bias/rho term at the end, works fine

4) as many others have said, expect false positives, and bootstrap from the results

5) tweak the detection (hit) threshold parameter to reduce the detection count... it can get pretty high in some cases (currently as high as 3 for me), which might say something about the separation distance in the SVM, not sure though. A rough end-to-end sketch follows below.
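
Putting the recipe together on the detection side (a sketch only; frame, detector and hog are from the earlier steps, and the exact hit_threshold is something you have to tune):

cv::Mat gray;
cv::cvtColor(frame, gray, CV_BGR2GRAY);                 // 0) grayscale input

cv::gpu::GpuMat d_gray(gray);                           // uploads to the GPU
hog.setSVMDetector(detector);                           // 2)-3) collapsed w + bias

std::vector<cv::Rect> found;
hog.detectMultiScale(d_gray, found, 3.0 /* hit_threshold, see 5) */,
                     cv::Size(8, 8), cv::Size(0, 0), 1.05, 8);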


Comments

I'm reading you, bro!... I'm starting with 2.4.11... I will check your suggestions

VonTalavang (2015-03-17 22:12:01 -0600)


Stats

Asked: 2012-11-19 15:23:21 -0600

Seen: 6,871 times

Last updated: Nov 25 '12