
Compute SURF/SIFT descriptors of non-keypoints

asked 2015-05-18 12:12:02 -0600

maystroh10

I'm trying to match a list of keypoints extracted from one image against a list of keypoints extracted from another image. I tried SURF/SIFT to detect the keypoints, but the results were not as accurate as I expected. So I thought about skipping the keypoint detector altogether, using the points of the connected regions instead, and then computing SIFT/SURF descriptors at those points. However, most of the time calling the compute method empties the keypoint list.

A sample of the code is below:

int minHessian = 100;
SurfFeatureDetector detector(minHessian);
SurfDescriptorExtractor extractor;

vector<KeyPoint> keypoints_object;
Mat descriptors_object;
detector.detect(img_object, keypoints_object);
extractor.compute(img_object, keypoints_object, descriptors_object);

for (int index = 0; index < listOfObjectsExtracted.size(); index++)
{
    Mat partOfImageScene = listOfObjectsExtracted[index];
    vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
    vector<KeyPoint> keypoints_scene;
    KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);
    //detector.detect( partOfImageScene, keypoints_scene );
    if (keypoints_scene.size() > 0)
    {
        //-- Step 2: Calculate descriptors (feature vectors)
        Mat descriptors_scene;
        extractor.compute(partOfImageScene, keypoints_scene, descriptors_scene);
        // Logic of matching between descriptors_scene and descriptors_object
    }
}
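
The helper convertPointsToPoints2f is not shown; it is assumed to be something small along these lines (the name and call signature come from the code above, the body is a guess, and it assumes the same using namespace cv/std as the rest of the snippet):

    vector<Point2f> convertPointsToPoints2f(const vector<Point>& contour)
    {
        // Convert the integer contour coordinates into the float points
        // that KeyPoint::convert expects.
        vector<Point2f> points2f;
        points2f.reserve(contour.size());
        for (size_t i = 0; i < contour.size(); i++)
            points2f.push_back(Point2f((float)contour[i].x, (float)contour[i].y));
        return points2f;
    }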

So, after calling compute in Step 2, keypoints_scene becomes empty most of the time.

I know the OpenCV documentation states the following:

Note that the method can modify the keypoints vector by removing the keypoints such that a descriptor for them is not defined (usually these are the keypoints near image border). The method makes sure that the output keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count).

But is there any way to get better results, i.e. to get descriptors for all the points I've chosen? Am I violating the way keypoints are supposed to be used? Should I try a feature extractor other than SIFT/SURF to get what I want, or is this kind of problem to be expected with every feature extractor implemented in OpenCV?
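
To illustrate the documented behaviour, here is a small standalone test I would expect to reproduce it (assuming OpenCV 2.4.x with the nonfree module, as in the code above): a keypoint whose coordinates and support region do not fit inside the image passed to compute is silently dropped.

    #include <opencv2/core/core.hpp>
    #include <opencv2/nonfree/features2d.hpp>
    #include <iostream>

    int main()
    {
        // A small dummy image, standing in for partOfImageScene.
        cv::Mat smallImage(64, 64, CV_8UC1, cv::Scalar(128));

        // One keypoint whose coordinates (and size) make sense in a large
        // scene image, but not inside this 64x64 crop.
        std::vector<cv::KeyPoint> keypoints;
        keypoints.push_back(cv::KeyPoint(500.0f, 500.0f, 100.0f));

        cv::SurfDescriptorExtractor extractor;
        cv::Mat descriptors;
        extractor.compute(smallImage, keypoints, descriptors);

        // The descriptor is not defined for this point, so it is removed.
        std::cout << "keypoints left: " << keypoints.size() << std::endl; // expected: 0
        return 0;
    }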


1 answer


answered 2015-05-20 16:38:25 -0600

maystroh10

I figured it out. The problem was in the way I was computing the descriptors: as you can see in the code above, I was computing them on a small part of the image rather than on the image itself. When I passed the image itself instead of partOfImageScene, i.e. something like extractor.compute( img_scene, keypoints_scene, descriptors_scene );, it worked perfectly and I didn't lose any keypoints from my list. Presumably the contour points are expressed in full-image coordinates, so against the small cropped Mat they fell outside the image (or too close to its border) and were removed.
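
In other words, a minimal sketch of the corrected loop (reusing the names from the question, with img_scene being the full scene image that realContoursOfRects was extracted from) looks like this:

    for (int index = 0; index < listOfObjectsExtracted.size(); index++)
    {
        vector<Point2f> listOfContourPoints = convertPointsToPoints2f(realContoursOfRects[index]);
        vector<KeyPoint> keypoints_scene;
        KeyPoint::convert(listOfContourPoints, keypoints_scene, 100, 1000);

        if (keypoints_scene.size() > 0)
        {
            Mat descriptors_scene;
            // Compute on the full scene image, so the contour coordinates stay valid.
            extractor.compute(img_scene, keypoints_scene, descriptors_scene);
            // Match descriptors_scene against descriptors_object as before.
        }
    }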

