Ask Your Question
4

ORB pyramid not working

asked 2013-10-07 05:19:41 -0600

SimonH

updated 2014-01-17 03:30:52 -0600

I am using the following detector and descriptor for the features I want to track:

Ptr<FeatureDetector> detector = Ptr<FeatureDetector>(
    new ORB(500,               // nfeatures
            2,                 // scaleFactor
            9,                 // nlevels
            31,                // edgeThreshold
            0,                 // firstLevel
            2,                 // WTA_K
            ORB::HARRIS_SCORE, // scoreType
            31));              // patchSize
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create("ORB");

So I have a 9-level pyramid with a scale factor of 2, meaning every smaller pyramid image is 50% the size of the previous one, right? And this 9 times. (I also tried other values, e.g. the default scale factor of 1.2.)

Still, if I take a marker image that I want to detect in the current frame, and the marker appears in that frame at only 50% of the original marker's size, there are no matches anymore; matches are only found for scale changes of roughly 0.8x to 1.2x relative to the original marker.

Now, if I manually scale the marker image down to 50% of its size, it is found again. So a manual pyramid approach works, but not the one done directly by the ORB detector. Am I doing something wrong here?

Here are some images I took to explain the problem:

Marker and scene have same size, everything is fine:


Different scale of marker and current scene, no matches anymore:


Two images where I resized the marker image manually; then it is found again, as expected:



2 answers

1

answered 2014-01-16 07:58:07 -0600

jensenb

It is difficult to say why you are not getting a "match" of your template (marker) in your image without knowing more about your feature matching strategy. How are you matching features? How are you extracting ORB key points in the template image? Are you also performing multi-scale key point detection there?

To better debug your problem, you should first draw the detected key points (scaled to the correct octave). If you are not detecting key points at equivalent locations in both the template and the target image, you will of course not get meaningful match results. Also, 9 octaves is a bit excessive, as the image is only about 0.4% of its original (linear) size at the 9th octave; at that size there likely isn't much meaningful information left in the image. Remember that in perspective projection an object's size is inversely related to its distance from the camera. Because of this non-linear relationship, an object's size in the image constantly decreases as it moves away from the camera, but the rate of change decreases the further away from the camera the object is located. Likely 3 or 4 octaves will be enough for your purposes.

0

answered 2014-01-17 05:19:47 -0600

Anas

I actually experience the same problem with matching ORB on different scales.

@jensenb: even if 9 levels were used, it might take more time to process, but in the end the features on the 3rd or 4th level should still match, shouldn't they?

I cross-match each image with the other using KNN-2 brute-force matching with Hamming distance, then I filter the matches based on the ratio test between the first and second neighbor, with a ratio of 0.6. And I could never find a match at such a scale change.

If you have managed to get this working before, can you summarize the steps, mentioning the detector parameters and the matching procedure?


Comments

If there is still meaningful image information at the 3rd or 4th octave, then yes, they should be able to match the same as at the 1st octave. My point was that using so many octaves only makes sense for very large images (the image size at octave 9 is over 100 times smaller than the original), where you also expect the object's depth relative to the camera to vary greatly: to appear 100 times smaller, the object would have to be 100 times farther away.

jensenb ( 2014-01-17 06:37:51 -0600 )

I have used multiscale ORB successfully for pose estimation in a couple of my applications. Without seeing your images it is hard to speculate what the error might be. The first thing to check is that corresponding key point locations were detected in both images. Make sure that the multi-scale detection and descriptor extraction was run on both images. Also, you should turn off match filtering for now; maybe your ratio threshold of 0.6 is too strict.

jensenb ( 2014-01-17 06:41:43 -0600 )


Stats

Asked: 2013-10-07 05:19:41 -0600

Seen: 1,316 times

Last updated: Jan 17 '14