
Use SIFT detector with SURF algorithm for image matching

asked 2014-07-28 18:25:01 -0600 by Nabeel

Hi,

I am trying to match a few training images against query images using features.

I have used the SIFT detector to find keypoints in the images. Then I passed those keypoints to the SURF algorithm to compute descriptors. But the matching results are very poor.

On the other hand, if I extract both the keypoints and the descriptors using SURF, image matching works fine.

Can anyone tell me what I am doing wrong? Any help will be really appreciated.
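For reference, a minimal sketch of the mixed pipeline described above (SIFT keypoints, SURF descriptors), assuming OpenCV 2.4.x built with the nonfree module; the image paths and matcher settings are placeholders:

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SIFT / SURF live in the nonfree module

int main()
{
    cv::Mat train = cv::imread("train.png", cv::IMREAD_GRAYSCALE);  // placeholder paths
    cv::Mat query = cv::imread("query.png", cv::IMREAD_GRAYSCALE);

    // Mixed pipeline: SIFT keypoints, SURF descriptors (the case that matches poorly)
    cv::SiftFeatureDetector detector;
    cv::SurfDescriptorExtractor extractor;

    std::vector<cv::KeyPoint> kpTrain, kpQuery;
    detector.detect(train, kpTrain);
    detector.detect(query, kpQuery);

    cv::Mat descTrain, descQuery;
    extractor.compute(train, kpTrain, descTrain);   // SURF reinterprets SIFT's keypoint fields
    extractor.compute(query, kpQuery, descQuery);

    // Both SIFT and SURF descriptors are float vectors, so L2 brute-force matching applies
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(descQuery, descTrain, matches);
    return 0;
}
```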


2 answers


answered 2014-07-29 04:03:20 -0600 by JohannesZ

First of all, SURF is not better than SIFT; most papers and evaluations confirm this. Second, the incompatibility comes from the fact that the two algorithms use different strategies for computing the orientation and scale of the interest points, a.k.a. keypoints. You can try to find methods to "convert" these attributes, but using them on the fly will not give you good results, as you have already seen.

By the way, SURF increases the size of its box filters to approximate the scale-space approach that SIFT achieves with Gaussian convolution. The integral images are mainly used for performance reasons.
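To make the attribute mismatch visible, here is a small sketch (again assuming OpenCV 2.4.x with the nonfree module; image path and Hessian threshold are placeholders) that runs both detectors on the same image and prints the keypoint fields a descriptor extractor later relies on:

```cpp
#include <cstdio>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SIFT / SURF (nonfree module)

int main()
{
    cv::Mat img = cv::imread("some_image.png", cv::IMREAD_GRAYSCALE);  // placeholder

    cv::SiftFeatureDetector sift;
    cv::SurfFeatureDetector surf(400.0);     // Hessian threshold, arbitrary here

    std::vector<cv::KeyPoint> kpSift, kpSurf;
    sift.detect(img, kpSift);
    surf.detect(img, kpSurf);

    // SIFT stores KeyPoint::octave as a packed bitfield (octave | layer << 8 | ...),
    // SURF stores a plain octave index; size and angle also follow different conventions.
    for (size_t i = 0; i < 3 && i < kpSift.size(); ++i)
        std::printf("SIFT kp: size=%6.2f angle=%6.1f octave=0x%08x\n",
                    kpSift[i].size, kpSift[i].angle, (unsigned)kpSift[i].octave);
    for (size_t i = 0; i < 3 && i < kpSurf.size(); ++i)
        std::printf("SURF kp: size=%6.2f angle=%6.1f octave=%d\n",
                    kpSurf[i].size, kpSurf[i].angle, kpSurf[i].octave);
    return 0;
}
```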


Comments

JohannesZ is right (in some sense at least). The incompatibility is due to the different internal handling of keypoints: SIFT packs octave, level and scale into the keypoint differently than other keypoint descriptors do. A more complete description can be found in the answer to the following bug report: http://code.opencv.org/issues/2987#note-4 (another related issue: http://code.opencv.org/issues/3287#note-4 ). Thus, in the current implementation it is not advisable to use the SIFT keypoint detector in combination with another feature descriptor. I'd appreciate it if someone would find the time to fix that and make a pull request.

Guanta ( 2014-07-29 08:46:51 -0600 )
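For illustration, a sketch of the kind of unpacking the linked issue describes, assuming the packing used by OpenCV 2.4's SIFT (octave index in the low byte, layer in the next byte, negative octaves sign-extended). The helper names are hypothetical, and rewriting these fields alone is not guaranteed to make a SURF-style extractor produce meaningful descriptors; whether it is sufficient is exactly what the issue discusses:

```cpp
#include <vector>
#include <opencv2/features2d/features2d.hpp>

// Decode the bitfield that OpenCV's SIFT stores in KeyPoint::octave
// (assumption based on the issue linked above).
static void unpackSiftOctave(const cv::KeyPoint& kpt,
                             int& octave, int& layer, float& scale)
{
    octave = kpt.octave & 255;                          // low byte: octave index
    layer  = (kpt.octave >> 8) & 255;                   // next byte: layer within the octave
    octave = octave < 128 ? octave : (-128 | octave);   // sign-extend negative octaves
    scale  = octave >= 0 ? 1.0f / (1 << octave)         // e.g. octave 1 -> 0.5
                         : (float)(1 << -octave);       // octave -1 -> 2.0
}

// Hypothetical helper: rewrite keypoints in place so that octave holds a plain
// index, closer to what SURF's own keypoints carry.
static void normalizeKeypoints(std::vector<cv::KeyPoint>& kps)
{
    for (size_t i = 0; i < kps.size(); ++i)
    {
        int octave, layer;
        float scale;
        unpackSiftOctave(kps[i], octave, layer, scale);
        kps[i].octave = octave;
    }
}
```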

answered 2014-07-28 18:37:15 -0600 by FLY, updated 2014-07-28 19:04:00 -0600

SURF is better than SIFT, but it depends on the images you are using. If your query images are nearly the same, or few in number, then you are doing fine; but if they are different and numerous, you may need to change algorithms. Your observation that SIFT keypoints with SURF descriptors do not give good results is true: taking keypoints from an image with one algorithm and then computing descriptors for those same keypoints with another algorithm is, to me, invalid. But that is not 100% right either, because in programming there is always room to work around many impossibilities with the right logic :)

EDIT:

SIFT computes an image pyramid by convolving the image several times with large Gaussian kernels, while SURF accomplishes an approximation of that using integral images.
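To make the integral-image point concrete, here is a small illustration (not SURF itself; the image path is a placeholder and the box positions assume the image is large enough): once the integral image is built, the sum over a box of any size costs only four lookups, which is why SURF can simply grow its box filters instead of repeatedly convolving with ever larger Gaussian kernels.

```cpp
#include <cstdio>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sum of pixels in the rectangle [x, x+w) x [y, y+h) using the integral image.
static double boxSum(const cv::Mat& ii, int x, int y, int w, int h)
{
    return ii.at<double>(y + h, x + w)
         - ii.at<double>(y,     x + w)
         - ii.at<double>(y + h, x)
         + ii.at<double>(y,     x);
}

int main()
{
    cv::Mat img = cv::imread("some_image.png", cv::IMREAD_GRAYSCALE);  // placeholder

    cv::Mat ii;
    cv::integral(img, ii, CV_64F);   // ii has size (rows+1) x (cols+1)

    // A 9x9 and a 27x27 box sum each cost the same four lookups.
    std::printf("9x9 box sum:   %.0f\n", boxSum(ii, 10, 10, 9, 9));
    std::printf("27x27 box sum: %.0f\n", boxSum(ii, 10, 10, 27, 27));
    return 0;
}
```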


Comments

Both SIFT and SURF require an orientation to compute the descriptor. The SIFT algorithm also provides that information... so what is the problem then?

Nabeel ( 2014-07-28 18:47:46 -0600 )

There must be something different between the two algorithms; how can you expect the same keypoints and descriptors from two different algorithms? Well, I haven't done much work on them, though I did two projects using them, but you should wait for an expert to answer as well!

FLY ( 2014-07-28 18:59:27 -0600 )
