What is the most efficient SIFT implementation?
So far, I know three implementations of the SIFT detector/descriptor:
- OpenCV
- VLFeat
- OpenSIFT
What is the most efficient implementation? What is the most accurate? If you know any better implementation, please post it.
Since all these implementations are single-threaded, I want to parallelize them (probably using OpenMP).
Well, I would suggest testing each of them with a sample image set and measuring their performance and precision. You know best which environment you will run the algorithm in (compiler optimisations, etc.) and how you will actually integrate it. You could probably just use the samples shipped with each library. In my case, for example, many people suggested VLFeat, but when I used it for my intended application (Android augmented reality) it turned out that OpenCV worked far better, because it is quite easy to combine other, faster detectors, e.g. Harris or FAST, with the SIFT descriptor.
Thanks for your answer. Actually I'm a little confused, since I thought Harris corner detection was the standard detection algorithm for SIFT. In this benchmark: http://www.vlfeat.org/benchmarks/over... they talk about "Harris Laplace, Hessian Laplace and SIFT", and I don't understand the difference or how to do this in VLFeat. Can you please help me?
I'll have to be honest with you: I'm not that into VLFeat anymore, sorry! SIFT's own detection works as follows: it computes a Difference of Gaussians (DoG) and locates extrema in it. If a keypoint is found across several scales of the scale space, its size is larger than the others, etc. But in OpenCV you can take already-detected keypoints, assign their orientation, and compute the descriptors for them, so you could use, for example, Hessian or FAST for detection. The problem with DoG detection is that it takes considerably more time than Hessian, Harris, or FAST, and (without proper outlier rejection) it is not much more precise than the others.