Your time comparison is very interesting! However, the time comparison (like those in the papers) doesn't really reflect the algorithmic complexity. For example, it might simply be that OpenCV's version of SIFT is better optimized than the version the authors of FREAK use.
Also, please note that most of the power of binary descriptors does not come from their computation time (of course this is a factor too, especially in time-critical applications like tracking). Instead, the biggest advantage comes from their binary nature. Consequently:

- they are typically very small, resulting in a small memory footprint, and
- they are typically very fast to match (Hamming distance; see the short sketch below).

The FREAK paper also states (caption of Table 1): "The computation times correspond to the description and matching of all keypoints." Thus, not only the computation of the descriptors but also the matching process is included in those timings.
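To illustrate the matching part, here is a minimal sketch of Hamming-distance matching with OpenCV's brute-force matcher, assuming you already have two binary descriptor matrices (e.g. computed by FREAK); the function and variable names are just placeholders:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// desc1, desc2: binary descriptors, one CV_8U row per keypoint (e.g. from FREAK).
// Brute-force matching with the Hamming distance, which per descriptor pair
// boils down to an XOR followed by a population count.
std::vector<cv::DMatch> matchBinary(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    return matches;
}
```

For SIFT's floating-point descriptors you would use NORM_L2 instead, which needs a full Euclidean distance per pair and is therefore noticeably more expensive than the XOR/popcount of the Hamming case.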