The right parameter settings to test feature detectors
The results of evaluating feature detectors depend on the values we give to their parameters, such as the threshold, the number of octaves, and so on. How can I know which values are best? Some articles say they use the values the detectors' authors used when evaluating them, but what are these values for FAST, SURF and BRISK?
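For context, these are the kinds of parameters I mean, as they are exposed in OpenCV's Python API. This is only a sketch: the values shown are OpenCV's documented defaults at the time of writing, not necessarily the ones the original authors used, and `graffiti.png` is just a placeholder image (SURF needs the opencv-contrib build).

```python
import cv2  # SURF lives in cv2.xfeatures2d, which requires opencv-contrib-python

# FAST: corner threshold and non-maximum suppression
fast = cv2.FastFeatureDetector_create(threshold=10, nonmaxSuppression=True)

# SURF: Hessian threshold plus scale-space size (octaves, layers per octave)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=100, nOctaves=4, nOctaveLayers=3)

# BRISK: AGAST detection threshold and number of octaves
brisk = cv2.BRISK_create(thresh=30, octaves=3)

img = cv2.imread("graffiti.png", cv2.IMREAD_GRAYSCALE)  # placeholder test image
for name, det in [("FAST", fast), ("SURF", surf), ("BRISK", brisk)]:
    keypoints = det.detect(img, None)
    print(name, len(keypoints), "keypoints")
```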
Simply put, you need a ground-truth set to compare the outcome of your matches against. There are several ways to determine optimal parameters for a feature detector, such as ROC curves or precision-recall curves. I think you need to dig deeper into the evaluation frameworks used for machine learning techniques.
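To make the precision-recall idea concrete, here is a minimal sketch of one common setup: two images related by a known ground-truth homography H (as in the Oxford/Mikolajczyk dataset), where a match counts as correct if the projected keypoint lands within a pixel tolerance of its matched keypoint, and the curve is traced by sweeping the descriptor distance. The function name and the simplified recall definition (relative to all correct matches found) are my own choices, not a standard API.

```python
import numpy as np
import cv2

def precision_recall(kp1, kp2, matches, H, tol=3.0):
    """Label each match correct/incorrect against a ground-truth homography H
    (correct if the projected point is within `tol` pixels of the matched point),
    then sweep the descriptor distance to trace a precision-recall curve."""
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
    correct = np.linalg.norm(proj - pts2, axis=1) < tol
    dists = np.float32([m.distance for m in matches])

    order = np.argsort(dists)            # accept matches from best to worst distance
    tp = np.cumsum(correct[order])
    fp = np.cumsum(~correct[order])
    precision = tp / (tp + fp)
    recall = tp / max(correct.sum(), 1)  # simplified: relative to correct matches found
    return precision, recall

# Usage sketch (H comes with the dataset as a ground-truth homography):
# det = cv2.BRISK_create(thresh=30)
# kp1, des1 = det.detectAndCompute(img1, None)
# kp2, des2 = det.detectAndCompute(img2, None)
# matches = cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2)
# p, r = precision_recall(kp1, kp2, matches, H)
```

You can then compare the curves produced by different parameter values (e.g. different FAST thresholds) and pick the setting that gives the best trade-off for your application.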
A simple Google search gave me this very interesting publication on the topic!
Thanks StevenPuttermans for your answer. Speaking of the ground-truth set, I come across this term a lot in articles; what does it mean exactly?
For example, if you are detecting an object using feature descriptors, you assign a ground truth (the true location) to the object in the image. When you then perform your detection, you can check whether the found detection is actually a truthful one. Ground truth means what is really there, and it is what the output of any algorithm has to be compared against.
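A small sketch of that comparison for the object-location case: the ground-truth box is hand-labelled, the detection comes from your algorithm, and the two are compared with intersection-over-union (IoU). The box values and the 0.5 acceptance threshold below are hypothetical, just to illustrate the idea.

```python
def iou(boxA, boxB):
    """Intersection-over-union of two boxes given as (x, y, w, h).
    A detection is typically counted as a true positive when its IoU
    with the ground-truth box exceeds some threshold, e.g. 0.5."""
    ax, ay, aw, ah = boxA
    bx, by, bw, bh = boxB
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

ground_truth = (120, 80, 200, 150)   # hand-labelled object location (hypothetical)
detection    = (130, 90, 190, 140)   # what the feature-based detector returned
print("correct detection" if iou(ground_truth, detection) > 0.5 else "false positive")
```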