When detecting keypoints (e.g. with BRISK, ORB, etc.), I get coordinates with subpixel accuracy (ex.: pt.x = 110.645, pt.y = 285.432). While I am familiar with the concept of subpixels, I wonder why the location of the keypoint is a float rather than an int (rounded up/down), such as pt.x = 111 and pt.y = 285. OK, I could simply cast the float to an int, but that doesn't answer the why.
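Here is a minimal reproduction of what I observe (OpenCV 3 C++ API; `scene.png` is just a placeholder for my test image):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::BRISK> brisk = cv::BRISK::create();
    std::vector<cv::KeyPoint> keypoints;
    brisk->detect(img, keypoints);

    for (const cv::KeyPoint& kp : keypoints)
    {
        // kp.pt is a cv::Point2f, so x and y are floats, e.g. 110.645 / 285.432.
        // cvRound gives me the nearest integer pixel, but that only discards
        // the fractional part -- it doesn't explain why it is there.
        std::printf("float: (%f, %f)  rounded: (%d, %d)\n",
                    kp.pt.x, kp.pt.y, cvRound(kp.pt.x), cvRound(kp.pt.y));
    }
    return 0;
}
```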
I mean, when the detection algorithm searches for a keypoint, it first selects a pixel, then applies various tests to determine whether that pixel and the patch around it really constitute a keypoint according to the method's criterion. I know it also computes the orientation of the keypoint, which may well be a float in itself. But even after looking at the code and at the AGAST and BRISK papers, I humbly don't understand what the point of a subpixel location for the keypoint is.
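To make the contrast concrete, here is a sketch of the comparison I have in mind: if I understand the API correctly, a single-scale detector like FAST should report whole-number coordinates (merely stored as floats), whereas the BRISK coordinates I get back are genuinely fractional (again, `scene.png` is a placeholder):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    std::vector<cv::KeyPoint> fastKps, briskKps;
    cv::FastFeatureDetector::create()->detect(img, fastKps); // single scale
    cv::BRISK::create()->detect(img, briskKps);              // scale pyramid

    // Compare the first keypoint of each detector, if any were found.
    if (!fastKps.empty())
        std::printf("FAST : (%f, %f)\n", fastKps[0].pt.x, fastKps[0].pt.y);
    if (!briskKps.empty())
        std::printf("BRISK: (%f, %f)\n", briskKps[0].pt.x, briskKps[0].pt.y);
    return 0;
}
```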
But since this is the way it is implemented in OpenCV (3 in my case, but I guess it is the same in 2.4.x), I assume there is a good reason! I might just have misread portions of the papers or missed something in the comments of the code...
Thanks!