SURF and SIFT detect different features depending on how the application is run

asked 2013-08-09 14:47:26 -0600 by HeywoodFloyd, updated 2013-08-10 05:12:48 -0600 by SR

I'm seeing strange behavior from cv::SurfFeatureDetector::detect and cv::SiftFeatureDetector::detect. I've got a camera pointing at a monitor, and I need to know how the camera image coordinates correspond to the screen coordinates. My approach is to put an image on the screen and grab a camera shot. I then detect features in both the screen image and the camera image, find matching feature pairs, and use their coordinates to estimate a homography matrix.
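
Roughly, the pipeline looks like this (a minimal sketch with the OpenCV 2.4 C++ API; the threshold and RANSAC values here are illustrative, not necessarily the ones I'm running, and the SIFT version is analogous):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // SURF/SIFT live here in 2.4.x
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

cv::Mat screenToCameraHomography(const cv::Mat& screenImg,
                                 const cv::Mat& cameraImg)
{
    // Detect and describe features in both images.
    cv::SurfFeatureDetector detector(400.0);  // Hessian threshold, illustrative
    std::vector<cv::KeyPoint> kpScreen, kpCamera;
    detector.detect(screenImg, kpScreen);
    detector.detect(cameraImg, kpCamera);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat descScreen, descCamera;
    extractor.compute(screenImg, kpScreen, descScreen);
    extractor.compute(cameraImg, kpCamera, descCamera);

    // Brute-force matching on the float descriptors.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(descScreen, descCamera, matches);

    // Collect matched point pairs and let RANSAC reject outliers.
    std::vector<cv::Point2f> ptsScreen, ptsCamera;
    for (size_t i = 0; i < matches.size(); ++i) {
        ptsScreen.push_back(kpScreen[matches[i].queryIdx].pt);
        ptsCamera.push_back(kpCamera[matches[i].trainIdx].pt);
    }
    return cv::findHomography(ptsScreen, ptsCamera, CV_RANSAC, 3.0);
}
```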

The problem occurs in feature detection. Depending on how the application is run, I get different sets of features for the camera image. Visually, the camera images are indistinguishable, but I get different feature sets whether I run the app by double-clicking it, launch it from the debugger, or run it from a different account. This is all on the same computer with the same camera/monitor, running the exact same code; not a copy.

I've tried relaxing/tightening various parameters of SIFT and SURF, but the results always differ depending on how the application is run.

I'm using OpenCV 2.4.5, building with Visual Studio 2010, and running on Windows 7 Pro 64-bit, although I'm building the application as 32-bit.

Has anybody else seen this behavior?


1 answer


answered 2013-08-10 05:11:10 -0600 by SR

That's normal. Use a still image to test whether your application works correctly.

Camera frames differ by image noise. The frames may be visually identical, but looking closely at the pixels will show that they are not. Local feature detections (although they are designed for stability and robustness) do change due to this noise. If you need stable detections you can raise the controlling thresholds, but you will end up with only a handful of features, which is insufficient for most applications.
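
For example (the exact threshold values are up to you; these are only illustrative):

```cpp
// Higher thresholds -> fewer but more repeatable detections.
cv::SurfFeatureDetector strictSurf(1500.0);  // Hessian threshold; larger keeps only strong blobs
cv::SiftFeatureDetector strictSift(0, 3, 0.08, 10.0, 1.6);  // contrastThreshold raised from the 0.04 default
```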

If you are working with video, you can track features over time and remove those that vary in order to determine the stable features.
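
A crude sketch of that idea (the matching radius is an arbitrary choice): detect keypoints in each frame of a window, then keep only those that re-appear near the same position in every frame.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <cmath>
#include <vector>

// Keep keypoints from the first frame that re-appear within `radius`
// pixels in every later frame of the window.
std::vector<cv::KeyPoint> stableKeypoints(
    const std::vector<std::vector<cv::KeyPoint> >& perFrame, float radius)
{
    std::vector<cv::KeyPoint> stable;
    if (perFrame.empty()) return stable;

    for (size_t i = 0; i < perFrame[0].size(); ++i) {
        const cv::Point2f p = perFrame[0][i].pt;
        bool everywhere = true;
        for (size_t f = 1; f < perFrame.size() && everywhere; ++f) {
            bool found = false;
            for (size_t j = 0; j < perFrame[f].size() && !found; ++j) {
                float dx = perFrame[f][j].pt.x - p.x;
                float dy = perFrame[f][j].pt.y - p.y;
                found = std::sqrt(dx * dx + dy * dy) <= radius;
            }
            everywhere = found;
        }
        if (everywhere) stable.push_back(perFrame[0][i]);
    }
    return stable;
}
```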


Comments

Instead of tracking features over time, I tried averaging several camera images together to reduce the effects of transient noise, and then blurring the result so that only large features would be left. Same results: it will compute a valid homography matrix only when the application is run a certain way for a single account.
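
Roughly what I did (the frame count and kernel size here are arbitrary):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::VideoCapture cap(0);
cv::Mat frame, gray, avg;
cap >> frame;
cv::Mat acc = cv::Mat::zeros(frame.size(), CV_32F);
const int N = 10;                        // arbitrary number of frames
for (int i = 0; i < N; ++i) {
    cap >> frame;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    cv::accumulate(gray, acc);           // running sum in float
}
acc /= N;                                // mean image: transient noise averages out
acc.convertTo(avg, CV_8U);
cv::GaussianBlur(avg, avg, cv::Size(9, 9), 0);  // keep only large structures
```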

HeywoodFloyd (2013-08-13 09:52:48 -0600)

Sort of figured out what the problem was: for some reason, the camera image was sometimes flipped when the program ran. I just put in an option to re-flip the image.
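
The re-flip itself is just one call (assuming a vertical flip; other flipCode values handle a horizontal or combined mirror):

```cpp
cv::flip(cameraImg, cameraImg, 0);  // 0 = flip around the x-axis (vertical flip)
```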

However, this brings up two other questions (which, fortunately, I don't really have to solve): 1) Why does the camera image flip depending on how the program is called? (I'm using OpenCV calls to read the camera, so it may be an OpenCV problem, or it might be a camera driver problem.) 2) Why aren't similar features found for a flipped image? Shouldn't SURF (or SIFT or FREAK or whatever) find matching features regardless of how the image is oriented? Isn't that one of the uses?

HeywoodFloyd (2013-08-13 15:21:43 -0600)

Camera frames differ by image noise.

SR (2013-08-24 03:54:42 -0600)
