
How to calculate the scale factor between two images (via SIFT matching)

asked 2019-06-03 20:09:12 -0600

JimZhou001

Since SIFT is scale-invariant, can we get the scale factor by SIFT matching? And how can we do that?

Thank you.


3 answers


answered 2019-06-04 08:47:07 -0600

kbarni

updated 2019-06-04 08:47:51 -0600

After matching the keypoints, calculate the homography matrix. There are lots of tutorials on feature matching and homography estimation, for example here and here.

Now, in the homography matrix, the X and Y scale factors correspond to the values at positions [0,0] and [1,1] of the matrix. Note that this reading is only valid when the rotation and perspective components are negligible; otherwise these entries mix scale with rotation.

The translation is at positions [0,2] and [1,2] (row, column), and the skew factors are at [0,1] and [1,0].
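For instance, a minimal Python sketch of this pipeline (the filenames and thresholds are placeholders; `cv2.SIFT_create()` assumes OpenCV 4.4 or newer, older builds expose it as `cv2.xfeatures2d.SIFT_create()`):

```python
import cv2
import numpy as np

# Filenames are placeholders
img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test to reject ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in knn
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC discards matches inconsistent with a single homography
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Valid as scale only when rotation/perspective are negligible
print("scale x:", H[0, 0], "scale y:", H[1, 1])
print("translation:", H[0, 2], H[1, 2])
```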


Comments

Thank you very much. But what about cases with a lot of false matches? My data contains a lot of repetition, so I can't identify the correspondences exactly, which is why I asked this question. Even though there is this ambiguity, the scale factor should not change, so I want to recover it by feature matching.

JimZhou001 (2019-06-04 20:42:59 -0600)

The homography estimation (with RANSAC) will keep the subset of SIFT matches that best agree on a single transformation and discard the outliers. If that doesn't work, you will have to find another method to match the images.

Anyway, your problem is not clear; maybe if you add an example image, it will be easier to answer.

kbarni (2019-06-05 08:45:43 -0600)

answered 2019-06-05 13:03:55 -0600

Eduardo

updated 2019-06-05 13:06:27 -0600

See the book Computer Vision: Algorithms and Applications by Richard Szeliski; it has a chapter about 2D transformations:

[Image: table of 2D planar transformations (translation, Euclidean, similarity, affine, projective) from Szeliski's book]


If you can assume that the two objects follow a similarity transformation, you have:

$$\mathbf{x}' = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \end{bmatrix}\bar{\mathbf{x}}$$

You can then use findHomography() in a robust scheme (e.g. RANSAC) to discard the outliers. The estimated H should correspond to this transformation, with the last row close to [0, 0, 1].
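If the similarity model applies, OpenCV's estimateAffinePartial2D() fits exactly this 4-DoF model (scale, rotation, translation) with RANSAC, so the scale can be read off directly. A minimal sketch, assuming src_pts and dst_pts come from a prior matching step; similarity_scale is a hypothetical helper:

```python
import cv2
import numpy as np

def similarity_scale(src_pts, dst_pts):
    """src_pts, dst_pts: matched coordinates, shape (N, 1, 2), float32.
    Fits a 4-DoF similarity with RANSAC, returns (scale, angle_deg)."""
    M, inliers = cv2.estimateAffinePartial2D(src_pts, dst_pts,
                                             method=cv2.RANSAC)
    # M = [[s*cos(t), -s*sin(t), tx],
    #      [s*sin(t),  s*cos(t), ty]],
    # so the scale is the norm of the first column.
    scale = np.hypot(M[0, 0], M[1, 0])
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return scale, angle
```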

Or, more generally, an affine transformation, which needs some specific operations to extract the parameters:

$$\mathbf{x}' = \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{bmatrix}\bar{\mathbf{x}}$$

In this case, you should be able to decompose the affine transformation into scale, rotation, and shear, e.g. here or here; see the sketch below.
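A hedged sketch of one such decomposition (decompose_affine is a hypothetical helper; it assumes the factorization A = R(theta) @ [[sx, m], [0, sy]], i.e. rotation times shear and scale):

```python
import numpy as np

def decompose_affine(A):
    """Decompose the 2x2 linear part A of an affine transform as
    A = R(theta) @ [[sx, m], [0, sy]]."""
    sx = np.hypot(A[0, 0], A[1, 0])            # scale along x
    theta = np.arctan2(A[1, 0], A[0, 0])       # rotation angle
    m = A[0, 1] * np.cos(theta) + A[1, 1] * np.sin(theta)   # shear term
    sy = A[1, 1] * np.cos(theta) - A[0, 1] * np.sin(theta)  # scale along y
    return sx, sy, m / sy, np.degrees(theta)
```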


Most likely you are dealing with pictures of an object taken from different viewpoints. In this case, the induced transformation in the image plane is a perspective transformation, or homography:

$$\bar{\mathbf{x}}' \sim \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}\bar{\mathbf{x}}$$

For some specific camera motions, the homography reduces to one of the simpler transformations above, but in general it does not. Also, the homography model only holds if:

  • the considered object is planar
  • or the camera motion is purely a rotation

See this tutorial: Basic concepts of the homography explained with code for additional information.
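A general homography has no single global scale factor, but as a hedged option (not from the tutorial; local_scale is a hypothetical helper) you can compute the local scale it induces at a given point from the Jacobian of the projective mapping:

```python
import numpy as np

def local_scale(H, x, y):
    """Local (linear) scale factor induced by homography H at (x, y):
    square root of the Jacobian determinant of the projective map."""
    nu = H[0, 0] * x + H[0, 1] * y + H[0, 2]   # numerator of u
    nv = H[1, 0] * x + H[1, 1] * y + H[1, 2]   # numerator of v
    d = H[2, 0] * x + H[2, 1] * y + H[2, 2]    # shared denominator
    J = np.array([
        [H[0, 0] / d - nu * H[2, 0] / d**2, H[0, 1] / d - nu * H[2, 1] / d**2],
        [H[1, 0] / d - nv * H[2, 0] / d**2, H[1, 1] / d - nv * H[2, 1] / d**2],
    ])
    return np.sqrt(abs(np.linalg.det(J)))
```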


answered 2019-06-04 08:13:49 -0600

Kafan

Why not? An easy approach would be to take the keypoints from both images and section them into segments, to prevent false matching. You can start with a 2x2 grid; maybe 3x3 or 3x4, depending on how big your image is and its expected aspect ratio. Then match the keypoint descriptors of corresponding sections with each other, and play with the strictness of the matches. Once you have the matches, the scale is easy to calculate in both the X and Y directions by comparing the distances between matched keypoints in the two images; see the sketch below.
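A rough sketch of this idea (match_by_sections is a hypothetical helper; the grid size and ratio threshold are assumptions to tune for your data):

```python
import cv2
import numpy as np

def match_by_sections(img1, img2, rows=2, cols=2, ratio=0.7):
    """Detect SIFT keypoints, bucket them into a rows x cols grid,
    and ratio-test match only descriptors from corresponding cells."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def cell(kp, shape):
        h, w = shape[:2]
        return (min(int(kp.pt[1] * rows / h), rows - 1),
                min(int(kp.pt[0] * cols / w), cols - 1))

    matches = []
    for r in range(rows):
        for c in range(cols):
            i1 = [i for i, k in enumerate(kp1) if cell(k, img1.shape) == (r, c)]
            i2 = [i for i, k in enumerate(kp2) if cell(k, img2.shape) == (r, c)]
            if len(i1) < 2 or len(i2) < 2:
                continue
            for pair in matcher.knnMatch(des1[i1], des2[i2], k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    m = pair[0]
                    matches.append((kp1[i1[m.queryIdx]].pt,
                                    kp2[i2[m.trainIdx]].pt))
    return matches
```

From the returned point pairs, the scale can then be estimated as the answer suggests, e.g. as the median ratio of pairwise distances between matched points in the two images.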


Comments

Thank you very much. But how can we section the keypoints? My data contains a lot of repetition, so I can't identify the correspondences exactly, which is why I asked this question. Even though there is this ambiguity, I think the scale factor will not change.

JimZhou001 (2019-06-04 20:39:16 -0600)
