
Comparing and matching images

asked 2013-03-16 10:46:38 -0600 by simba1382

I am looking to compare a new image to a database of images and then output the one with the highest "similarity". The images I want to compare are similar, but the problem is that they are not equal pixel by pixel. I have already tried the Bag of Words (BoW) model, as recommended; I tried various implementations without success, and the best correct rate I got was 30%, which is really low.

Let me show you what I am talking about: imgur gallery with 5 example images. I want to detect that the first four images are equal and that the fifth one is different. I wouldn't mind only detecting that the ones with the same angle orientation are equal, though (in my example, 2, 3 and 4).

So, that being said, are there any better methods than BoW for this? Or would BoW be enough if I implemented it in a different way?

Thanks in advance.


2 answers


answered 2013-03-17 14:21:00 -0600

Matching can be done in a lot of different ways; BoW is just one basic approach. You could do a histogram comparison, or go further and apply keypoint feature matching using SIFT, SURF, ORB, FREAK, BRIEF, ...

So try some things out, using these useful links.
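As an illustration (not part of the original answer), here is a minimal sketch of keypoint matching with ORB using a recent OpenCV Python API; the function name, feature count and distance threshold are assumptions you would tune on your own data:

```python
# Minimal ORB matching sketch (illustrative names and thresholds).
import cv2

def orb_similarity(path_a, path_b, max_distance=40):
    """Return the number of 'good' ORB matches between two images."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0

    # Brute-force matcher with Hamming distance (ORB descriptors are binary).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # Keep only reasonably close matches; tune the threshold on your data.
    good = [m for m in matches if m.distance < max_distance]
    return len(good)

# Usage idea: compare the query against every database image, keep the best score.
# best = max(database_paths, key=lambda p: orb_similarity("query.png", p))
```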


Comments


Thanks, I did already try some methods, to be honest. A histogram comparison is pointless since the ponies (as you can see in the examples) are grayscale (plus the filter, of course). I had never heard of ORB, FREAK and BRIEF, though, so I'm going to try them and see if I get some results.

Thanks.

simba1382 (2013-03-17 17:03:22 -0600)

A great advantage of feature-based matching is the scale and rotation invariance it can cope with. This is a lot harder when using the BoW approach.

StevenPuttemans (2013-03-18 08:04:50 -0600)

answered 2013-03-17 18:02:03 -0600 by Guanta

Some ideas for improving your BoW performance:

  1. Your BoW descriptors probably look too similar. Have you incorporated some kind of locality, e.g. by computing one BoW feature for each of the four quadrants of the image (see the first sketch after this list)? With the pure BoW approach you lose the locality of the features. More ideas on how to improve BoW can be found in these pretty good slides: http://people.rennes.inria.fr/Herve.Jegou/courses/2012_cpvr_tutorial/3-bagofwords.ppt.pdf

  2. Maybe your features are not discriminative enough. You could combine different features, like SIFT+LBP as in: http://icmll.buaa.edu.cn/members/jing.yu/YuanYuQinWan.pdf . My idea would be to also try plain HOG features, since they are directly related to the orientation you are interested in (see the second sketch after this list).

  3. Which classifier did you use? Try different ones (if you don't want to tune them in OpenCV, you could use WEKA or python-scipy just for the classification task)!
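Regarding point 1, here is a minimal sketch of adding locality to BoW, assuming you already have a visual vocabulary (k-means cluster centers) trained on SIFT descriptors from your training set; the 2x2 grid, the function name and the nearest-word assignment are illustrative choices, not something from the original answer:

```python
import cv2
import numpy as np

def quadrant_bow(image_path, vocabulary):
    """Concatenate one BoW histogram per image quadrant (2x2 grid).

    `vocabulary` is a (k, 128) float32 array of SIFT cluster centers,
    e.g. obtained earlier with cv2.kmeans on pooled training descriptors.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()  # needs an OpenCV build that ships SIFT
    h, w = img.shape
    k = vocabulary.shape[0]
    hists = []
    for y0, y1 in ((0, h // 2), (h // 2, h)):
        for x0, x1 in ((0, w // 2), (w // 2, w)):
            patch = np.ascontiguousarray(img[y0:y1, x0:x1])
            _, des = sift.detectAndCompute(patch, None)
            hist = np.zeros(k, dtype=np.float32)
            if des is not None:
                # Assign each descriptor to its nearest visual word.
                dists = np.linalg.norm(des[:, None, :] - vocabulary[None, :, :], axis=2)
                for word in dists.argmin(axis=1):
                    hist[word] += 1
                hist /= hist.sum() + 1e-6  # normalise per quadrant
            hists.append(hist)
    return np.concatenate(hists)  # 4*k-dimensional descriptor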
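And for the HOG idea from point 2, a small sketch that compares two images by the cosine similarity of their HOG vectors; the default 64x128 window and the similarity measure are my assumptions, not prescribed by the answer:

```python
import cv2
import numpy as np

# Default HOGDescriptor parameters expect a 64x128 detection window.
hog = cv2.HOGDescriptor()

def hog_vector(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 128))  # (width, height) matching the window
    return hog.compute(img).flatten()

def hog_similarity(path_a, path_b):
    a, b = hog_vector(path_a), hog_vector(path_b)
    # Cosine similarity: values near 1.0 mean very similar gradient statistics.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```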

Maybe you could also detect something else that discriminates your two classes, e.g. try to detect the eyebrows of the horses in the image and see if they are closed, or look for other similarities/dissimilarities in your images (which is hard to tell from these 5 images). You could also try to detect the shapes and do a kind of shape matching.
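For the shape-matching idea, a minimal sketch using contours and cv2.matchShapes with an OpenCV 4.x API; the Otsu binarisation and picking the largest contour are assumptions for illustration:

```python
import cv2

def shape_distance(path_a, path_b):
    """Lower values mean more similar outlines (Hu-moment based)."""
    def largest_contour(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)  # assumes at least one contour

    return cv2.matchShapes(largest_contour(path_a), largest_contour(path_b),
                           cv2.CONTOURS_MATCH_I1, 0.0)
```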

