
How to use bag of words example with BRIEF descriptors?

asked 2013-07-24 11:50:05 -0600 by PreetiJPillai, updated 2020-11-30 03:26:53 -0600

I am trying to run OpenCV's Bag of Words example. So far it works perfectly with SIFT and SURF descriptors (which are non-free). When I try other descriptors such as BRIEF and FREAK, the code breaks every time. On further debugging, I found that the kmeans clustering call inside the calculateImageDescriptors function doesn't seem to accept anything but SIFT or SURF descriptors.

Is there a way to work around this problem?


Comments

What errors do you get? AFAIK, SURF & SIFT descriptors are float Mats; BRIEF and FREAK are uchar.

berak ( 2013-07-24 13:22:41 -0600 )

I get this error..

OpenCV Error: Assertion failed (queryDescriptors.type() == trainDescCollection[0].type()) in knnMatchImpl, file /home/ppillai/programs/opencv-2.4.5/modules/features2d/src/matchers.cpp, line 351
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/ppillai/programs/opencv-2.4.5/modules/features2d/src/matchers.cpp:351: error: (-215) queryDescriptors.type() == trainDescCollection[0].type() in function knnMatchImpl

PreetiJPillai ( 2013-07-24 16:18:47 -0600 )

Try converting your descriptors from CV_8UC1 to CV_32FC1.

StevenPuttemans ( 2013-07-24 17:07:08 -0600 )

I tried the conversion yesterday, but I got a Segmentation fault.

this is the function that results in the segmentation fault:

bowExtractor->compute( colorImage, keypoints, imageDescriptors );

PreetiJPillai ( 2013-07-24 18:14:05 -0600 )

6 answers


answered 2014-07-16 10:50:51 -0600 by warith_h

There is a serious mistake here. A simple conversion from uchar to float is NOT the right solution. If you look at the details of k-means, it must minimize a sum of distances:

  • for a real-valued vector, the cluster center is the arithmetic mean

  • for a binary vector packed into uchars, the arithmetic mean of the bytes is NOT a valid center

Instead, for each bit position one must take a majority vote between the ones and the zeros to obtain a medoid.


Comments


Valid objection, IMHO. Could you shed more light on how to solve it (i.e. a practical way to cluster binary features)?

berak ( 2014-07-16 11:37:03 -0600 )

My intuition seems to be confirmed by this link: http://webdiis.unizar.es/~dorian/index.php?p=32. There is a nice implementation with bitset, and the mean operator is redefined correctly. One should do something similar in OpenCV.

warith_h ( 2014-07-17 12:15:24 -0600 )

Any advances on this?

Poyan ( 2015-03-05 06:22:02 -0600 )

answered 2013-07-24 18:39:59 -0600 by Mathieu Barnachon

Here is the relevant part of the code to compute BoW with ORB:

for( all your images )
{
    // Detect interesting points
    orb(img, Mat(), keypoints, descriptors);
    // Keep characteristics of images for further clustering.
    characteristics.push_back(descriptors);
}
// Create the BOW object with K classes
BOWKMeansTrainer bow(K);
for( all your descriptors )
{
    // Convert characteristics vector to float
    // This is required by the BOWTrainer class
    Mat descr;
    characteristics[k].convertTo(descr, CV_32F);
    // Add it to the BOW object
    bow.add(descr);
}
// Perform the clustering of stored vectors
Mat voc = bow.cluster();

You need to convert the features from CV_8U to CV_32F because the bag-of-words trainer expects float descriptors. This is not required for SIFT or SURF descriptors because they are already float, as @berak said.

If you have some issues with this code, please show us some relevant parts of yours.


Comments

Thanks for the helpful post, Mathieu.

However, there are issues if you are using BOWImgDescriptorExtractor, particularly when calling compute() on that class.

If you do as above to build the BoW clusters then create a BOWImgDescriptorExtractor using a BruteForce matcher and ORB extractor, there is an error.

  1. I read in a greyscale image
  2. I compute the keypoints using the DenseFeatureDetector
  3. I call compute() on the BOWImgDescriptorExtractor object which throws the following error:

"OpenCV Error: Assertion failed (queryDescriptors.type() == trainDescCollection[0].type()) in knnMatchImpl, file /Users/jamescharters/Downloads/opencv-2.4.6.1/modules/features2d/src/matchers.cpp, line 351"

Am I missing something easy here? Thanks.

jamescharters ( 2013-09-02 03:12:44 -0600 )

Code for above:

// How the objects are set up
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");
Ptr<FeatureDetector> detector = FeatureDetector::create("Dense");
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);
BOWImgDescriptorExtractor bowDE(extractor, matcher);

...

// Read in the image
Mat img = imread(entryPath.string(), 1);
cvtColor(img, img, CV_BGR2GRAY);

// Get the keypoints
vector<KeyPoint> keypoints;
detector->detect(img, keypoints);

// Try to get the descriptor
Mat bowDescriptor = Mat(0, 32, CV_32F);
bowDE.compute(img, keypoints, bowDescriptor); // errors here

jamescharters ( 2013-09-02 03:43:32 -0600 )

Please edit your question and add the code in it (formatted); otherwise it's hard to read...

Mathieu Barnachon ( 2013-09-02 16:35:29 -0600 )

answered 2013-10-07 10:32:13 -0600 by igsgk

Hi,

I had the same problem, and it works when I change the type of the dictionary from float to unsigned char:

// cluster the features (uchar descriptors must first be converted to float)
cv::Mat floatDictionary = bowTrainer.cluster(floatFeaturesUnclustered);
// convert the vocabulary back to unsigned char
cv::Mat uDictionary;
floatDictionary.convertTo(uDictionary, CV_8UC1);
bowDE.setVocabulary(uDictionary);
Mat bowDescriptor;
std::vector<std::vector<int> > pointIdxOfClusters;
bowDE.compute(img, keypoints, bowDescriptor, &pointIdxOfClusters);


Comments

Thanks. I was having great trouble because of this error, but with your solution it's working fine now.

harrypawar ( 2014-02-27 01:31:09 -0600 )

answered 2013-10-30 08:03:52 -0600 by startux, updated 2013-10-30 08:05:58 -0600

The vocabulary computed by the cluster() function of BOWKMeansTrainer is of type CV_32FC1, while the descriptors of the image to be matched against the vocabulary are of type CV_8UC1 when computed with non-SIFT and non-SURF descriptor extractors. One gets the error "OpenCV Error: Assertion failed (queryDescriptors.type() == trainDescCollection[0].type()) in knnMatchImpl, file /Users/jamescharters/Downloads/opencv-2.4.6.1/modules/features2d/src/matchers.cpp, line 351" when trying to match descriptors of different types.


Comments

My question is whether it is "OK" to make these type conversions just to make it work?

startux ( 2013-10-30 08:05:21 -0600 )

answered 2014-06-30 07:18:17 -0600 by thdrksdfthmn

To solve all the problems, just combine the answers of Mathieu Barnachon and igsgk:

// first convert uchar to float
cv::Mat descriptors;
extractor->compute(img, keypoints, descriptors);
cv::Mat descriptorsFloat;
descriptors.convertTo(descriptorsFloat, CV_32FC1);
bowTrainer.add(descriptorsFloat);

// ... and then set the vocabulary as uchar after clustering
cv::Mat dictionary = bowTrainer.cluster();
cv::Mat uDictionary;
dictionary.convertTo(uDictionary, CV_8UC1);
bowDE.setVocabulary(uDictionary);

This solved it in my case.


answered 2013-09-02 18:38:18 -0600 by jamescharters, updated 2013-09-02 18:39:55 -0600

Just reposting from the comments above with nice code formatting.

In short, there appears to be an issue when using the BOWImgDescriptorExtractor with the ORB BoW approach above. This might trip up other people, and seems closely related to the question.

Here is some sample code:

int dictionarySize = 1024;
TermCriteria tc(CV_TERMCRIT_ITER, 10, 0.00001);
int retries = 1;
int flags = KMEANS_PP_CENTERS;

// How the objects are set up
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");
Ptr<FeatureDetector> detector = FeatureDetector::create("Dense");
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);
BOWImgDescriptorExtractor bowDE(extractor, matcher);

// ...
// ... code here is equivalent to answer above for computing BoW but has been excluded for brevity ... 
// ...

// Read in the image that we want to match against the descriptor
Mat img = imread(entryPath.string(), 1);
cvtColor(img, img, CV_BGR2GRAY);

// Get the keypoints
vector<KeyPoint> keypoints;
detector->detect(img, keypoints);

// Try to get the descriptor
Mat bowDescriptor = Mat(0, 32, CV_32F);
bowDE.compute(img, keypoints, bowDescriptor); // <-- errors here

And the error is:

"OpenCV Error: Assertion failed (queryDescriptors.type() == trainDescCollection[0].type()) in knnMatchImpl, file /Users/jamescharters/Downloads/opencv-2.4.6.1/modules/features2d/src/matchers.cpp, line 351"

I am doing this in C++ using OpenCV 2.4.6.1 on Mac OS 10.8.4.


Comments

Same problem on Ubuntu OpenCV 2.4.8

thdrksdfthmn ( 2014-06-30 06:34:25 -0600 )


Stats

Asked: 2013-07-24 11:50:05 -0600

Seen: 17,229 times

Last updated: Jul 16 '14