OpenCV, vector<DMatch> matches, x,y coordinates
Dear all,
I am using SIFT to recognize an image by comparing it to webcam video.
// Grab the current webcam frame and convert it to grayscale.
colorImg.setFromPixels(vidGrabber.getPixels(), 320, 240);
grayImage = colorImg;
// Take the abs value of the difference between background and incoming and then threshold:
//grayDiff.absDiff(grayBg, grayImage);
//grayDiff.threshold(threshold);
// Find contours which are between the size of 20 pixels and 1/3 the w*h pixels.
// Also, find holes is set to true so we will get interior contours as well.
//contourFinder.findContours(grayDiff, 20, (320*240)/3, 10, true); // find holes
}
queryImg = cv::imread("..\\Images\\1.bmp", CV_LOAD_IMAGE_GRAYSCALE);
//trainImg = cv::imread("..\\Images\\2.bmp", CV_LOAD_IMAGE_GRAYSCALE);
// Use the live grayscale webcam frame as the train image
// (getCvImage() returns an IplImage*, which converts to cv::Mat in OpenCV 2.x).
trainImg = grayImage.getCvImage();
if(queryImg.empty() || trainImg.empty())
{
    printf("Can't read one of the images\n");
    return; // bail out rather than matching against empty images
}
// Detect keypoints in both images.
SurfFeatureDetector detector(800);
detector.detect(queryImg, queryKeypoints);
detector.detect(trainImg, trainKeypoints);
// Print how many keypoints were found in each image.
printf("Found %d and %d keypoints.\n", queryKeypoints.size(), trainKeypoints.size());
// Compute the SIFT feature descriptors for the keypoints.
// The result is a matrix where row 'i' is the descriptor for keypoint 'i'
// (one 128-dimensional SIFT descriptor per keypoint).
SiftDescriptorExtractor extractor;
Mat queryDescriptors, trainDescriptors;
extractor.compute(queryImg, queryKeypoints, queryDescriptors);
extractor.compute(trainImg, trainKeypoints, trainDescriptors);
// Print some statistics on the matrices returned.
cv::Size size = queryDescriptors.size();
printf("Query descriptors height: %d, width: %d, area: %d, non-zero: %d\n",
size.height, size.width, size.area(), countNonZero(queryDescriptors));
size = trainDescriptors.size();
printf("Train descriptors height: %d, width: %d, area: %d, non-zero: %d\n",
size.height, size.width, size.area(), countNonZero(trainDescriptors));
// For each of the descriptors in 'queryDescriptors', find the closest
// matching descriptor in 'trainDescriptors' (an exhaustive search).
// This returns one match per query keypoint: for each keypoint in
// 'query', the descriptor in 'train' that most closely matches it.
BruteForceMatcher< L2<float> > matcher; // note the space in '> >' for pre-C++11 compilers
// In OpenCV 2.4+ the equivalent is: cv::BFMatcher matcher(cv::NORM_L2);
matcher.match(queryDescriptors, trainDescriptors, matches);
printf("Found %d matches.\n", matches.size());
=====================================================================
How can I get the x,y coordinates of the matched points? Thanks!
Where is "matches" defined?
Here is the .h:

class testApp : public ofBaseApp{

    #ifdef _USE_LIVE_VIDEO
        ofVideoGrabber vidGrabber;
    #else
        ofVideoPlayer vidPlayer;
    #endif

};
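(The header above doesn't show it, but for the matching code to compile it would also need member declarations roughly like these; the names are taken from the snippets earlier in the thread, so treat this as a sketch:)

cv::Mat queryImg, trainImg;
std::vector<cv::KeyPoint> queryKeypoints, trainKeypoints;
std::vector<cv::DMatch> matches;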
Are you referring to this DMatch: http://docs.opencv.org/java/org/opencv/features2d/DMatch.html
I can't find the coordinates of the matched points.
Yes, it refers to that DMatch.
I think the x and y coordinates of the matches are in the keypoints, indexed through each DMatch:
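A minimal sketch, reusing the variable names from the code above (queryKeypoints, trainKeypoints, matches); a DMatch only stores indices (queryIdx, trainIdx) and a distance, so the pixel coordinates come from the keypoint vectors:

for (size_t i = 0; i < matches.size(); i++)
{
    const cv::DMatch& m = matches[i];
    // Pixel location of the match in the query image:
    float qx = queryKeypoints[m.queryIdx].pt.x;
    float qy = queryKeypoints[m.queryIdx].pt.y;
    // Pixel location of the corresponding point in the train (webcam) image:
    float tx = trainKeypoints[m.trainIdx].pt.x;
    float ty = trainKeypoints[m.trainIdx].pt.y;
    printf("match %d: query (%.1f, %.1f) -> train (%.1f, %.1f)\n",
           (int)i, qx, qy, tx, ty);
}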
Check it to make sure.