
Chamfer Matching

asked 2015-01-06 14:56:19 -0600

fortes_23

updated 2018-10-04 23:42:27 -0600

I'm new to OpenCV. I have started a project for recognizing animals, and to get familiar with the library I'm first trying to recognize shapes. For this, I found the following description of Chamfer Matching:

" 1. You do an edge detection (for example, cvCanny in opencv) 2. You create a distance image, where the value of each pixel means the distance frome the nearest edge. 3. You take the shapes you would like to detect, define sample points along the edges of the shape, and try to match these points on the distance image. Basically you just add the values on the distance image which are "under" the coordinates of your objects. 4. Find a good minimization algorithm, the effectiveness of this depends on your application."

Right now, I have this program:

int main() {

    // Open the scene image and the template shape I want to detect
    // ("root" stands in for the real file paths)
    Mat image = imread("root");
    Mat templ = imread("root");

    // There could be a test here to check that the images have loaded

    // Convert both to grayscale: Canny expects a single-channel 8-bit image.
    // imread loads BGR, so use CV_BGR2GRAY rather than CV_RGB2GRAY.
    Mat gray_image, gray_templ;
    cvtColor(image, gray_image, CV_BGR2GRAY);
    cvtColor(templ, gray_templ, CV_BGR2GRAY);

    // Reduce the noise with a 3x3 kernel before the Canny step
    blur(gray_image, gray_image, cv::Size(3, 3));
    blur(gray_templ, gray_templ, cv::Size(3, 3));

    // Canny edge detection
    Mat contours;
    Canny(gray_image, contours, 10, 350, 3);
    Canny(gray_templ, gray_templ, 5, 350, 3);

    // Invert the edge map: distanceTransform measures the distance to the
    // nearest ZERO pixel, so the edges must become black on white.
    // The Canny output is single-channel CV_8UC1, so access it as uchar,
    // not Vec3b (equivalently: contours = 255 - contours;)
    for (int i = 0; i < contours.rows; i++) {
        for (int j = 0; j < contours.cols; j++) {
            contours.at<uchar>(i, j) = 255 - contours.at<uchar>(i, j);
        }
    }

    // Calculates the distance to the closest zero pixel for each pixel
    // of the source image; distanceTransform writes a 32-bit float image,
    // so let it allocate the output instead of preallocating CV_8UC3
    Mat img;
    distanceTransform(contours, img, CV_DIST_L2, 3);

    // Normalizes the value range of the array to [0, 1] for display
    normalize(img, img, 0.0, 1.0, CV_MINMAX);
    /*
     * If we want to see the pixels' values (note they are float now):
        for (int i = 0; i < img.rows; i++) {
            for (int j = 0; j < img.cols; j++) {
                cout << img.at<float>(i, j) << endl;
            }
        }
     */

    // Show the images
    namedWindow("Image");
    imshow("Image", image);
    namedWindow("Gray");
    imshow("Gray", gray_image);
    namedWindow("canny2");
    imshow("canny2", contours);
    namedWindow("pixel");
    imshow("pixel", img);
    namedWindow("templ2");
    imshow("templ2", gray_templ);

    while (waitKey(33) != 27) {
    }

    return 0;
}
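As a side note on step 2: what cv::distanceTransform computes can be mimicked, for intuition only, with the classic two-pass chamfer algorithm. The sketch below is my own illustration using the L1 (city-block) metric rather than the L2 metric used in the code above:

```cpp
#include <algorithm>
#include <vector>

// Two-pass L1 (city-block) distance transform: for each pixel, the
// distance to the nearest edge pixel. edges[y*W+x] != 0 marks an edge.
// A self-contained stand-in for cv::distanceTransform, not a replacement.
std::vector<int> l1DistanceTransform(const std::vector<int>& edges, int W, int H) {
    const int INF = W + H; // larger than any possible L1 distance here
    std::vector<int> d(W * H, INF);
    // Forward pass: propagate distances from the top-left
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int i = y * W + x;
            if (edges[i]) { d[i] = 0; continue; }
            if (x > 0) d[i] = std::min(d[i], d[i - 1] + 1);
            if (y > 0) d[i] = std::min(d[i], d[i - W] + 1);
        }
    // Backward pass: propagate distances from the bottom-right
    for (int y = H - 1; y >= 0; --y)
        for (int x = W - 1; x >= 0; --x) {
            int i = y * W + x;
            if (x + 1 < W) d[i] = std::min(d[i], d[i + 1] + 1);
            if (y + 1 < H) d[i] = std::min(d[i], d[i + W] + 1);
        }
    return d;
}
```

Note that this takes edges as nonzero pixels directly, whereas cv::distanceTransform measures the distance to the nearest zero pixel, which is why the program above has to invert the Canny output first.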

I have done the first two steps, but I don't understand the third one well. What should I do next? I have tried the chamerMatching() function, but I always get the error "double free or corruption (!prev)". Could anyone help me?

Thanks


1 answer


answered 2015-04-21 01:30:32 -0600

sam123

To solve the problem with the chamerMatching() function, you can remove the following line in chamfermatching.h:

delete templates[i];

However, I'm not sure whether this causes a memory leak; in any case, it works.

