
Object Detection using Surf

asked 2013-07-11 05:17:50 -0600


updated 2013-07-12 05:40:39 -0600

I am trying to detect vehicles in a video. Eventually I'll do it in a real-time application, but for the time being, and for better understanding, I am doing it on a recorded video. The code is below:

Mat img_template = imread("images.jpg"); // read template image

void surf_detection(Mat img_1, Mat img_2)
{

if( !img_1.data || !img_2.data )
{
    std::cout << " --(!) Error reading images " << std::endl;
    return;
}
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
std::vector< DMatch > good_matches;

do{ 

detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );

//-- Draw keypoints

Mat img_keypoints_1; Mat img_keypoints_2;
drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );


//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; 
double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ 
    double dist = matches[i].distance;
if( dist < min_dist )
    min_dist = dist;
if( dist > max_dist ) 
    max_dist = dist;
}

std::cout << "-- Max dist : " << max_dist << std::endl;
std::cout << "-- Min dist : " << min_dist << std::endl;

//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.



for( int i = 0; i < descriptors_1.rows; i++ )
{ 
    if( matches[i].distance < 2*min_dist )
        { 
                good_matches.push_back( matches[i]);
        }
}

}while(good_matches.size()<100);

//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
}

// if H is not found... we should reuse the old H and draw it in a different color
Mat H = findHomography( obj, scene, CV_RANSAC );


//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); 
obj_corners[1] = cvPoint( img_1.cols, 0 );
obj_corners[2] = cvPoint( img_1.cols, img_1.rows ); 
obj_corners[3] = cvPoint( 0, img_1.rows );
std::vector<Point2f> scene_corners(4);


perspectiveTransform( obj_corners, scene_corners, H);


//-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_1.cols, 0), scene_corners[1] + Point2f( img_1.cols, 0), Scalar(0, 255, 0), 4 );
  line( img_matches, scene_corners[1] + Point2f( img_1.cols, 0), scene_corners[2] + Point2f( img_1.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[2] + Point2f( img_1.cols, 0), scene_corners[3] + Point2f( img_1.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[3] + Point2f( img_1.cols, 0), scene_corners[0] + Point2f( img_1.cols, 0), Scalar( 0, 255, 0), 4 );

I am getting the following output

[image: two pictures of cars with matched keypoints]

and

[image: second output image]

But my question is: why is it not drawing a rectangle around the detected object, like this:

[image: rectangle drawn around the detected object]



Comments


You are mixing the old and the new OpenCV interfaces; you should rewrite it from scratch using only the new C++ interface.

yes123 ( 2013-07-11 05:52:33 -0600 )

Does that affect the output?

FLY ( 2013-07-11 07:03:16 -0600 )

2 answers


answered 2013-07-11 19:17:31 -0600

I think it doesn't draw the rectangle because the perspective transform isn't found. Check the transformed points as well as the matrix H. By the way, you should probably try to display a smaller bounding box instead of the full template image, which is almost the same size as the destination image.

Some comments on the algorithmic part:

1. Pass by reference as much as you can, and protect the parameters with const where appropriate.
2. Compute the keypoints and descriptors of your reference image only once, and pass the results to the matching function; since the reference image never changes, there is no need to recompute its keypoints and descriptors every frame. This will speed things up.
3. Are you sure this approach is appropriate? An object-detection approach with a machine-learning stage will probably give better results on a live stream; train it with multiple views of the car (or cars, if you want to detect different cars). See the SVM tutorial for an example.
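For point 2, one possible shape (a sketch only, assuming OpenCV 2.4.x with the nonfree SURF module; ReferenceModel, buildReference and matchFrame are made-up names for illustration):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <vector>

// Cached keypoints/descriptors of the template image, computed once.
struct ReferenceModel
{
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
};

// Run detection and extraction a single time, before the video loop.
ReferenceModel buildReference(const cv::Mat& img, int minHessian = 400)
{
    ReferenceModel ref;
    cv::SurfFeatureDetector detector(minHessian);
    detector.detect(img, ref.keypoints);
    cv::SurfDescriptorExtractor extractor;
    extractor.compute(img, ref.keypoints, ref.descriptors);
    return ref;
}

// Per frame: only the scene image is processed; the reference is reused.
void matchFrame(const ReferenceModel& ref, const cv::Mat& frame,
                std::vector<cv::DMatch>& matches, int minHessian = 400)
{
    cv::SurfFeatureDetector detector(minHessian);
    cv::SurfDescriptorExtractor extractor;
    std::vector<cv::KeyPoint> kp;
    cv::Mat desc;
    detector.detect(frame, kp);
    extractor.compute(frame, kp, desc);
    cv::FlannBasedMatcher matcher;
    matcher.match(ref.descriptors, desc, matches);
}
```

The key point is simply that buildReference() runs once and matchFrame() runs per frame, instead of detecting and extracting on both images every iteration as the question's code does.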

And last but not least, don't post all your code, only the interesting parts. If some information is missing, people will ask for it.


answered 2013-07-11 17:25:22 -0600

bluekid

updated 2013-07-11 17:25:55 -0600

void surf_detection(Mat img_1,Mat img_2);

You can't get results out through img_2; you are only passing a copy of img_2. So change the function to:

void surf_detection(Mat img_1,Mat& img_2)

