
How to extract angle, scale, translation, and shear for a rotated and scaled object

asked 2016-11-28 11:47:11 -0600

essamzaky

updated 2016-12-01 06:59:02 -0600

Problem description

I have a rotated and scaled scene and need to correct the scale and rotation, then find the rectangle of a known object in the corrected image.

Input

- Image of the scene from a camera or scanner
- Normalized (normal scale, 0-degree rotation) template image of the known object

Required output

1- Correct the scale and the rotation of the input scene
2- Find the rectangle of the object in the corrected scene

The following figure explains the input and the steps to find the output.

[figure: the input scene and the steps to find the output]

I'm using the following sample, [Features2D + Homography to find a known object](http://docs.opencv.org/2.4/doc/tutori...), to find the rotated and scaled object.

I used the following code to do the processing:

//read the input image      
Mat img_object = imread( strObjectFile, CV_LOAD_IMAGE_GRAYSCALE );
Mat img_scene = imread( strSceneFile, CV_LOAD_IMAGE_GRAYSCALE );
if( img_scene.empty() || img_object.empty())
{
    return ERROR_READ_FILE;     
}   
//Step 1 Find the object in the scene and find H matrix
//-- 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_object, keypoints_object );
detector.detect( img_scene, keypoints_scene );

//-- 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_object, keypoints_object, descriptors_object );
extractor.compute( img_scene, keypoints_scene, descriptors_scene );

//-- 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{ 
    double dist = matches[i].distance;
    if( dist < min_dist ) 
        min_dist = dist;
    if( dist > max_dist )
        max_dist = dist;
}   

//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ 
    if( matches[i].distance < 3*min_dist )
    { 
        good_matches.push_back( matches[i]); 
    }
}
Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//Draw matched points
imwrite("c:\\temp\\Matched_Pints.png",img_matches);

//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
//-- Show detected matches
//imshow( "Good Matches & Object detection", img_matches ); 
imwrite("c:\\temp\\Object_detection_result.png",img_matches);

Comments

Perhaps I'm missing something, but why not just do warpPerspective with the WARP_INVERSE_MAP flag? Why do you need to decompose the homography matrix?

Tetragramm ( 2016-11-28 12:05:34 -0600 )
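For reference, a minimal sketch of what this suggestion looks like, reusing H, img_scene, and img_object from the question's code; the output size and file name here are assumptions, and sizing the output to the template crops away the rest of the scene:

// Hedged sketch: treat H (object -> scene) as the dst -> src map.
// With WARP_INVERSE_MAP, each output pixel (x, y) samples the scene at H(x, y),
// undoing rotation, scale, shear, and translation in one call.
Mat corrected;
warpPerspective(img_scene, corrected, H,
    Size(img_object.cols, img_object.rows),   // assumed output size
    INTER_LINEAR | WARP_INVERSE_MAP);
imwrite("c:\\temp\\Corrected.png", corrected); // hypothetical output path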

Thanks @Tetragramm, I didn't know that WARP_INVERSE_MAP would recover my original image; I will try it. But how do I get the final object rectangle in the recovered image? Which function will calculate the rectangle?

I was trying to do the mathematics by myself; you saved me time.

essamzaky ( 2016-11-28 12:32:42 -0600 )

Hi @Tetragramm, I have used warpPerspective with WARP_INVERSE_MAP to get the recovered image after removing the rotation, scale, shear, and translation. The rotation is corrected, but the image is shifted. Also, how can I calculate the recovered image size? I will add images to the question to show what I mean.

essamzaky ( 2016-11-29 10:09:54 -0600 )

So, I'm not quite sure what you're trying to do, but I'm sure we can do it all without decomposing the homography.

The output image there looks like you successfully transformed from the scene to the model. If you made the output the same size as the model image, you would just have the scene box matching the model box, ready for denoising or whatever you're doing.

Then to put it back into your image, you just use warpPerspective again, but this time without the INVERSE_MAP flag, and with the BORDER_TRANSPARENT type to place it back where it originally came from.

Am I correct that that is what you wanted to do, just that small box? If you're trying to do something with the whole permit, we'll need to change it up a little.

Tetragramm ( 2016-11-29 11:02:38 -0600 )

What I'm doing is as follows:

Input: a scaled and rotated scene image.
Required output: correct the scale and rotation and put the result in the recovered image; I need the coordinates of the object image inside the recovered image.
By knowing the object coordinates I can get all the text coordinates using the relative relation between them.
essamzaky ( 2016-11-29 11:26:50 -0600 )

I tried warpPerspective + BORDER_TRANSPARENT without INVERSE_MAP, but the result is flipped on the y axis. Using warpPerspective with WARP_INVERSE_MAP gives a nearly correct result, but the starting point of the recovered image (top-left corner) is the object's top-left corner, not the scene's top-left corner. Maybe we need to recalculate the homography matrix to change the top-left corner?

essamzaky ( 2016-11-29 12:12:46 -0600 )

2 answers


answered 2016-11-29 12:19:26 -0600

Tetragramm

updated 2016-11-30 12:47:49 -0600

Ah, ok. Create a vector of Point2f, one located at each corner of your model, or each relevant location in your model. Then use the transform function to map those onto your document. The output should be a vector of Point2f that are those points on the document.

So if your input is the corners of the model, then your output is the corners of the box in the image.

If you need to do the inverse, it's just H.inv(), I think.

EDIT: I'm really not sure what you're doing wrong, because I don't have enough of your code, but this code does what I think you want done.

Mat warped;
// Warp the scene into the model frame. With WARP_INVERSE_MAP, h is used as the
// dst -> src map: each output pixel (x, y) samples the scene at h(x, y).
warpPerspective(scene, warped, h, Size(model.cols, model.rows), WARP_INVERSE_MAP);
imshow("Warped", warped);

Mat modifiedScene = scene.clone();
// Warp the model forward into the scene; BORDER_TRANSPARENT leaves the
// surrounding scene pixels untouched.
warpPerspective(model, modifiedScene, h, Size(scene.cols, scene.rows), 1, BORDER_TRANSPARENT);
imshow("Modified Scene", modifiedScene);

// Map the model corners into the scene.
vector<Point2f> ptsModel, ptsScene;
vector<Point3f> tempPts;
ptsModel.push_back(Point2f(0, 0));
ptsModel.push_back(Point2f(model.cols, 0));
ptsModel.push_back(Point2f(0, model.rows));
ptsModel.push_back(Point2f(model.cols, model.rows));
transform(ptsModel, tempPts, h);               // full 3x3 multiply, no perspective divide
convertPointsHomogeneous(tempPts, ptsScene);   // divide back down to 2D points

for (int i = 0; i < ptsScene.size(); ++i)
    std::cout << ptsScene[i] << "\n";
std::cout << "\n";

// Draw the mapped box on the scene (corner order: TL, TR, BL, BR).
line(scene, ptsScene[0], ptsScene[1], Scalar(255, 0, 0));
line(scene, ptsScene[1], ptsScene[3], Scalar(255, 0, 0));
line(scene, ptsScene[2], ptsScene[3], Scalar(255, 0, 0));
line(scene, ptsScene[2], ptsScene[0], Scalar(255, 0, 0));
imshow("Scene", scene);

waitKey();
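A note on the transform step above: cv::transform applies the full 3x3 matrix but does not perform the perspective divide, which is why tempPts is a vector of Point3f that then goes through convertPointsHomogeneous. perspectiveTransform(ptsModel, ptsScene, h) would do both steps in one call, as the question's code does for the object corners.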

Comments

I still cannot use warpPerspective with WARP_INVERSE_MAP to correct the scale and rotation for the whole scene image; only part of the scene gets corrected. I also used:

std::vector<Point2f> Recovered_corners(4);
perspectiveTransform( scene_corners, Recovered_corners, H.inv());

Recovered_corners gives the standalone object coordinates, not the object coordinates in the recovered scene. It seems we need to decompose and adapt the H matrix to the new recovered scene. I found the following paper, which describes how to decompose the homography matrix H: How to decompose Homography. I think it's complicated and will need some time to understand and implement. For now I will use another method to detect and fix the scale and rotation.

essamzaky ( 2016-11-30 05:49:20 -0600 )

Hi @Tetragramm, it seems there is a little misunderstanding between us. I need to correct the scale and rotation for the whole scene, and I also need to find the rectangle of an object in the final fixed scene. I have updated the question, added an image describing the steps and the expected output, and added the code.

In your code I'm interested in the warped image:

warpPerspective(scene, warped, h, Size(model.cols, model.rows), WARP_INVERSE_MAP);

but I need the whole scene to be warped;

then in the final warped image I need to find the rectangle of the object. See the updated question.

essamzaky ( 2016-12-01 06:46:08 -0600 )

If you cannot see the text in the image describing the problem and the expected solution, use "Ctrl +" on your keyboard to zoom in on the image.

essamzaky ( 2016-12-01 06:50:36 -0600 )

To get your desired final output, you were almost there. You just need to add an offset to your model. You can add it to the third column, first (x) and second (y) rows of the homography. The homography is the warp to the model, but your model is not actually where you want it, so you need to add the offset between where you want it and where it is.

Tetragramm ( 2016-12-01 12:21:43 -0600 )
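A minimal sketch of this offset idea in matrix form, assuming H maps model coordinates into the scene as in the code above; offset_x, offset_y (where the model's top-left corner should land in the recovered image) and the output size are placeholder assumptions:

// Compose H with a translation so the recovered image keeps the whole scene
// instead of cropping to the model at the origin. With WARP_INVERSE_MAP,
// output pixel (x, y) then samples the scene at H(x - offset_x, y - offset_y).
double offset_x = 100, offset_y = 100;   // placeholder: desired model position
Mat T = (Mat_<double>(3, 3) << 1, 0, -offset_x,
                               0, 1, -offset_y,
                               0, 0, 1);
Mat H_shifted = H * T;
Mat recovered;
warpPerspective(img_scene, recovered, H_shifted,
    Size(2000, 2000),                    // placeholder: large enough for the scene
    INTER_LINEAR | WARP_INVERSE_MAP);

The object's rectangle in the recovered image is then the model rectangle shifted by (offset_x, offset_y).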

answered 2016-12-01 12:07:18 -0600

essamzaky

updated 2016-12-01 12:11:37 -0600

Here I will explain a method to decompose the transformation matrix H, as described in the following two articles:
Math, code

Here is my trial code:

//read the input image      
Mat img_object = imread( strObjectFile, CV_LOAD_IMAGE_GRAYSCALE );
Mat img_scene = imread( strSceneFile, CV_LOAD_IMAGE_GRAYSCALE );
Mat img_scene_color = imread( strSceneFile, CV_LOAD_IMAGE_COLOR );
if( img_scene.empty() || img_object.empty())
{
    return ERROR_READ_FILE;     
}   
//Step 1 Find the object in the scene and find H matrix
//-- 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_object, keypoints_object );
detector.detect( img_scene, keypoints_scene );

//-- 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_object, keypoints_object, descriptors_object );
extractor.compute( img_scene, keypoints_scene, descriptors_scene );

//-- 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{ 
    double dist = matches[i].distance;
    if( dist < min_dist ) 
        min_dist = dist;
    if( dist > max_dist )
        max_dist = dist;
}   

//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ 
    if( matches[i].distance < 3*min_dist )
    { 
        good_matches.push_back( matches[i]); 
    }
}
Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//Draw matched points
imwrite("c:\\temp\\Matched_Pints.png",img_matches);

//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
//-- Show detected matches
//imshow( "Good Matches & Object detection", img_matches ); 
imwrite("c:\\temp\\Object_detection_result.png",img_matches);

//Step 2 correct the scene scale and rotation and locate object in the recovered scene
Mat img_Recovered;
//1- decompose the H matrix into translation, scale, shear, and rotation
float a = H.at<double>(0,0);
float b = H.at<double>(0,1);
float c = H.at<double>(0,2);
float d = H.at<double>(1,0);
float e = H.at<double>(1,1);
float f = H.at<double>(1,2);

float p = sqrt(a*a + b*b);
float r = (a*e - b*d)/p;   // continuation reconstructed from the linked decomposition articles
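The rest of the answer is cut off by the site; a sketch of how the decomposition plausibly concludes, following the two linked articles (the extra variable names and the angle's sign convention are assumptions, not the author's exact code):

float q = (a*d + b*e)/(a*e - b*d);  // shear factor
float tx = c, ty = f;               // translation (hypothetical names)
float scale_x = p, scale_y = r;     // scale along x and y
float theta = atan2(-b, a);         // rotation angle; sign convention assumed

With these values the scene can be corrected by undoing each component in turn, which is what Step 2 above sets out to do.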
