Compute fundamental matrix from camera calibration

asked 2016-12-13 11:34:36 -0600 by Ice_T02, updated 2016-12-15 08:01:12 -0600

Hello,

I am trying to compute the fundamental matrix from the following camera calibration parameters:

  • camera matrices 1/2 (camMat1, camMat2)
  • rotation vectors 1/2 (rotVec1, rotVec2)
  • translation vectors 1/2 (transVec1, transVec2)

According to the following formula, the fundamental matrix F is computed as:

F = inverse(transpose(camMat1)) * R * S * inverse(camMat2)

Anyway, I am quite lost on how to compute R and S. I know that R is the rotation matrix which brings image 1 into image 2, and S is the skew-symmetric matrix built from the translation vector that transforms image 1 into image 2 (see step 3 below). My plan would be:

1) Apply Rodrigues to both rotation vectors and subtract rotation matrix 1 from rotation matrix 2

cv::Mat rotMat1, rotMat2;
cv::Rodrigues(rotVec1[0], rotMat1);
cv::Rodrigues(rotVec2[0], rotMat2);
cv::Mat R = rotMat2 - rotMat1;

2) Subtract translation vector 1 from translation vector 2

T = transVec2 - transVec1

3) Compose the skew-symmetric matrix S

S = [0,-T[3], T[2]; T[3], 0, -T[1]; -T[2], T[1], 0];

Would this be correct? Any help on this topic would be appreciated. I hope this is not off-topic, since it is not directly related to OpenCV.

Edit: I worked in the solution you provided and cross-checked the result with cv::stereoCalibrate(). Unfortunately, the matrices do not match. Any suggestions on what I did wrong?

cv::Mat computeFundMat(cv::Mat camMat1, cv::Mat camMat2, std::vector<cv::Mat> rotVec1,
    std::vector<cv::Mat> rotVec2, std::vector<cv::Mat> transVec1, std::vector<cv::Mat> transVec2)
{
    cv::Mat rotMat1(3, 3, CV_64F), rotMat2(3, 3, CV_64F);
    cv::Mat transVec2toWorld(3, 1, CV_64F);
    cv::Mat R(3, 3, CV_64F), T(3, 1, CV_64F), S(cv::Mat::zeros(3, 3, CV_64F));
    //Convert rotation vectors into rotation matrices
    cv::Rodrigues(rotVec1.at(0), rotMat1);
    cv::Rodrigues(rotVec2.at(0), rotMat2);
    //Transform parameters of camera 2 into world coordinates
    rotMat2 = rotMat2.t();
    transVec2toWorld = (-rotMat2 * transVec2.at(0));
    //Compute R, T to rotate, translate image b into image a
    R = rotMat2 * rotMat1; //or rotMat1 * rotMat2 ???
    T = transVec1.at(0) + transVec2toWorld;
    //Compose skew matrix (Format: S =
    // [0,-T[3], T[2];   -> 1
    S.at<double>(0, 1) = -T.at<double>(2); S.at<double>(0, 2) = T.at<double>(1); //-> 1
    //  T[3], 0, -T[1];  -> 2
    S.at<double>(1, 0) = T.at<double>(2); S.at<double>(1, 2) = -T.at<double>(0); //-> 2
    // -T[2], T[1], 0];) -> 3
    S.at<double>(2, 0) = -T.at<double>(1); S.at<double>(2, 1) = T.at<double>(0); //-> 3
    //Compute fundamental matrix: F = inverse(transpose(camMat2)) * R * S * inverse(camMat1)
    return cv::Mat(camMat2.t().inv() * R * S * camMat1.inv());
}
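
For context, here is a hedged sketch of how this function might be fed by two separate single-camera calibrations against the same, unmoved target. All variable names below (objectPoints, imagePoints1/2, imageSize, distCoeffs1/2) are placeholders and not part of the original post:

//Hedged usage sketch: the calibration target must stay in the same physical place
//for both runs, otherwise the two calibrations use different world frames.
std::vector<std::vector<cv::Point3f>> objectPoints;               //target points, one entry per view
std::vector<std::vector<cv::Point2f>> imagePoints1, imagePoints2; //detected corners per view and camera
cv::Size imageSize(640, 480);                                     //placeholder image size
cv::Mat camMat1, distCoeffs1, camMat2, distCoeffs2;
std::vector<cv::Mat> rotVec1, transVec1, rotVec2, transVec2;
cv::calibrateCamera(objectPoints, imagePoints1, imageSize, camMat1, distCoeffs1, rotVec1, transVec1);
cv::calibrateCamera(objectPoints, imagePoints2, imageSize, camMat2, distCoeffs2, rotVec2, transVec2);
cv::Mat F = computeFundMat(camMat1, camMat2, rotVec1, rotVec2, transVec1, transVec2);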

2 answers


answered 2017-06-14 03:51:42 -0600 by Ice_T02, updated 2017-06-14 03:52:49 -0600

I had to suspend this part for a while, but I came up with the following solution:

1) Compute the projection matrix for each camera

cv::Mat rotMat(3, 3, CV_64F), rotTransMat(3, 4, CV_64F);
//Convert rotation vector into rotation matrix 
cv::Rodrigues(rotVec.at(0), rotMat);
//Append translation vector to rotation matrix
cv::hconcat(rotMat, transVec.at(0), rotTransMat);
//Compute projection matrix by multiplying intrinsic parameter 
//matrix (A) with 3 x 4 rotation and translation pose matrix (RT).
//Formula: Projection Matrix = A * RT;
return (camMat * rotTransMat);

2) From the two projection matrices we can compute the fundamental matrix (P1/P2 are the projection matrices calculated in step 1)

cv::Mat_<double> X[3];
cv::vconcat(P1.row(1), P1.row(2), X[0]);
cv::vconcat(P1.row(2), P1.row(0), X[1]);
cv::vconcat(P1.row(0), P1.row(1), X[2]);

cv::Mat_<double> Y[3];
cv::vconcat(P2.row(1), P2.row(2), Y[0]);
cv::vconcat(P2.row(2), P2.row(0), Y[1]);
cv::vconcat(P2.row(0), P2.row(1), Y[2]);

cv::Mat F(3, 3, CV_64F);
cv::Mat_<double> XY;
for (unsigned int i = 0; i < 3; ++i)
{
    for (unsigned int j = 0; j < 3; ++j)
    {
        //Each entry of F is the determinant of the 4x4 matrix formed by
        //stacking two rows of P1 on two rows of P2
        cv::vconcat(X[j], Y[i], XY);
        F.at<double>(i, j) = cv::determinant(XY);
    }
}

The solution is taken from here. Maybe this is helpful for people coming from Google.
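
To tie the two steps together, here is a hedged usage sketch. computeProjMat is a hypothetical helper name wrapping the step-1 snippet, and the point coordinates in the sanity check are placeholders:

//Hypothetical helper wrapping step 1 (name invented for this sketch)
cv::Mat computeProjMat(cv::Mat camMat, std::vector<cv::Mat> rotVec, std::vector<cv::Mat> transVec)
{
    cv::Mat rotMat(3, 3, CV_64F), rotTransMat(3, 4, CV_64F);
    cv::Rodrigues(rotVec.at(0), rotMat);
    cv::hconcat(rotMat, transVec.at(0), rotTransMat);
    return (camMat * rotTransMat);
}

//Build both projection matrices, then run the step-2 loop with P1/P2 to fill F
cv::Mat P1 = computeProjMat(camMat1, rotVec1, transVec1);
cv::Mat P2 = computeProjMat(camMat2, rotVec2, transVec2);

//Sanity check: for a matching point pair x1 (image 1) and x2 (image 2),
//the epipolar constraint x2^T * F * x1 should be close to zero.
double u1 = 0, v1 = 0, u2 = 0, v2 = 0; //fill in a known corresponding point pair
cv::Mat x1 = (cv::Mat_<double>(3, 1) << u1, v1, 1.0);
cv::Mat x2 = (cv::Mat_<double>(3, 1) << u2, v2, 1.0);
double residual = cv::Mat(x2.t() * F * x1).at<double>(0, 0);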


answered 2016-12-13 18:30:13 -0600 by Tetragramm

Not quite. First, rotation matrices don't work that way, and second, you have to pay attention to the coordinate systems.

There is more information on rotation matrices HERE, but to start there are three things you need to know.

  1. You can invert a rotation matrix by transposing it (rotMat.t())
  2. You combine rotation matrices by matrix-multiplying them (the * operator, e.g. rotMat1*rotMat2)
  3. The rotations and translations you get from calibration transform world points to camera relative points.

Bearing number 3 in mind, if you combine the two rotation matrices together directly, you would be transforming from world to camera1, then from world to camera2. This is wrong because after the first, the point isn't a world point any more.

You pick one of the cameras to be the origin (use camera1) and change the rotation and translation of the other so that they describe that camera relative to the world. If you want to really understand what's going on, you should derive this for yourself, but that's always easier if you know what you're going for, so here's code to invert the rotation and translation and get the transformation from camera to world.

cv::Mat cameraRotation, cameraTranslation;
cv::Rodrigues(rvec, cameraRotation); //3x1 rotation vector to 3x3 rotation matrix
cameraRotation = cameraRotation.t(); //Invert the rotation matrix
cameraTranslation = (-cameraRotation * tvec); //Appropriately transform the translation vector

So, now you have camera2->world and world->camera1. You combine them by multiplying the rotation matrices (the world->camera1 rotation goes on the left: rotMat1 * cameraRotation). Combining the translations is almost as easy, but the camera2->world translation has to be rotated into camera 1's frame before adding: transVec1 + rotMat1 * cameraTranslation.
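
A minimal sketch of that composition, assuming rvec1/tvec1 and rvec2/tvec2 are the per-camera outputs of calibration or solvePnP (the values below are only placeholders):

//Hedged sketch: compose camera2->world with world->camera1
cv::Mat rvec1 = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 0.0);  //placeholder pose of camera 1
cv::Mat tvec1 = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 0.0);
cv::Mat rvec2 = (cv::Mat_<double>(3, 1) << 0.0, 1.57, 0.0); //placeholder pose of camera 2
cv::Mat tvec2 = (cv::Mat_<double>(3, 1) << 0.1, 0.0, 0.0);
cv::Mat R1, R2;
cv::Rodrigues(rvec1, R1);               //world -> camera1
cv::Rodrigues(rvec2, R2);               //world -> camera2
cv::Mat R2toWorld = R2.t();             //camera2 -> world (rotation)
cv::Mat t2toWorld = -R2toWorld * tvec2; //camera2 -> world (translation)
cv::Mat R = R1 * R2toWorld;             //camera2 -> camera1 rotation
cv::Mat T = tvec1 + R1 * t2toWorld;     //camera2 -> camera1 translation
//Equivalent closed form: R = R1 * R2.t();  T = tvec1 - R * tvec2;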

Now you have an R and T from camera2->camera1, and you have what you need.

Or almost. Where did you get your original rvecs and tvecs? If you used camera calibration, did you use a static scene? Was the calibration target in exactly the same physical location for both calculations? Since world points are defined as places on the calibration target, if you moved the target, you moved your world points, and you actually have two unknown world frames. If necessary, take a new picture of an unmoving scene and use solvePnP to get your rvecs and tvecs.
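
For reference, a hedged sketch of getting a consistent rvec/tvec pair for each camera from one static view of the same target; the point lists and camMat*/distCoeffs* matrices are placeholders:

//Both cameras observe the SAME, unmoved target, so both poses share one world frame
std::vector<cv::Point3f> objectPoints;  //target corner positions in target coordinates
std::vector<cv::Point2f> imagePoints1;  //those corners detected in camera 1's image
std::vector<cv::Point2f> imagePoints2;  //...and in camera 2's image
cv::Mat rvec1, tvec1, rvec2, tvec2;
cv::solvePnP(objectPoints, imagePoints1, camMat1, distCoeffs1, rvec1, tvec1);
cv::solvePnP(objectPoints, imagePoints2, camMat2, distCoeffs2, rvec2, tvec2);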


Comments

Thank you for providing a solution, I am currently investigating it. I use OpenCV camera calibration (single camera calibration). Both cameras are perpendicular to one another and remain static during calibration. Only the grid is moved in the scene. Anyway, to give you a bigger picture of what I am trying to achieve: I have written a circle/ellipse detection algorithm to find 2 spheres in the camera images (2 per image). Sometimes it may occur that I have a false positive in either image 1 or 2. To eliminate these I thought about using epipolar geometry and checking whether there is a circle at the corresponding location in the other image. I am currently not sure if this approach is correct or can be used. EDIT: see original post

Ice_T02 (2016-12-15 04:18:36 -0600)

To be honest, I have no idea if your formula for the fundamental matrix is correct. If you're just looking to use it, and it seems you are, find it with the function findFundamentalMat.

Take an image of a static scene (say a chessboard) and find the corners in both images. That function will give you the fundamental matrix. I don't know if it will work at 90 degrees though. Without depth, you can only be sure that they match in the vertical direction, not left/right in either camera. (A small sketch of this approach follows below.)

Tetragramm (2016-12-15 20:01:52 -0600)
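
For readers arriving from a search, here is a hedged sketch of the approach described in the comment above: estimate F once from chessboard corners seen by both (static) cameras, then test whether a detection in one image lies near the epipolar line of a detection in the other. The pattern size, threshold, and point coordinates are placeholders:

//Estimate F from a chessboard visible in both cameras
cv::Mat img1, img2;                          //load the two camera images here (placeholders)
cv::Size patternSize(9, 6);                  //placeholder inner-corner count of the chessboard
std::vector<cv::Point2f> corners1, corners2;
bool ok1 = cv::findChessboardCorners(img1, patternSize, corners1);
bool ok2 = cv::findChessboardCorners(img2, patternSize, corners2);
cv::Mat F;
if (ok1 && ok2)
    F = cv::findFundamentalMat(corners1, corners2, cv::FM_8POINT);

//Check whether a circle center detected in image 1 has a plausible counterpart in
//image 2 by measuring the counterpart's distance to the corresponding epipolar line
std::vector<cv::Point2f> candidate1 = { cv::Point2f(320.f, 240.f) }; //placeholder detection in image 1
cv::Point2f candidate2(330.f, 250.f);                                //placeholder detection in image 2
std::vector<cv::Vec3f> epilines2;
cv::computeCorrespondEpilines(candidate1, 1, F, epilines2);          //lines in image 2 for image-1 points
const cv::Vec3f& l = epilines2[0];
double dist = std::abs(l[0] * candidate2.x + l[1] * candidate2.y + l[2]) /
              std::sqrt(l[0] * l[0] + l[1] * l[1]);                  //point-to-line distance
bool sameObject = dist < 3.0;                                        //pixel threshold (assumption)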
