I'm working on 3D tracking code, and I decided to use OpenCV functions to implement the algorithm.
1- Intrinsic Parameters:
I have two cameras, and I computed the intrinsic parameters of each one using a chessboard pattern, which gives me "_camMatrix" (and the distortion coefficients "_disCoeff").
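For reference, this step looks roughly like the sketch below (the 9x6 pattern size and the "boardImages" list are placeholders, not my exact code):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the intrinsic calibration, assuming a chessboard with 9x6 inner corners
cv::Size patternSize(9, 6);
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;

// 3D corner positions on the board plane (Z = 0), one unit per square
std::vector<cv::Point3f> corners3d;
for (int y = 0; y < patternSize.height; ++y)
    for (int x = 0; x < patternSize.width; ++x)
        corners3d.push_back(cv::Point3f((float)x, (float)y, 0.f));

for (const cv::Mat& img : boardImages)   // boardImages: captured chessboard views (placeholder)
{
    std::vector<cv::Point2f> corners2d;
    if (cv::findChessboardCorners(img, patternSize, corners2d))
    {
        imagePoints.push_back(corners2d);
        objectPoints.push_back(corners3d);
    }
}

cv::Mat _camMatrix, _disCoeff;
std::vector<cv::Mat> rvecs, tvecs;
cv::calibrateCamera(objectPoints, imagePoints, boardImages[0].size(),
                    _camMatrix, _disCoeff, rvecs, tvecs);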
2- Extrinsic Parameters:
I decided to use "solvePnP" to calculate the extrinsic parameters, So i defined 6 3D points in "_pts3d" vector: [(-1, 1, 0),(0, 1, 0),(1, 1, 0),(1,-1, 0),(0,-1, 0),(-1,-1, 0)] and for 6 times i get a new frame and push the 2D point to "_pts2d" vector, and finally calculate the "_rvec" and "_tvec" of the camera.
cv::solvePnP(_pts3d, _pts2d, _camMatrix, _disCoeff, _rvec, _tvec, false, cv::SOLVEPNP_ITERATIVE);
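The correspondence setup is roughly like this sketch (how the 2D points are detected is omitted; "detectMarkerInNewFrame" is a hypothetical helper, and the key assumption is that _pts2d[i] is the image of _pts3d[i], i.e. both vectors are in the same order):

std::vector<cv::Point3f> _pts3d = {
    {-1.f,  1.f, 0.f}, { 0.f,  1.f, 0.f}, { 1.f,  1.f, 0.f},
    { 1.f, -1.f, 0.f}, { 0.f, -1.f, 0.f}, {-1.f, -1.f, 0.f}
};

std::vector<cv::Point2f> _pts2d;
for (int i = 0; i < 6; ++i)
{
    cv::Point2f p = detectMarkerInNewFrame(i);   // hypothetical detection step, one point per frame
    _pts2d.push_back(p);                         // must follow the same order as _pts3d
}

cv::Mat _rvec, _tvec;                            // _camMatrix, _disCoeff come from step 1
// ...followed by the cv::solvePnP call above.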
3- Build the Projection Matrix:
cv::Mat rotM(3, 3, CV_64F);
cv::Rodrigues(_rvec, rotM);                 // 3x3 rotation from the rotation vector
// append tvec as a row to the transposed rotation: rows become [R^T ; t^T]
cv::Mat rotMT = rotM.t();
rotMT.push_back(_tvec.reshape(1, 1));       // _tvec must be CV_64F like rotM, or push_back asserts
// transpose back to get the 3x4 [R | t], then multiply by the camera matrix
cv::Mat _projMat = _camMatrix * rotMT.t();
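The transpose/push_back trick should give K * [R | t]; an equivalent and arguably clearer way to build the same matrix (assuming _tvec is a 3x1 CV_64F column vector) is:

cv::Mat rotM, Rt, _projMat;
cv::Rodrigues(_rvec, rotM);                 // 3x3 rotation
cv::hconcat(rotM, _tvec, Rt);               // 3x4 extrinsic matrix [R | t]
_projMat = _camMatrix * Rt;                 // 3x4 projection matrix K [R | t]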
At this point I have a projection matrix for each camera, P1 and P2. Now it's time for triangulation: in an endless loop I get a new 2D point from each camera, "pointsMat1" and "pointsMat2", and triangulate them into "pnt3D":
cv::triangulatePoints(P1, P2, pointsMat1, pointsMat2, pnt3D);
Then I calculate X, Y, Z:
X = pnt3D.at<double>(0,0) / pnt3D.at<double>(3,0);
Y = pnt3D.at<double>(1,0) / pnt3D.at<double>(3,0);
Z = pnt3D.at<double>(2,0) / pnt3D.at<double>(3,0);
Finally, I'm sending those readings to my graphics engine in OpenGL, but there is a big error in the X, Y, Z values. Is there anything wrong in my calculations?