Hello,
Yes, you can use cv::triangulatePoints(). To use it, you need the following: camera matrix 1/2, distortion coefficients 1/2, rotation vector 1/2, and translation vector 1/2.
The typical procedure is as follows:
1) Get camera calibration parameters
Described, for example, in the OpenCV camera calibration tutorial.
2) Undistort your points of interest (use cv::undistortPoints())
Store all points in Coords1/2. The undistorted points will be returned in undistCoords1/2:
vector<cv::Vec2d> Coords1, undistCoords1;
vector<cv::Vec2d> Coords2, undistCoords2;
cv::undistortPoints(Coords1, undistCoords1, camMat1, distCoeff1, cv::noArray(), camMat1);
cv::undistortPoints(Coords2, undistCoords2, camMat2, distCoeff2, cv::noArray(), camMat2);
3) Compute projection matrix for camera 1/2
Before we can triangulate the undistorted points, we have to compute the projection matrix for each camera. This is done as follows:
cv::Mat computeProjMat(cv::Mat camMat, vector<cv::Mat> rotVec, vector<cv::Mat> transVec)
{
cv::Mat rotMat(3, 3, CV_64F), RTMat(3, 4, CV_64F);
//1. Convert rotation vector into rotation matrix
cv::Rodrigues(rotVec.at(0), rotMat);
//2. Append translation vector to rotation matrix
cv::hconcat(rotMat, transVec.at(0), RTMat);
//3. Compute projection matrix by multiplying intrinsic parameter
//matrix (A) with 3 x 4 rotation and translation pose matrix (RT).
return (camMat * RTMat);
}
!!! Do this for both cameras !!!
4) Triangulate your points
cv::Mat triangCoords4D;
cv::triangulatePoints(projMat1, projMat2, undistCoords1, undistCoords2, triangCoords4D);
!!! Be aware the output points are 4D, not 3D !!!
To get 3D coordinates you can do the following:
cv::Vec4d triangCoords1 = triangCoords4D.col(0);
cv::Vec4d triangCoords2 = triangCoords4D.col(1);
cv::Vec3d Coords13D, Coords23D;
for (unsigned int i = 0; i < 3; i++) {
Coords13D[i] = triangCoords1[i] / triangCoords1[3];
Coords23D[i] = triangCoords2[i] / triangCoords2[3];
}
Hope this helps.