Hello,
I'm using OpenCV to build a stereo odometry module. The module itself is written in Java, but I use OpenCV to track features across successive stereo pairs. Here is my algorithm:
1. Rectify the images using the stereo calibration parameters.
2. Detect matches between the previous left image and the current left image using optical flow.
3. Compute the disparity between the current left image and the current right image.
4. Compute the 3D positions of the 2D points in the current left image that were matched with the previous left image.
5. Project the computed 3D points onto the current right image.
After step 5, I should have 2D points in both the current left and the current right image.
Step 4 computes the 3D points using this code:
```cpp
std::vector<cv::Point3d> point3dInput;
point3dInput.reserve(stereoLeftFeatureLocations->size());

for (size_t i = 0; i < stereoLeftFeatureLocations->size(); i++) {
    const cv::Point2f& pt = (*stereoLeftFeatureLocations)[i];
    point3dInput.push_back(cv::Point3d(
        pt.x,
        pt.y,
        disp.at<short>(cvRound(pt.y), cvRound(pt.x))));
}

std::vector<cv::Point3d> points3d;
cv::perspectiveTransform(point3dInput, points3d, stereoRectQ);
```
where:
- `stereoLeftFeatureLocations` is a pointer to a vector of `Point2f` holding the 2D points in the current left image obtained from the optical-flow computation,
- `disp` is the disparity map computed between the current rectified left and right images,
- `stereoRectQ` is the Q matrix produced by the rectification-parameter computation.
The obtained 3D points seem valid at this stage.
In step 5, I project the 3D points onto the current stereo right image with this code:
```cpp
cv::Mat decomposedCameraMatrix;
cv::Mat decomposedRotMatrix;
cv::Mat decomposedRotVector;
cv::Mat decomposedTransVector;
cv::Mat decomposedTransVectorEuclidian = cv::Mat(3, 1, CV_64F);

// Decompose the stereo right projection matrix
cv::decomposeProjectionMatrix(getStereoRightProjectionMatrix(),
                              decomposedCameraMatrix,
                              decomposedRotMatrix,
                              decomposedTransVector);

// From rotation matrix to rotation vector
cv::Rodrigues(decomposedRotMatrix, decomposedRotVector);

// From the 4x1 homogeneous vector to a 3x1 Euclidean vector
// (decomposedTransVector is a 4x1 column vector, so index rows, not columns)
decomposedTransVectorEuclidian.at<double>(0, 0) =
    decomposedTransVector.at<double>(0, 0) / decomposedTransVector.at<double>(3, 0);
decomposedTransVectorEuclidian.at<double>(1, 0) =
    decomposedTransVector.at<double>(1, 0) / decomposedTransVector.at<double>(3, 0);
decomposedTransVectorEuclidian.at<double>(2, 0) =
    decomposedTransVector.at<double>(2, 0) / decomposedTransVector.at<double>(3, 0);

cv::projectPoints(points3d, decomposedRotVector, decomposedTransVectorEuclidian,
                  decomposedCameraMatrix, distortion, rightProjectedPoints);
```
My problem is that the projected points end up at the same coordinates in the left and right images, as if the translation between the two cameras were not being applied.
Could anyone help me? Maybe my stereo-matching method is flawed, but I have no other ideas at the moment.
Thanks.
Julien