I have two views of an experiment. I would like to map points in the left image to predicted locations in the right image.
So far, I have done the following:
Calibrated each camera individually with cv2.calibrateCamera. For the first camera this gives retval1, cameraMatrix1, distCoeffs1, rvecs1, tvecs1; for the second, retval2, cameraMatrix2, distCoeffs2, rvecs2, tvecs2.
Calibrated the stereo pair with cv2.stereoCalibrate. This gives ret, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F.
Computed the rectification transforms for each camera with cv2.stereoRectify. This gives R1, R2, P1, P2, Q, validPixROI1, validPixROI2.
Now I would like to perform two tasks.
Given image points of the same feature in the left and right views, I would like to obtain the 3D world coordinates of this feature.
Given image points of a feature in the left (right) view, I would like to obtain image points of the same feature in the right (left) view.
I gather that the first task involves cv2.triangulatePoints(), although I haven't managed to get it working yet. I don't know where to start on the second task. Any advice is appreciated!