How do I estimate camera pose from two sets of features in OpenCV?
I have a sequence of images and I would like to estimate the camera pose for each frame. I have detected features in each frame and tracked them through the sequence. To estimate the pose between consecutive frames, I use the following OpenCV routines:
    Mat essentialMatrix = findEssentialMat(pointsA, pointsB, f, pp, RANSAC, 0.999, 1.0, mask);
    recoverPose(essentialMatrix, pointsA, pointsB, R, T, focalLength, principalPoint);

where pointsA and pointsB contain the 2D coordinates of features that are present in both frames, with pointsA taken from the frame immediately before pointsB.
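For reference, here is a minimal, self-contained sketch of my pipeline. The file names and intrinsics are placeholders, and the goodFeaturesToTrack/calcOpticalFlowPyrLK step stands in for my actual feature detection and tracking code:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    using namespace cv;

    int main()
    {
        // Hypothetical intrinsics -- replace with values from your calibration.
        const double focalLength = 700.0;
        const Point2d principalPoint(640.0, 360.0);

        // Two consecutive frames (placeholder file names).
        Mat frameA = imread("frameA.png", IMREAD_GRAYSCALE);
        Mat frameB = imread("frameB.png", IMREAD_GRAYSCALE);

        // Detect features in the earlier frame.
        std::vector<Point2f> pointsA;
        goodFeaturesToTrack(frameA, pointsA, 500, 0.01, 10);

        // Track them into the later frame with pyramidal Lucas-Kanade.
        std::vector<Point2f> pointsB;
        std::vector<uchar> status;
        std::vector<float> err;
        calcOpticalFlowPyrLK(frameA, frameB, pointsA, pointsB, status, err);

        // Keep only correspondences that were tracked successfully.
        std::vector<Point2f> goodA, goodB;
        for (size_t i = 0; i < status.size(); ++i) {
            if (status[i]) {
                goodA.push_back(pointsA[i]);
                goodB.push_back(pointsB[i]);
            }
        }

        // Estimate the essential matrix with RANSAC; mask flags the inliers.
        Mat mask;
        Mat essentialMatrix = findEssentialMat(goodA, goodB, focalLength, principalPoint,
                                               RANSAC, 0.999, 1.0, mask);

        // Recover the relative rotation R and (unit-scale) translation T.
        Mat R, T;
        recoverPose(essentialMatrix, goodA, goodB, R, T, focalLength, principalPoint);

        std::cout << "R = " << R << "\nT = " << T << std::endl;
        return 0;
    }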
The problem I am encountering is that the R and T estimates are very noisy, to the point where I believe something is wrong with my pose estimation.
My question is: how do I estimate the camera pose from two sets of features?
Note: I am familiar with this answered question. However, I believe OpenCV 3 now includes methods that address this problem more elegantly.