I have a well-calibrated stereo camera. I ran an experiment in which, in the first round, I computed a point cloud directly from the non-rectified images (I did this manually for a small set of points that were matched by hand).
Then I did the same using the rectified images for the same points (of course, I picked their coordinates from the rectified images, not the originals). A simplified sketch of the two pipelines I am comparing is below; it is not my actual code.
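In this sketch, `K1, D1, K2, D2, R, T` stand for my `stereoCalibrate` outputs, `pts1`/`pts2` are the hand-matched pixels in the original images, and `rpts1`/`rpts2` are the same points picked in the rectified images (all Nx2 float arrays); the function names are just placeholders:

```python
import numpy as np
import cv2

def cloud_from_unrectified(K1, D1, K2, D2, R, T, pts1, pts2):
    # Undistort the raw pixels to normalized coordinates, then triangulate
    # with the original stereo geometry; camera 1 is the origin.
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, D1)
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, D2)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # [I | 0]
    P2 = np.hstack([R, T.reshape(3, 1)])            # [R | T]
    X = cv2.triangulatePoints(P1, P2, n1.reshape(-1, 2).T, n2.reshape(-1, 2).T)
    return (X[:3] / X[3]).T

def cloud_from_rectified(K1, D1, K2, D2, R, T, image_size, rpts1, rpts2):
    # Triangulate the points picked in the rectified images with the
    # rectified projection matrices returned by stereoRectify.
    R1, R2, P1r, P2r, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    X = cv2.triangulatePoints(P1r, P2r, rpts1.T, rpts2.T)
    return (X[:3] / X[3]).T, R1
```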
However, the output point clouds did not match! There was a large translation and some rotation between them.
My question is: is this expected behavior? In other words, do I need to apply a further transformation to one of the clouds to bring it into the same coordinate frame as the other?
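For example, my current guess (which may well be wrong) is that the rectified cloud lives in the rectified frame of camera 1, so that something like the following, using the helper sketches above, would be needed to align the two clouds:

```python
# Pure guess, not verified: if the rectified-image cloud is expressed in the
# rectified frame of camera 1 (the original camera-1 frame rotated by R1 from
# stereoRectify), rotating it back should roughly align the two clouds.
cloud_rect, R1 = cloud_from_rectified(K1, D1, K2, D2, R, T, image_size, rpts1, rpts2)
cloud_rect_in_cam1 = cloud_rect @ R1   # per point: (R1.T @ X).T for column vectors
cloud_unrect = cloud_from_unrectified(K1, D1, K2, D2, R, T, pts1, pts2)
# Should cloud_rect_in_cam1 (approximately) equal cloud_unrect?
```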
I did all the work in OpenCV 3.3.1, mostly following examples from the repository.
P.S. I have not shown my actual code because I just want to confirm whether this is expected behavior before I start questioning my implementation.