How to rectify an image based on the intrinsics of another camera

According to the documentation of stereoRectify, you can rectify a pair of images given the two camera matrices, their distortion coefficients, and the rotation and translation from one camera to the other.
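For reference, this is roughly the call I mean (a minimal sketch; all values, including the image size, R, and T, are placeholders, since R and T are exactly what I am missing):

    import cv2
    import numpy as np

    K1 = np.eye(3)                      # placeholder: KITTI camera matrix
    D1 = np.zeros(5)                    # placeholder: its distortion coefficients
    K2 = np.eye(3)                      # placeholder: my camera matrix
    D2 = np.zeros(5)                    # placeholder: my distortion coefficients
    image_size = (1392, 512)            # placeholder: unrectified image size
    R = np.eye(3)                       # rotation from their camera to mine (unknown)
    T = np.array([0.5, 0.0, 0.0])       # translation from their camera to mine (unknown)

    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, D1, K2, D2, image_size, R, T)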

I would like to rectify an image taken with my own camera against the stereo setup of the KITTI dataset. From their calibration files I know the camera matrix and the image size before rectification, and from this PNG I know the position of each of their cameras relative to the front wheels of the car and to the ground.

I can also do a monocular calibration on my camera and get a camera matrix and distortion coefficients.
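This is roughly how I obtain my camera matrix and distortion coefficients (a sketch of a standard chessboard calibration; the pattern size, square size, and the 'calib/*.png' path are placeholders for my own setup):

    import cv2
    import numpy as np
    import glob

    pattern_size = (9, 6)      # placeholder: inner corners of my chessboard
    square_size = 0.025        # placeholder: square edge length in meters

    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    for fname in glob.glob('calib/*.png'):
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # image size is (width, height)
    ret, K2, D2, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)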

I am having trouble coming up with the rotation matrix and translation vector between the coordinate systems of the first and second cameras, i.e. from their camera to mine.

I positioned my camera on top of my car at almost exactly the same height and the same distance from the center of the front wheels as shown in the PNG.

However, I am now at a loss as to how to construct the joint rotation-translation matrix. In a normal stereo calibration, these are returned by the stereoCalibrate function.
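To be clear about what I mean, this is the usual workflow that would give me R and T directly if both cameras had observed the same calibration pattern (a sketch only; the 'left/*.png' and 'right/*.png' directories are placeholders, and this is not possible in my case because the KITTI cameras and mine never saw the same board):

    import cv2
    import numpy as np
    import glob

    pattern_size = (9, 6)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

    obj_points, img_points1, img_points2 = [], [], []
    for f1, f2 in zip(sorted(glob.glob('left/*.png')),
                      sorted(glob.glob('right/*.png'))):
        g1 = cv2.imread(f1, cv2.IMREAD_GRAYSCALE)
        g2 = cv2.imread(f2, cv2.IMREAD_GRAYSCALE)
        ok1, c1 = cv2.findChessboardCorners(g1, pattern_size)
        ok2, c2 = cv2.findChessboardCorners(g2, pattern_size)
        if ok1 and ok2:
            obj_points.append(objp)
            img_points1.append(c1)
            img_points2.append(c2)

    image_size = g1.shape[::-1]

    # calibrate each camera on its own first, then fix the intrinsics
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_points, img_points1, image_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_points, img_points2, image_size, None, None)

    ret, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points1, img_points2,
        K1, D1, K2, D2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)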

I looked at some references on coordinate transformations, but I don't have enough practice with them to figure it out on my own.

Any help would be highly appreciated.