Hi all,
I have an Android project in which I capture an image from the main camera as well as the "depth camera". The only problem is that the two sensors are placed a few centimeters apart, which produces a result like the one seen below.
In order to map each pixel of the depth image onto the main camera image, I found the following method in the Android developer documentation:
To transform a pixel coordinates between two cameras facing the same direction, first the source camera CameraCharacteristics#LENS_DISTORTION must be corrected for. Then the source camera CameraCharacteristics#LENS_INTRINSIC_CALIBRATION needs to be applied, followed by the CameraCharacteristics#LENS_POSE_ROTATION of the source camera, the translation of the source camera relative to the destination camera, the CameraCharacteristics#LENS_POSE_ROTATION of the destination camera, and finally the inverse of CameraCharacteristics#LENS_INTRINSIC_CALIBRATION of the destination camera. This obtains a radial-distortion-free coordinate in the destination camera pixel coordinates.
but I am not quite sure how to implement this with OpenCV.
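
To make it clearer what I mean, here is how I currently understand that paragraph, written as a Python/NumPy + OpenCV sketch of the math (I would port the same steps to the OpenCV Android bindings later). All intrinsics, distortion coefficients and the relative pose below are made-up placeholder values; in the real app they would come from the CameraCharacteristics of the two cameras, and depth_m would be the metric depth read from the depth image:

```python
import cv2
import numpy as np

# All calibration values below are placeholders. On Android they would come
# from CameraCharacteristics: LENS_INTRINSIC_CALIBRATION -> K,
# LENS_DISTORTION -> dist, and the relative pose derived from the
# LENS_POSE_ROTATION / LENS_POSE_TRANSLATION of the two cameras.
K_depth = np.array([[360.0,   0.0, 320.0],
                    [  0.0, 360.0, 240.0],
                    [  0.0,   0.0,   1.0]])
dist_depth = np.array([0.1, -0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

K_rgb = np.array([[1400.0,    0.0, 960.0],
                  [   0.0, 1400.0, 540.0],
                  [   0.0,    0.0,   1.0]])

# Pose of the depth camera expressed in the RGB camera frame (placeholder:
# identity rotation and a ~2.5 cm horizontal baseline).
R_depth_to_rgb = np.eye(3)
t_depth_to_rgb = np.array([[0.025], [0.0], [0.0]])

def depth_pixel_to_rgb(u, v, depth_m):
    """Map one depth-image pixel (u, v) with metric depth to RGB pixel coords."""
    # 1. Correct the depth camera's lens distortion and apply the inverse of
    #    its intrinsics: undistortPoints returns normalized image coordinates.
    pt = np.array([[[float(u), float(v)]]], dtype=np.float64)
    x, y = cv2.undistortPoints(pt, K_depth, dist_depth)[0, 0]

    # 2. Back-project to a 3D point in the depth camera frame using the
    #    metric depth at that pixel (DEPTH16 values are in millimeters).
    P_depth = np.array([[x * depth_m], [y * depth_m], [depth_m]])

    # 3. Rotate/translate the point into the RGB camera frame.
    P_rgb = R_depth_to_rgb @ P_depth + t_depth_to_rgb

    # 4. Project with the RGB intrinsics. Passing None for the distortion
    #    coefficients gives the "radial-distortion-free" destination
    #    coordinates the documentation describes.
    img_pts, _ = cv2.projectPoints(P_rgb.reshape(1, 1, 3),
                                   np.zeros(3), np.zeros(3), K_rgb, None)
    return img_pts[0, 0]   # (u', v') in RGB pixel coordinates
```

My plan would be to run this per pixel (or vectorized over the whole depth map) and then port the same math to the Android side. Does that match what the documentation describes, or am I missing a step?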