Dear fellow OpenCV users and developers, I have been trying for quite some time to figure out how to transform my phase map into 3D points.
A little explanation: OpenCV has a nice stereo vision workflow: stereo calibration, image acquisition, rectification, disparity calculation via some correspondence technique, and finally reprojectImageTo3D to get the 3D point cloud.
I'm replacing one of the cameras with a projector for dense correspondence calculation. I can stereo calibrate the camera/projector pair; then I project vertical line patterns while the camera takes pictures of them, and I compute the mapping from projector columns to camera pixels. So I only have a projector column mapping, not both a column and a row mapping. The mapping ends up looking like a disparity image, but it's not a disparity. The problem is that during regular camera/camera disparity calculation both camera images are rectified, so all distortion is removed (amongst other things, the epipolar lines get aligned). The correspondence algorithm then searches both rectified images row by row (since the epipolar lines coincide with the rows) and calculates the disparity.
In my situation I can't rectify the images, because there is only one image, from the camera. But I can find the mapping of projector columns to camera pixels. So I can undistort this phase map on the camera side, which accounts for the camera's distortion, but I don't know how to undistort the projector side of the equation.
My question: does OpenCV have a function, or some group of functions, that I can use to transform my projector pixel columns, mapped to my camera pixels, into 3D? I want to account for all distortion. Again, I only project vertical lines, which gives me a 1D projector coordinate (the column index) for each camera pixel (x, y).
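In case it helps clarify what I'm after: my current understanding is that each decoded projector column defines a plane through the projector center, and each camera pixel defines a ray through the camera center, so intersecting the two should give the 3D point. A minimal numpy sketch of that idea (all calibration values synthetic, identity rotation, and distortion ignored; a real version would undistort both sides first):

```python
import numpy as np

# Synthetic calibration: camera at the origin, projector translated along x.
Kc = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
Kp = np.array([[1000., 0., 400.], [0., 1000., 300.], [0., 0., 1.]])
C = np.array([100., 0., 0.])   # projector center in camera frame (R = identity)

# Ground-truth 3D point, used only to fabricate consistent measurements.
X = np.array([50., 20., 1000.])

# Measurements: the camera pixel of X, and the projector column it decodes to.
uc = Kc @ X
uc = uc[:2] / uc[2]
up = Kp[0, 0] * (X[0] - C[0]) / X[2] + Kp[0, 2]

# Camera ray through the (undistorted) pixel, in normalized coordinates.
ray = np.array([(uc[0] - Kc[0, 2]) / Kc[0, 0],
                (uc[1] - Kc[1, 2]) / Kc[1, 1],
                1.0])

# Projector column up defines a plane through C with this normal
# (points satisfying x_p = a * z_p in the projector frame).
a = (up - Kp[0, 2]) / Kp[0, 0]
n = np.array([1.0, 0.0, -a])

# Intersect the ray with the plane: solve (s * ray - C) . n = 0 for s.
s = np.dot(C, n) / np.dot(ray, n)
point = s * ray
print(point)  # recovers X up to floating point
```

Is this ray-plane intersection the intended way, or does OpenCV already provide a function for this camera/projector case?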
Thanks for any help or advice.
nmm02003