calibrateCamera - is it possible to constrain the principal point to lie inside the image bounds?
I’m making some projection mapping software in Unity3D. I’m trying to calibrate a camera in Unity with OpenCV, using the OpenCVSharp .NET wrapper. It’s probably not essential to the question, but in case it’s helpful: I’m following a similar technique to that of mapamok, and my code is on GitHub here.
Essentially I’m using a single view of a 3D calibration target. This gives me a set of 3D world coordinates and their 2D projections on the image plane. I already have a fairly good intrinsic matrix, so I can use solvePnP to get a decent rotation and translation for the camera. This works okay.
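For reference, that step looks roughly like the OpenCVSharp sketch below (`EstimatePose` is just an illustrative wrapper name; the correspondence arrays and the intrinsic guess come from my own setup):

```csharp
using OpenCvSharp;

public static class PoseFromCorrespondences
{
    // Illustrative wrapper around the solvePnP step described above.
    // objectPoints/imagePoints are the 3D-2D correspondences from the
    // single view of the target; cameraMatrix is the rough intrinsic guess.
    public static void EstimatePose(Point3f[] objectPoints, Point2f[] imagePoints,
                                    double[,] cameraMatrix,
                                    out double[] rvec, out double[] tvec)
    {
        var distCoeffs = new double[5]; // assuming negligible lens distortion here

        Cv2.SolvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                     out rvec, out tvec);

        // rvec is a Rodrigues rotation vector and tvec a translation, which
        // together pose the camera relative to the calibration target.
    }
}
```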
However, the intrinsic matrix is only okay, and I’d like to use calibrateCamera to refine those parameters too. But when I use calibrateCamera, the intrinsic matrix changes radically from the initial guess. Taken together, the intrinsic and extrinsic matrices probably do give the best fit to the correspondences, but the result can be strange, e.g. a negative principal point. As I’m using Unity and not OpenGL, I’ll be applying the resulting intrinsic matrix to a Unity camera, and results like a negative principal point are quite difficult to apply there.
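For context, the way I’m applying the intrinsics to the Unity camera is essentially the standard OpenCV-to-OpenGL off-centre projection, sketched below (`ProjectionFromIntrinsics` is my naming, and the signs of the third-column terms may need flipping depending on your coordinate conventions):

```csharp
using UnityEngine;

public static class IntrinsicsToUnity
{
    // Builds an off-centre projection matrix from OpenCV-style intrinsics
    // (fx, fy, cx, cy in pixels). Unity's Camera.projectionMatrix follows
    // the OpenGL convention, so the principal point offset goes into the
    // third column. Sign conventions vary, so treat this as a starting point.
    public static Matrix4x4 ProjectionFromIntrinsics(
        float fx, float fy, float cx, float cy,
        float width, float height, float near, float far)
    {
        var p = Matrix4x4.zero;
        p[0, 0] = 2f * fx / width;
        p[1, 1] = 2f * fy / height;
        p[0, 2] = 1f - 2f * cx / width;        // horizontal principal point offset
        p[1, 2] = 2f * cy / height - 1f;       // vertical offset (OpenCV y points down)
        p[2, 2] = -(far + near) / (far - near);
        p[2, 3] = -2f * far * near / (far - near);
        p[3, 2] = -1f;
        return p; // assign to camera.projectionMatrix
    }
}
```

With this mapping, a negative cx pushes p[0, 2] beyond 1 and shoves the frustum entirely off to one side, which is why such a result is so awkward to apply.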
I know I can fix parameters using flags, e.g. CV_CALIB_FIX_PRINCIPAL_POINT, but then those parameters aren’t optimised at all. Is there any way to constrain the principal point to lie inside the image? Or to tell OpenCV that the intrinsic estimate is already close and shouldn’t be changed much? Here’s roughly the call I’m making:
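This is a sketch using OpenCVSharp’s array-based overload, with placeholder names for my real correspondence data; the flags show the all-or-nothing choice I’m describing:

```csharp
using OpenCvSharp;

public static class CalibrationSketch
{
    // Sketch of the calibrateCamera call. objectPoints/imagePoints are the
    // same correspondences used for solvePnP; cameraMatrix holds my initial
    // guess and is refined in place.
    public static double Calibrate(Point3f[] objectPoints, Point2f[] imagePoints,
                                   Size imageSize, double[,] cameraMatrix)
    {
        var distCoeffs = new double[5]; // k1, k2, p1, p2, k3

        // UseIntrinsicGuess seeds the optimiser with my estimate instead of
        // letting OpenCV re-initialise the intrinsics itself.
        // FixPrincipalPoint pins cx/cy completely - the all-or-nothing
        // behaviour I'd like to avoid.
        return Cv2.CalibrateCamera(
            new[] { objectPoints },   // a single view of the 3D target
            new[] { imagePoints },
            imageSize,
            cameraMatrix,
            distCoeffs,
            out Vec3d[] rvecs,
            out Vec3d[] tvecs,
            CalibrationFlags.UseIntrinsicGuess | CalibrationFlags.FixPrincipalPoint);
    }
}
```

The return value is the RMS reprojection error, which is at least useful for sanity-checking the result.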
I’m open to any suggestions – maybe I shouldn’t be getting a negative principal point at all, and my point correspondences just need to be better?
How large is your reprojection error after calibrating the camera on your feature points? A good error should be well below 1 px; if yours is much larger, your correspondences or the 3D points in your model are wrong. (Or your camera has unusual distortion, e.g. it is very fisheye and you aren’t using the fisheye distortion model.)
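You can compute it yourself with projectPoints. A rough OpenCVSharp sketch, assuming the array-based overload (`Compute` is an illustrative helper taking the rvec/tvec you got from solvePnP or calibrateCamera):

```csharp
using System;
using System.Linq;
using OpenCvSharp;

public static class ReprojectionError
{
    // Mean reprojection error for one view: project the 3D points with the
    // estimated pose and intrinsics, then average the pixel distance to the
    // observed 2D points. Well below 1 px is what you should be seeing.
    public static double Compute(Point3f[] objectPoints, Point2f[] imagePoints,
                                 double[] rvec, double[] tvec,
                                 double[,] cameraMatrix, double[] distCoeffs)
    {
        Cv2.ProjectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs,
                          out Point2f[] projected, out _);

        return imagePoints
            .Zip(projected, (obs, proj) =>
                Math.Sqrt((obs.X - proj.X) * (obs.X - proj.X) +
                          (obs.Y - proj.Y) * (obs.Y - proj.Y)))
            .Average();
    }
}
```

Looking at the per-point distances (not just the mean) also helps: one or two outliers usually point to a mislabelled correspondence rather than a bad intrinsic guess.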