Camera calibration estimates incorrect field of view
Using the OpenCV tutorial code for camera calibration, I get a field of view that is larger than it should be.
The physically measured field of view is 61.3 deg. For calibration pattern images taken in the same setup, cv::calibrateCameraRO() returns a camera matrix corresponding to a horizontal field of view of 57.4 deg (cropped image). The camera matrix computed by cv::getOptimalNewCameraMatrix() with alpha set to 0 (crop to valid image pixels) and passed to cv::initUndistortRectifyMap() corresponds to a horizontal field of view of 62.6 deg.
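For reference, the HFOV numbers above can be derived from a camera matrix as in the minimal sketch below; the intrinsics are placeholder values for illustration, not my actual calibration output:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <iostream>

int main()
{
    const cv::Size imageSize(2048, 1536);

    // Placeholder intrinsics; in the real setup these come from
    // cv::calibrateCameraRO().
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        1850.0,    0.0, 1024.0,
           0.0, 1850.0,  768.0,
           0.0,    0.0,    1.0);
    cv::Mat dist = cv::Mat::zeros(1, 8, CV_64F);  // rational model, 8 coefficients

    // HFOV from the focal length: 2 * atan(w / (2 * fx)).
    double fx = K.at<double>(0, 0);
    double hfov = 2.0 * std::atan(imageSize.width / (2.0 * fx)) * 180.0 / CV_PI;
    std::cout << "HFOV from calibration: " << hfov << " deg\n";

    // Camera matrix for alpha = 0 (crop to valid pixels), i.e. the matrix
    // that is then handed to cv::initUndistortRectifyMap().
    cv::Mat newK = cv::getOptimalNewCameraMatrix(K, dist, imageSize, 0.0);
    double fxNew = newK.at<double>(0, 0);
    double hfovNew = 2.0 * std::atan(imageSize.width / (2.0 * fxNew)) * 180.0 / CV_PI;
    std::cout << "HFOV after alpha=0 crop: " << hfovNew << " deg\n";
    return 0;
}
```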
The setup for the calibration is as follows:
- industrial cameras with 1/1.8" sensor size and 2048x1536 image resolution
- 11x8 asymmetric circles pattern printed on an aluminum sandwich panel (DIN A0), circle diameter 60mm, inter-circle distance 120mm
- 15 images of the calibration pattern taken in different poses spread over the entire field of view (a selection from hundreds of pictures), good lighting conditions
- rational calibration model tested with 3, 5, and 8 distortion coefficients
I used the asymmetric circles pattern because its detection seems to be more reliable under varying lighting conditions and less dependent on image sharpness and chromatic aberrations.
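The detection and calibration steps follow the OpenCV tutorial; a condensed sketch is below. The file pattern, blob detector parameters, the mapping of the 120 mm spacing onto the grid unit, and the fixed-point index are illustrative assumptions, not my exact code:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    const cv::Size patternSize(11, 8);       // asymmetric circles grid
    const cv::Size imageSize(2048, 1536);
    const float s = 0.060f;                  // grid unit in metres (assumed mapping
                                             // of the 120 mm inter-circle distance)

    // Blob detector for the large printed circles (parameters assumed).
    cv::SimpleBlobDetector::Params blobParams;
    blobParams.maxArea = 50000.0f;
    cv::Ptr<cv::FeatureDetector> blobDetector = cv::SimpleBlobDetector::create(blobParams);

    // Detect the grid in every calibration image (hypothetical file pattern).
    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<cv::String> files;
    cv::glob("calib_*.png", files);
    for (const auto& f : files)
    {
        cv::Mat img = cv::imread(f, cv::IMREAD_GRAYSCALE);
        std::vector<cv::Point2f> centers;
        if (cv::findCirclesGrid(img, patternSize, centers,
                                cv::CALIB_CB_ASYMMETRIC_GRID, blobDetector))
            imagePoints.push_back(centers);
    }

    // Object points in the tutorial's asymmetric-grid convention.
    std::vector<cv::Point3f> objp;
    for (int i = 0; i < patternSize.height; ++i)
        for (int j = 0; j < patternSize.width; ++j)
            objp.emplace_back((2 * j + i % 2) * s, i * s, 0.0f);
    std::vector<std::vector<cv::Point3f>> objectPoints(imagePoints.size(), objp);

    // Release-object calibration; CALIB_RATIONAL_MODEL enables the
    // 8-coefficient model (the smaller models use the default 5-coefficient
    // model plus CALIB_FIX_* flags).
    cv::Mat K, dist, newObjPoints;
    std::vector<cv::Mat> rvecs, tvecs;
    const int iFixedPoint = patternSize.width - 1;  // assumed fixed-point index
    double rms = cv::calibrateCameraRO(objectPoints, imagePoints, imageSize,
                                       iFixedPoint, K, dist, rvecs, tvecs,
                                       newObjPoints, cv::CALIB_RATIONAL_MODEL);
    (void)rms;
    return 0;
}
```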
The undistortion results with different sets of images are consistent and plausible: all lines are straight and there are no warping artifacts in the undistorted images.
The actual camera field of view has been verified in a test rig as well as in overlays with 3D point clouds. Using different image sets does not lead to better FOV estimates.
So my principal questions are:
- Is camera calibration supposed to return a camera matrix with a more or less exact field of view?
- What can cause the estimated field of view to be larger than the physically measured one?
- How can I improve the automatic field of view estimation?
Experiments with a checkerboard pattern printed on an aluminum sandwich panel have shown that, for the same camera setup, OpenCV determines a field of view smaller than the actual one. The only differences are the board and pattern sizes:
cv::calibrateCameraRO() returns a camera matrix corresponding to 54.1 deg HFOV. The camera matrix from cv::getOptimalNewCameraMatrix() with alpha set to 0, passed to cv::initUndistortRectifyMap(), corresponds to 55.3 deg HFOV.
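The checkerboard runs only swap the detection step; a sketch under assumed board dimensions and file names (not my actual board) looks like this, with the refined corners then fed into the same cv::calibrateCameraRO() pipeline:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);  // inner-corner count, assumed for illustration
    cv::Mat img = cv::imread("checker_01.png", cv::IMREAD_GRAYSCALE);  // hypothetical file

    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(img, boardSize, corners,
                                           cv::CALIB_CB_ADAPTIVE_THRESH |
                                           cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        // Refine to sub-pixel accuracy, as in the OpenCV tutorial.
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.001));
    }
    // The refined corners feed the same cv::calibrateCameraRO() call as above.
    return 0;
}
```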
Is OpenCV sensitive to board size or grid size, or both?
By the way, the MATLAB Computer Vision Toolbox returns an estimate very close to the actual FOV value for the same checkerboard.