After debugging stuff for quite a while I figured out that there was an error in my workflow.
To modularize the different steps for better command line usage I split the calculation of the camera matrices from the stereo calibration (which is good in itself). What I missed, though, is that stereoCalibrate modifies the camera and distortion coefficient matrices.
When I then read the matrices back in for display, I was using the original matrices - which is plainly wrong.
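Roughly, the workflow looks like this (a minimal Python/OpenCV sketch, not my actual code - the variable names, the file name and the flag choice are just for illustration, and the detected object/image points are assumed to exist already). Whether the intrinsics actually get refined depends on the flags; with CALIB_USE_INTRINSIC_GUESS, as in this sketch, they do, so the matrices returned by stereoCalibrate are the ones to keep:

```python
import numpy as np
import cv2

# Assumed inputs from the detection step:
#   objpoints    - list of (N, 3) float32 pattern points in world units (meters)
#   imgpoints_l  - list of (N, 1, 2) float32 detected points, left camera
#   imgpoints_r  - list of (N, 1, 2) float32 detected points, right camera
#   image_size   - (width, height) of the calibration images

# Step 1: per-camera calibration (my separate "camera matrix" stage)
rms_l, K_l, d_l, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
rms_r, K_r, d_r, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)

# Step 2: stereo calibration. With CALIB_USE_INTRINSIC_GUESS the intrinsics are
# refined, so the returned matrices are NOT the same as K_l/d_l/K_r/d_r above.
rms, K_l2, d_l2, K_r2, d_r2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    K_l, d_l, K_r, d_r, image_size,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)

# Save the *refined* matrices for the display/rectification stage.
# Reloading the Step-1 matrices there was exactly my mistake.
np.savez("stereo_calib.npz", K_l=K_l2, d_l=d_l2, K_r=K_r2, d_r=d_r2, R=R, T=T)
```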
Update: After changing my setup I was stuck with bad results a second time. In the original setup the camera views were aligned at about 1 m distance from the cameras. (Aligned means that both cameras capture roughly the same area.) At this distance I was using the calibration target (an A4 sheet with an asymmetric circles grid pattern).
I then changed the setup to get a larger capture area, so the cameras were aligned at about 2.5 m distance. Now the calibration target was pretty small relative to its distance from the cameras. So I tried two things: first, keeping the small target at the new overlap distance, and second, holding it much closer to the cameras so it filled more of the frame.
The first approach led to very high errors even for a single camera: the target just didn't occupy enough space in the image, so the optimization was not going well. So I tried the second approach. This time the reported error was good, but when I displayed the rectification it was nonsense like before.
I then remembered a picture from the documentation which showed a guy holding a ridiculously large calibration target. After putting two and two together I figured out my error.
For stereoRectify() it is crucial that the whole field of view of both cameras is covered by a known pattern. Because of the separation between the cameras there is only one plane where the overlap of both fields of view is at its maximum. Therefore the calibration pattern has to cover a large enough area of the field of view at this plane of maximum overlap.
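For checking the rectification visually, something along these lines works (continuing the sketch above; file names are placeholders). After a good rectification, corresponding points lie on the same image row in both views, so drawing horizontal lines across the pair makes problems obvious:

```python
import numpy as np
import cv2

# Load the refined matrices saved by the stereo calibration step.
calib = np.load("stereo_calib.npz")
K_l, d_l, K_r, d_r, R, T = (calib[k] for k in ("K_l", "d_l", "K_r", "d_r", "R", "T"))

img_l = cv2.imread("left.png")
img_r = cv2.imread("right.png")
h, w = img_l.shape[:2]

# Compute the rectification transforms from the stereo extrinsics.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K_l, d_l, K_r, d_r, (w, h), R, T)

# Build the remap tables and warp both images into the rectified frame.
map1_l, map2_l = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, (w, h), cv2.CV_32FC1)
map1_r, map2_r = cv2.initUndistortRectifyMap(K_r, d_r, R2, P2, (w, h), cv2.CV_32FC1)
rect_l = cv2.remap(img_l, map1_l, map2_l, cv2.INTER_LINEAR)
rect_r = cv2.remap(img_r, map1_r, map2_r, cv2.INTER_LINEAR)

# Draw a few horizontal lines over the side-by-side pair to check epipolar alignment.
both = np.hstack((rect_l, rect_r))
for y in range(0, h, 40):
    cv2.line(both, (0, y), (both.shape[1], y), (0, 255, 0), 1)
cv2.imshow("rectified", both)
cv2.waitKey(0)
```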
Update 2: when you specify the size of the grid, don't forget that for the asymmetric pattern the grid size is 0.5 times the distance between the dots. Use meters as the unit for all values - it avoids confusion about the numbers.
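To illustrate what that factor means in code, this is roughly how the object points for the asymmetric circles grid are built, following the convention from the OpenCV calibration samples. The pattern size and spacing below are placeholder values, not my actual numbers; spacing is in meters and equals half the dot-to-dot distance within a row, which is how I read the 0.5 factor:

```python
import numpy as np
import cv2

# Placeholder target description - adapt to your own pattern.
pattern_size = (4, 11)   # (columns, rows) of the asymmetric circles grid
spacing = 0.02           # meters; half the dot-to-dot distance within a row

# Object points, every other row shifted by one spacing (OpenCV sample convention).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
for i in range(pattern_size[1]):        # rows
    for j in range(pattern_size[0]):    # columns
        objp[i * pattern_size[0] + j] = ((2 * j + i % 2) * spacing, i * spacing, 0)

# Detect the grid in one calibration image.
img = cv2.imread("calib_left_000.png", cv2.IMREAD_GRAYSCALE)
found, centers = cv2.findCirclesGrid(img, pattern_size, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
if found:
    # objp and centers form one (objectPoints, imagePoints) pair for calibrateCamera.
    print("grid found:", centers.shape)
```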
Hope this helps you - it took me too much time to figure it out.