Hi all,
To put everything in context, let me describe what I am doing. I have a multi-camera setup which I calibrate using OpenCV. The images obtained from the cameras contain a number of bright spots, which I use to triangulate their real-world coordinates. This has been working flawlessly so far. To improve the real-world coordinate accuracy, I use the resulting point cloud and the peak positions in the images to refine the calibration of each camera. This also works well. Afterwards, I redo the undistortion of the raw peak positions in the images so that the undistorted peak list reflects the changed calibration parameters.

In randomly occurring cases after recalibrating the cameras, however, I am left with completely unusable undistorted peak positions, with values on the order of 1.0E+28 and above. Normally, the undistorted peak positions should be in the range of roughly +-1. So far, I have only been able to trace this behavior back to the undistortPoints call. My suspicion is that the calibration uses a not-quite-robust-enough optimizer, so that a large number of world-to-image coordinate pairs (approx. 100k) with some degree of uncertainty can cause it to diverge.
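In case it helps to reproduce or discuss: below is a minimal Python sketch of the failing step. The numbers in K, dist, and peaks are made up; in my pipeline they come from the refined calibration and the detected peak list of one camera. The check at the end is how the blow-up shows itself, and the undistortPointsIter call at the bottom (available in recent OpenCV versions) is one thing I have been looking at, since it bounds the inverse-distortion iteration explicitly.

```
import numpy as np
import cv2

# Illustrative values only -- in my pipeline K, dist, and peaks come from
# the refined calibration and the detected peak list of one camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 480.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.3, 0.12, 0.0, 0.0, -0.02])   # k1, k2, p1, p2, k3

# Raw peak positions in pixels, shaped N x 1 x 2 as undistortPoints expects.
peaks = np.array([[[100.0, 200.0]],
                  [[1270.0, 950.0]]], dtype=np.float64)

# Without R/P, undistortPoints returns normalized coordinates, which is
# why I expect values of roughly +-1.
und = cv2.undistortPoints(peaks, K, dist)

# Detect the blow-up: non-finite values or magnitudes far outside the
# normalized image plane mean the internal iteration diverged.
if not np.all(np.isfinite(und)) or np.abs(und).max() > 10.0:
    print("undistortPoints diverged for this calibration")

# Possible workaround: undistortPointsIter lets me set explicit
# termination criteria instead of relying on the default fixed
# iteration count of the inverse-distortion loop.
criteria = (cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 40, 1e-8)
und = cv2.undistortPointsIter(peaks, K, dist, None, None, criteria)
```

This does not fix the underlying calibration divergence, of course; it only makes the undistortion step detectable and a bit more controllable.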
Has anyone else seen this kind of behavior, and can you suggest some countermeasures?
Thanks for your input and all the best, Alex