Triangulation - Combining Data from Multiple Cameras
I have a setup with 8 synchronized cameras and I am trying to perform 3D reconstruction of keypoints on a person's body. I am wondering if there is a way for me to improve my triangulation results from OpenCV by performing some sensor fusion or another technique.
Currently I am just pairing the cameras into stereo pairs and triangulating the 3D points with triangulatePoints (using the intrinsics/extrinsics from chessboard calibration to build each projection matrix). That leaves me with multiple estimates of the same keypoint, one per pair, which I then average out, roughly like the sketch below.
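For context, here is a minimal sketch of that pairwise pipeline (the function and variable names are just illustrative; it assumes each camera's 3x4 projection matrix K @ [R|t] from calibration and matched 2xN keypoint arrays per camera, with every keypoint visible in every view):

```python
import itertools
import numpy as np
import cv2

def triangulate_pairs(proj_mats, points_2d):
    """proj_mats: list of (3, 4) projection matrices, one per camera.
    points_2d: list of (2, N) float arrays of matched keypoints per camera.
    Returns the (N, 3) average of the per-pair triangulations."""
    estimates = []
    for i, j in itertools.combinations(range(len(proj_mats)), 2):
        # Triangulate this stereo pair; result is 4xN homogeneous points.
        pts4d = cv2.triangulatePoints(proj_mats[i], proj_mats[j],
                                      points_2d[i], points_2d[j])
        estimates.append((pts4d[:3] / pts4d[3]).T)  # (N, 3) Euclidean
    # Naive fusion: average the estimates from all camera pairs.
    return np.mean(estimates, axis=0)
```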
If anyone has any ideas or knows of any papers that may help with this, it would be really appreciated!
Isn't this structure from motion?
From what I've seen of other SfM work, people generally use feature detectors/matchers like ORB/SIFT to get the scene points and then run bundle adjustment on those points. Are there any examples of people using chessboard calibration for the intrinsic/extrinsic parameters and then running SfM with the already-obtained camera matrices, distortion coefficients, and other parameters? For instance, something like the sketch below, where only the 3D structure is refined and the calibrated cameras stay fixed.
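This is only a sketch of what I'm picturing (refine_points is a name I made up, and it assumes undistorted image points and no occlusions): a structure-only refinement that minimizes reprojection error over all 8 views while keeping the chessboard-calibrated projection matrices fixed.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_points(points_3d, proj_mats, points_2d):
    """Refine triangulated keypoints by minimizing total reprojection
    error across all views, holding the calibrated cameras fixed.

    points_3d: (N, 3) initial estimates (e.g. the averaged pair results).
    proj_mats: list of (3, 4) projection matrices, one per camera.
    points_2d: list of (N, 2) observed keypoints, one array per camera.
    """
    n = points_3d.shape[0]

    def residuals(x):
        # Current point estimates in homogeneous coordinates, (N, 4).
        pts = np.hstack([x.reshape(n, 3), np.ones((n, 1))])
        errs = []
        for P, obs in zip(proj_mats, points_2d):
            proj = pts @ P.T                   # project into each view, (N, 3)
            proj = proj[:, :2] / proj[:, 2:3]  # perspective divide to pixels
            errs.append((proj - obs).ravel())  # per-view reprojection error
        return np.concatenate(errs)

    sol = least_squares(residuals, points_3d.ravel())
    return sol.x.reshape(n, 3)
```

A full SfM pipeline would also let the extrinsics vary during bundle adjustment, but if the chessboard calibration is trusted, fixing the cameras keeps the problem small.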