Hello there,
I have a project in which I am trying to create a real-time stitched panorama from three webcams mounted as shown here: http://i1276.photobucket.com/albums/y475/sesj13/IMG_2959_zpsb710bae8.jpg
The idea is to stitch together the frames captured by the side cameras with the frame captured by the central camera to create a "seamless" video panorama. I have so far hacked together some code that uses SURF to find the keypoints and corresponding descriptors that lie within the overlapping regions, as shown here: http://i1276.photobucket.com/albums/y475/sesj13/ROI_zpsffce8c7d.png
The keypoints are then matched and filtered. A video demonstration of this can be seen here: http://s1276.beta.photobucket.com/user/sesj13/media/SURF_zpsa47b8d56.mp4.html
The resulting "good" matches are then used to compute the corresponding homography matrices. The two homography matrices are then used to warp the perspective of the frames captured by the side cameras. The problem is that the homography matrices change quite dramatically every loop. The findHomography() function (using RANSAC) also slows the frame rate dramatically, from 30fps to 1-2fps. I have just been using the default settings thus far, so I'm sure it could be sped up.
I am wondering if anyone knows of a different/better approach to this problem. I have read somewhere that if the cameras are in a fixed position relative to each other, then you should only need to calculate the homography matrices once (possibly using stereo calibration). If anyone has any general pointers or input, it would be much appreciated.