Utilise known extrinsic parameters when stitching panoramas
Dear OpenCV Community,
I am currently developing a mobile 360° panorama stitching app using OpenCV.
Since a 360° panorama needs many source images (62 at the moment), registering them, and in particular estimating the extrinsic camera parameters, is quite slow. Luckily, I can use orientation data from the smartphone's sensors to compute the extrinsic parameters for each image directly, so I do not need to detect and match features at all.
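For reference, this is roughly how I build the initial camera parameters from the sensor data (simplified sketch; `camerasFromSensors` is just a helper of mine, and I assume the rotations have already been converted into the camera coordinate frame that the stitching module expects):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/stitching/detail/camera.hpp>
#include <vector>

// Hypothetical helper: build initial camera parameters from sensor data.
// 'deviceRotations' are 3x3 rotation matrices derived from the phone's
// orientation sensor, assumed to already be in the camera coordinate frame.
// 'focalPx' is the focal length in pixels from the intrinsic calibration.
std::vector<cv::detail::CameraParams> camerasFromSensors(
    const std::vector<cv::Mat>& deviceRotations,
    double focalPx, double ppx, double ppy)
{
    std::vector<cv::detail::CameraParams> cameras(deviceRotations.size());
    for (size_t i = 0; i < deviceRotations.size(); ++i)
    {
        cameras[i].focal  = focalPx;
        cameras[i].ppx    = ppx;   // principal point x
        cameras[i].ppy    = ppy;   // principal point y
        cameras[i].aspect = 1.0;
        // The detail module expects R as CV_32F.
        deviceRotations[i].convertTo(cameras[i].R, CV_32F);
    }
    return cameras;
}
```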
However, those sensor readings are subject to slight drift, which means that a few images end up slightly displaced in the stitched result.
Is it possible to refine the parameters I already know, for example by matching image features? To keep things fast, I am thinking of matching only adjacent images, but I have no idea how this fits into the stitching pipeline.
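To make the question more concrete, here is a rough sketch of what I imagine, based on my reading of the `detail` API in OpenCV 4.x. The `adjacency` mask is hypothetical (I would build it from the sensor orientations, enabling only pairs that should overlap), and I am not sure whether handing my own cameras to the bundle adjuster like this is the intended use:

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>
#include <opencv2/stitching/detail/motion_estimators.hpp>
#include <vector>

// Sketch: refine sensor-derived cameras using a restricted match graph.
// 'images' and 'cameras' (built from the sensor data, as above) are given;
// 'adjacency' is an NxN CV_8U mask with nonzero entries for image pairs
// that overlap according to the sensor orientations.
void refineCameras(const std::vector<cv::Mat>& images,
                   const cv::UMat& adjacency,
                   std::vector<cv::detail::CameraParams>& cameras)
{
    // 1. Detect features in all images.
    std::vector<cv::detail::ImageFeatures> features(images.size());
    cv::Ptr<cv::Feature2D> finder = cv::ORB::create();
    cv::detail::computeImageFeatures(finder, images, features);

    // 2. Match only the pairs enabled in the adjacency mask.
    std::vector<cv::detail::MatchesInfo> pairwiseMatches;
    cv::detail::BestOf2NearestMatcher matcher(false /*try_use_gpu*/, 0.3f);
    matcher(features, pairwiseMatches, adjacency);
    matcher.collectGarbage();

    // 3. Bundle-adjust, starting from the sensor-derived cameras instead
    //    of the usual HomographyBasedEstimator result.
    cv::Ptr<cv::detail::BundleAdjusterBase> adjuster =
        cv::makePtr<cv::detail::BundleAdjusterRay>();
    adjuster->setConfThresh(1.0);
    (*adjuster)(features, pairwiseMatches, cameras);
}
```

My hope is that the bundle adjuster would treat the sensor-derived cameras as an initial guess and only need to correct the drift, but maybe there is a better entry point in the pipeline?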
TL;DR: I already know the extrinsic camera parameters, but I would like to refine them based on image features for a better result.