
Real-time video stitching from initial stitching computation

asked 2016-08-13 01:06:03 -0600 by gregoireg
updated 2016-08-15 00:25:57 -0600

I have two fixed, identical, synchronized camera streams oriented 90 degrees apart. I would like to stitch those two streams into a single stream in real time.

After getting the first frame on each side, I perform a full OpenCV stitch and I'm very satisfied with the result.

std::vector<Mat> imgs;
imgs.push_back(imgCameraLeft);
imgs.push_back(imgCameraRight);

Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);  // false = no GPU
Stitcher::Status status = stitcher.stitch(imgs, pano);

I would like to continue stitching the video streams by reapplying the same parameters and avoiding recalculation (especially of the homography and so on).

How can I get the maximum data out of the Stitcher class from the initial computation, such as:

- the homography and the rotation matrix applied to each side
- the zone in each frame that will be blended

I'm OK with keeping the same settings and applying them to the whole stream, as real-time performance is more important than stitching precision. When I say "apply them", I mean I want to apply the same transformation and blending, either in OpenCV or with a direct GPU implementation.

The cameras don't move, they have the same exposure settings, and the frames are synchronized, so keeping the same transformation/blending should give a decent result.

Question: how do I get all the data from the initial stitch for an optimized real-time stitcher?

EDIT 1

I have found the class detail::CameraParams here: http://docs.opencv.org/2.4/modules/st...

I can then get the camera matrices of each image.
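
For reference, a sketch of reading those parameters back (an assumption based on the 2.4 API, where Stitcher::cameras() is populated after stitch() or estimateTransform()):

// Sketch: read back the per-camera parameters estimated by the stitcher.
// Note: they are estimated at the stitcher's work scale (Stitcher::workScale()),
// not at full frame resolution.
std::vector<detail::CameraParams> cams = stitcher.cameras();
for (size_t i = 0; i < cams.size(); ++i)
{
    Mat K = cams[i].K();  // 3x3 intrinsic matrix (CV_64F)
    Mat R = cams[i].R;    // 3x3 rotation into the panorama frame
    std::cout << "Camera " << i << " focal: " << cams[i].focal << std::endl;
}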

Now, how can I get all the information about the blending zone between the two images?
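
One plausible approach, sketched under the assumption that the default spherical warper is used (the Stitcher scales it by the estimated focal length): project each frame through the warper and intersect the resulting ROIs in panorama coordinates.

// Sketch: compute each frame's footprint on the panorama canvas and
// intersect them to get the region that will be blended.
Ptr<detail::RotationWarper> warper =
    SphericalWarper().create((float)cams[0].focal);  // scale ~ estimated focal

Mat K0, K1, R0, R1;
cams[0].K().convertTo(K0, CV_32F);  // the warper expects CV_32F matrices
cams[1].K().convertTo(K1, CV_32F);
cams[0].R.convertTo(R0, CV_32F);
cams[1].R.convertTo(R1, CV_32F);

Rect roi0 = warper->warpRoi(imgs[0].size(), K0, R0);
Rect roi1 = warper->warpRoi(imgs[1].size(), K1, R1);
Rect blendZone = roi0 & roi1;  // intersection = the zone that gets blended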


Comments

@gregoireg could you share your final code/result?

thealse ( 2016-12-22 02:30:07 -0600 )

2 answers

answered 2016-08-13 14:47:28 -0600 by Tetragramm
updated 2016-08-13 14:47:57 -0600

I believe that composePanorama does exactly that. Simply create a new vector with your left and right images, call composePanorama(newVec, newPano), and there you have it.
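
A minimal usage sketch, assuming the 2.4 API: estimateTransform() runs the expensive registration once, and composePanorama() then only warps and blends each new frame pair.

// One-time registration on the first synchronized frame pair
Stitcher stitcher = Stitcher::createDefault(false);
stitcher.estimateTransform(firstFrames);  // std::vector<Mat> with both views

// Per-frame composition, reusing the cached registration
Mat pano;
while (grabFrames(frames))  // grabFrames() is a hypothetical capture helper
    stitcher.composePanorama(frames, pano);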


Comments

Good idea, but it doesn't really help, unless I have done something wrong. The exact following code:

std::vector<Mat> imgX2;
imgX2.push_back(imgA);
imgX2.push_back(imgB);

Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);

// Full stitch: registration (features, matching, bundle adjustment) + composition
double t = (double)cv::getTickCount();
stitcher.stitch(imgX2, pano);
t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
std::cout << "Time passed in seconds: " << t << std::endl;

// Composition only, reusing the registration computed by stitch()
Mat panoFaster;
t = (double)cv::getTickCount();
stitcher.composePanorama(imgX2, panoFaster);
t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
std::cout << "Time Faster passed in seconds: " << t << std::endl;

returns:

Time passed in seconds: 1.79262
Time Faster passed in seconds: 1.67719

Shouldn't I see a major difference?

gregoireg ( 2016-08-14 13:18:49 -0600 )

For two images? Probably not. For many? Oh yes.

Try setting the exposure compensator to NoExposureCompensator, since you said the cameras are already synced in that regard.

Tetragramm ( 2016-08-14 18:05:52 -0600 )
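
A sketch of that suggestion, assuming the 2.4 setter API (seam search and wave correction can be disabled the same way):

// Skip exposure compensation since the cameras share exposure settings
stitcher.setExposureCompensator(
    Ptr<detail::ExposureCompensator>(new detail::NoExposureCompensator()));
// Optionally skip seam estimation and wave correction as well
stitcher.setSeamFinder(Ptr<detail::SeamFinder>(new detail::NoSeamFinder()));
stitcher.setWaveCorrection(false);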

Thanks for your help. Still not what I want. I have edited the question.

gregoireg ( 2016-08-15 00:21:46 -0600 )

You can turn everything off and just use the warper returned by the stitcher. I'm not sure exactly how the dst image is formatted or what else you have to do.

Tetragramm ( 2016-08-15 18:19:03 -0600 )
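
A sketch of that direction, reusing the warper and the converted K/R matrices from the blend-zone sketch above (frameSize is a hypothetical constant; since the cameras never move, the warp maps can be built once and only a remap runs per frame):

// One-time: build the remap tables for each camera from the cached K and R
Mat xmapL, ymapL, xmapR, ymapR;
Rect roiL = warper->buildMaps(frameSize, K0, R0, xmapL, ymapL);
Rect roiR = warper->buildMaps(frameSize, K1, R1, xmapR, ymapR);

// Per frame: only a remap is needed (gpu::remap for a GPU implementation)
Mat warpedL, warpedR;
cv::remap(frameLeft,  warpedL, xmapL, ymapL, INTER_LINEAR);
cv::remap(frameRight, warpedR, xmapR, ymapR, INTER_LINEAR);

// roiL.tl() and roiR.tl() place each warped frame on the panorama canvas;
// a fixed blend (e.g. precomputed feather masks) can then be applied there.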

answered 2017-06-05 06:05:36 -0600 by vin

If you want to build a panorama from videos or a live camera, try this link.

