
Stitching module's strange behavior (Part II)

asked 2012-10-03 04:28:35 -0600 by zenberg

updated 2012-10-04 01:52:16 -0600

This question is directly connected to the original one here.

Internally, the stitching module assumes that an input image is a photo from a camera (this assumption is used for camera parameter estimation). The assumption is false for _pano1.jpg, as it's not a photo but several transformed images stitched together.

My app's algorithm has to work more or less like this: as the user takes photos while rotating the iPhone, the photos should be stitched in the background one after another. This means there will inevitably be a situation where I have to pass an already stitched result back to the Stitching module.

Can you please tell me how this can be implemented if the Stitching module only stitches original photos from the camera?
I simply can't afford to wait until the user has taken all the photos and only then stitch them all at once, because that can take more than 5 minutes on an iPhone 4. No one will be willing to wait that long; they'll just quit the app and delete it.
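
To make that concrete, here is a minimal sketch of what I'm trying to do with the high-level Stitcher API (C++ only, iOS glue and image loading omitted):

    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/stitcher.hpp>

    // Stitch the panorama built so far with the newest photo.
    // On the first call `pano` is simply the first photo taken.
    cv::Mat stitchIncrementally(const cv::Mat& pano, const cv::Mat& newPhoto)
    {
        std::vector<cv::Mat> imgs;
        imgs.push_back(pano);      // already stitched result (e.g. _pano1.jpg)
        imgs.push_back(newPhoto);  // photo just taken (e.g. 4.jpg)

        cv::Mat result;
        cv::Stitcher stitcher = cv::Stitcher::createDefault(/*try_use_gpu=*/false);
        cv::Stitcher::Status status = stitcher.stitch(imgs, result);
        if (status != cv::Stitcher::OK)
            return pano;  // keep the previous panorama if stitching fails
        return result;
    }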

And there's another thing. The _pano2.jpg image is the result of stitching the first two images (so it's not an original photo from the camera). Despite that, I can stitch it with 3.jpg, and the result of this operation can be seen in _pano1.jpg. I guess this proves that the Stitching module is capable of stitching modified photos as well, not only original ones.
Why, in that case, doesn't it want to stitch _pano1.jpg and 4.jpg?


A match confidence that is too low means that all (or almost all) matches will be classified as good ones. But when all matches are good, the method decides that the images are too similar and doesn't stitch them. That's reasonable, for instance, when the camera is fixed and there is a small moving object: in that case almost all matches are good, but it's better not to stitch the two images and take only one as output.

This is one more thing that I simply can't understand.
Why does it tell me that the images are too similar when I try to stitch _pano1.jpg and 4.jpg, even though the camera is rotated about 20 degrees in 4.jpg relative to 3.jpg (which is part of _pano1.jpg), and 4.jpg certainly contains information that 3.jpg lacks (for example, the door and the other part of the wall)?



Thanks again for your time, Alexey. I really appreciate it.


1 answer


answered 2012-10-04 02:41:59 -0600 by Alexey Spizhevoy

Hi,

The stitching module does assume internally that an input image is a photo. This fact is used when camera parameters (such as focal length, aspect ratio, etc.) are estimated. But if you pass a panorama image instead, the camera parameter estimation step won't fail immediately. It can even return something close to the actual parameters, but there's no guarantee of that.
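
If it helps, you can see what the estimation step produces for a particular pair of inputs by running only the transform estimation and printing the estimated focal lengths. This is just a sketch; I'm assuming the Stitcher::estimateTransform and Stitcher::cameras accessors from the 2.4 API are available in your build:

    #include <iostream>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/stitcher.hpp>

    int main()
    {
        std::vector<cv::Mat> imgs;
        imgs.push_back(cv::imread("_pano1.jpg"));
        imgs.push_back(cv::imread("4.jpg"));

        cv::Stitcher stitcher = cv::Stitcher::createDefault(false);
        if (stitcher.estimateTransform(imgs) != cv::Stitcher::OK)
        {
            std::cout << "transform estimation failed" << std::endl;
            return 1;
        }

        // For real photos the estimated focal lengths are usually plausible;
        // for a pre-stitched panorama they may be far off, since the
        // single-camera assumption no longer holds.
        std::vector<cv::detail::CameraParams> cams = stitcher.cameras();
        for (size_t i = 0; i < cams.size(); ++i)
            std::cout << "focal[" << i << "] = " << cams[i].focal << std::endl;
        return 0;
    }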

If you want to stitch images one by one, as you described, then you can build your own image mosaicing pipeline. The Stitcher class is itself a pipeline built on top of many auxiliary classes, all of which are available in the cv::detail namespace (see the sketch below).
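
As a starting point, here is a rough sketch of the first half of such a pipeline, loosely following the stitching_detailed.cpp sample shipped with OpenCV (2.4-style cv::detail API; warping, seam finding and blending are omitted, and class names may differ in other versions):

    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/detail/matchers.hpp>
    #include <opencv2/stitching/detail/motion_estimators.hpp>

    using namespace cv;
    using namespace cv::detail;

    void estimateCameras(const std::vector<Mat>& images, std::vector<CameraParams>& cameras)
    {
        // 1. Find features in every image (SURF needs the nonfree module;
        //    OrbFeaturesFinder is an alternative).
        int num_images = static_cast<int>(images.size());
        std::vector<ImageFeatures> features(num_images);
        SurfFeaturesFinder finder;
        for (int i = 0; i < num_images; ++i)
        {
            finder(images[i], features[i]);
            features[i].img_idx = i;
        }
        finder.collectGarbage();

        // 2. Match features pairwise (0.3f is the default match confidence).
        std::vector<MatchesInfo> pairwise_matches;
        BestOf2NearestMatcher matcher(false, 0.3f);
        matcher(features, pairwise_matches);
        matcher.collectGarbage();

        // 3. Estimate initial camera parameters from pairwise homographies.
        HomographyBasedEstimator estimator;
        estimator(features, pairwise_matches, cameras);

        // 4. Refine the parameters with bundle adjustment.
        for (size_t i = 0; i < cameras.size(); ++i)
        {
            Mat R;
            cameras[i].R.convertTo(R, CV_32F);
            cameras[i].R = R;
        }
        Ptr<BundleAdjusterBase> adjuster = new BundleAdjusterRay();
        adjuster->setConfThresh(1.0);
        (*adjuster)(features, pairwise_matches, cameras);

        // From here on: warp the images (e.g. SphericalWarper), find seams and
        // blend (e.g. MultiBandBlender), just as stitching_detailed.cpp does.
    }

One possible benefit of owning the pipeline is that you can cache the features and camera parameters of the images that are already placed, and only match each newly taken photo against them instead of re-running everything from scratch.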

Regarding the mosaicing confidence threshold: as I said, values that are too low may lead to incorrect behaviour. If changing that parameter doesn't help, you can try decreasing Stitcher::panoConfidenceThresh (see the sketch below).
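
For reference, a minimal sketch of where the two thresholds are set on the high-level Stitcher, using the 2.4 API (the numbers are only examples, not recommendations):

    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/stitcher.hpp>

    int main()
    {
        std::vector<cv::Mat> imgs;
        imgs.push_back(cv::imread("_pano1.jpg"));
        imgs.push_back(cv::imread("4.jpg"));

        cv::Stitcher stitcher = cv::Stitcher::createDefault(false);

        // Per-pair match confidence used by the feature matcher (0.3f is the default).
        stitcher.setFeaturesMatcher(new cv::detail::BestOf2NearestMatcher(false, 0.3f));

        // Confidence threshold for deciding that two images belong to the same
        // panorama (1.0 is the default). Lowering it makes weaker links acceptable.
        stitcher.setPanoConfidenceThresh(0.6);

        cv::Mat pano;
        if (stitcher.stitch(imgs, pano) == cv::Stitcher::OK)
            cv::imwrite("result.jpg", pano);
        return 0;
    }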


Comments

Hello Alexey. Can you please help me with this related question? http://answers.opencv.org/question/3165/decreasing-the-stitching-time/

zenberg ( 2012-10-15 04:23:53 -0600 )
