
The limitation of solvePnP() for pose estimation

asked 2017-03-15 01:53:01 -0600

yorkhuang

Please correct me if I am wrong. From my understanding, and from posts on the Internet, all the successful pose estimation results with solvePnP() set the origin of the world coordinate system on the planar image or marker. Am I right? If the origin of the world coordinate system is NOT on the planar image or marker, will solvePnP() still give a correct pose estimate w.r.t. the image or marker? Can anyone answer me? Thanks in advance.


Comments

The solvePnP function (http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp) can solve arbitrary pose estimation problems; it is not constrained to the planar case. You can see from the documentation that OpenCV uses the P3P or EPnP algorithms, which are general.

swing (2017-03-15 07:30:11 -0600)

Hi Swing, I did read the link and tried solvePnP() over and over again. I always fail to get a stable pose estimate, no matter whether the training data are planar or non-planar. I am not sure where I went wrong, but I am skeptical about the result of solvePnP().

yorkhuang (2017-03-15 20:52:00 -0600)

1 answer


answered 2017-03-15 17:22:29 -0600

Tetragramm

You just set your world points appropriately. Assuming a chessboard target for the moment, instead of having one corner be (0,0,0), you give that corner whatever coordinates it has in the coordinate system you're trying to use.

Then you get the information relative to that coordinate system and you carry on your merry way.
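A minimal sketch of this idea (Python with the cv2 bindings; the thread doesn't specify a language, and the board size, square size, offset, intrinsics, and file name below are all placeholders):

    import numpy as np
    import cv2

    square = 0.025                                 # chessboard square size in metres (placeholder)
    board_origin = np.array([1.0, 0.5, 2.0])       # hypothetical board position in the chosen world frame

    # chessboard corners expressed directly in that world frame instead of starting at (0,0,0)
    obj_pts = np.array([[board_origin[0] + c * square,
                         board_origin[1] + r * square,
                         board_origin[2]]
                        for r in range(6) for c in range(9)], dtype=np.float32)

    img = cv2.imread("view.jpg")                   # placeholder image of the board
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, img_pts = cv2.findChessboardCorners(gray, (9, 6))

    K = np.array([[800.0, 0.0, 320.0],             # placeholder intrinsics from your own calibration
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    if found:
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
        # rvec, tvec now map the chosen world frame to the camera frame

The only thing that changes when you move the world origin is the object-point array; the solvePnP call itself is identical.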


Comments

Hi Tetragramm, thank you for your help. Can you be more specific about "set your world points appropriately"? The reason I ask is that I tried a poster on the wall and noticed that the pose computed by solvePnP() deviates slightly for different Z values. I set the top-left corner of the poster as the origin of the world coordinate system, and I am using OpenCV 2.4.13.2. Did I miss anything? Please advise. Thanks.

yorkhuang (2017-03-15 20:47:03 -0600)

Your Z values differ from what?

If you want something besides the poster to be the origin, say a corner of the room, you just measure where each point is relative to that corner and use those measurements as the world points.

Remember that the results you get from solvePnP are the pose of the object relative to the camera. You have to invert it to get the location of the camera with respect to the world coordinate system.
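A minimal sketch of that inversion (Python with the cv2 bindings; rvec and tvec below are placeholder values standing in for the solvePnP output):

    import numpy as np
    import cv2

    # placeholder pose as returned by solvePnP (world -> camera)
    rvec = np.array([[0.10], [-0.20], [0.05]])
    tvec = np.array([[0.30], [0.10], [2.00]])

    R, _ = cv2.Rodrigues(rvec)              # 3x3 rotation matrix, world -> camera
    cam_pos_world = -R.T.dot(tvec)          # camera centre expressed in the world frame
    cam_rot_world = R.T                     # camera orientation expressed in the world frame
    print(cam_pos_world.ravel())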

Tetragramm (2017-03-15 20:52:54 -0600)

Hi Tetragramm, my experiment increases the Z value step by step and keeps the X, Y values fixed. After inverting the returned pose to get the camera's pose in the world, I get slightly different X, Y values. To ensure the experiment processes are exactly the same, I took a sequence of query images and use the same image sequence for every run. About the measurement of point coordinates w.r.t. the world system, should it be in meters or centimeters? And if such a measurement is impossible and only estimated 3D object coordinates are available, will that affect the solvePnP() result significantly?

yorkhuang (2017-03-15 21:22:09 -0600)

The units can be whatever you prefer; just set the world points in those units and that's what you'll get as output.

If only estimated coordinates are available, your results will only be as good as your estimates. So if you have about 1 cm of uncertainty, you're only a few cm off in the results. If you are just guessing, you're screwed.

Tetragramm (2017-03-15 22:23:51 -0600)

Totally understood. Thank you for your answer. I have raised another question about the input 3D coordinates. Since I am not qualified to add a link, please refer to the question "Will the distribution of 3D coordinates affect the accuracy of solvePnP?" Thanks.

yorkhuang (2017-03-16 08:43:05 -0600)

Yes, but the cases where it does are trivial: all the points lying on a line, multiple points projecting to the same pixel, and other cases like that where the points don't provide true or complete information.
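A minimal sketch of such a degeneracy check (Python/NumPy; the function name and tolerance are just illustrative, not an OpenCV API):

    import numpy as np

    def looks_degenerate(obj_pts, tol=1e-6):
        """Return True if the 3D points are (nearly) collinear or coincident."""
        pts = np.asarray(obj_pts, dtype=np.float64).reshape(-1, 3)
        centred = pts - pts.mean(axis=0)
        s = np.linalg.svd(centred, compute_uv=False)   # singular values of the centred point cloud
        if s[0] == 0:                                  # all points coincide
            return True
        return np.sum(s > tol * s[0]) < 2              # fewer than 2 significant directions -> a line

    print(looks_degenerate([[0, 0, 0], [1, 1, 1], [2, 2, 2]]))                 # True: collinear
    print(looks_degenerate([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))      # False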

Tetragramm (2017-03-16 20:31:33 -0600)

Hi Tetragramm, thank you for your reply. Yes, I do notice the case of multiple points projecting to the same pixel and filter them out before passing the points to solvePnP. Still, I cannot get a stable pose estimate while moving toward the target image. So the revised question is: if the cases you mentioned are eliminated, will the distribution of 3D coordinates affect the accuracy of solvePnP? Thanks.

yorkhuang (2017-03-16 21:01:38 -0600)

Just how unstable is it? How far are you from the surface? I'm going to give you some karma so you can post pictures.

I know I can 3D reconstruct a scene and use points in the scene to locate the camera to within mm of the true location. So solvePnP is good when it's got good data.

Tetragramm (2017-03-16 21:10:16 -0600)

Hi Tetragramm, thank you for your patience. The final goal of my experiment is this: given a known physical location, I will place a poster several meters, say 5 meters, away from that location. So I set that location as the origin of the world coordinate system. Knowing the distance and orientation of the poster w.r.t. that world coordinate system, I compute the feature points on the poster and derive their 3D coordinates w.r.t. that world coordinate system. I then want to compute the pose of the smartphone (relative to that physical location) when it faces the poster. My plan is to compute the user's physical location through this approach. Is it possible to do that?
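A minimal sketch of that setup (Python/NumPy; the poster pose, poster dimensions, and feature coordinates below are hypothetical):

    import numpy as np

    # hypothetical pose of the poster w.r.t. the chosen world origin:
    # 5 m away along +Z, with no rotation for simplicity
    R_poster = np.eye(3)
    t_poster = np.array([0.0, 0.0, 5.0])

    # feature points measured on the poster, in the poster's own plane (metres)
    poster_pts = np.array([[0.00, 0.00, 0.0],
                           [0.60, 0.00, 0.0],
                           [0.60, 0.80, 0.0],
                           [0.00, 0.80, 0.0]])

    # transform them into the world frame before handing them to solvePnP
    world_pts = poster_pts.dot(R_poster.T) + t_poster

Feeding these world-frame points (with their matched image points and the camera intrinsics) into solvePnP returns a pose relative to the chosen origin rather than the poster, which can then be inverted as above to get the phone's position.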

yorkhuang (2017-03-18 23:43:40 -0600)

It is certainly possible, but you need to have the location of the poster relative to the origin very tightly measured. And your estimate is going to vary by at least 10 cm relative to the poster, just from a different combination of pictures or a minimization converging to a slightly different location on the corners.

If your poster-to-origin measurement is off by some amount, add that uncertainty to the 10 cm to get your uncertainty with respect to the origin.

Tetragramm (2017-03-19 08:30:30 -0600)
