Ask Your Question

Perspective transform and lens undistortion in one step?

asked 2014-06-02 06:53:26 -0600

Witek

updated 2014-06-02 07:26:42 -0600

I need to find the perspective transform of the street plane WITHOUT knowing the lens distortion parameters, but the distortion has to be eliminated too, of course. What I have is the image (with strong distortion) and the real (ground-truth) coordinates of some keypoints in it (e.g. around the zebra crossing). I know this is possible, I just do not know how to do it. My aim is to get the street coordinates of objects at street level. Anybody?



1 answer


answered 2014-06-03 07:47:35 -0600

kbarni

updated 2014-06-03 07:50:20 -0600

You should do it in two steps. The perspective transform uses Cartesian (x,y) coordinates, while lens distortion correction uses polar coordinates (rho, phi) relative to the center of the image. Theoretically it might be possible to create one complicated formula for this in a single step (you write all the transformations as matrices [Cartesian->polar->undistort->Cartesian->perspective], then multiply them), but I don't see any reason to do this.
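For illustration, chaining the two steps per point could look like the numpy sketch below. It assumes a one-parameter division model for the radial distortion and a given homography H; the model choice and every value are illustrative, not calibrated:

```python
import numpy as np

def undistort_then_project(pts, center, k, H):
    """Chain the two steps per point: (1) radial undistortion in polar
    coordinates around the image center, (2) perspective (homography) mapping.
    Assumes a one-parameter division model r_u = r_d / (1 + k * r_d**2);
    both the model and H are illustrative stand-ins."""
    pts = np.asarray(pts, dtype=float)
    # Step 1: Cartesian -> polar around the distortion center
    d = pts - center
    r = np.hypot(d[:, 0], d[:, 1])
    phi = np.arctan2(d[:, 1], d[:, 0])
    # Correct the radius; the angle is unchanged by radial distortion
    r_u = r / (1.0 + k * r ** 2)
    # Polar -> Cartesian
    und = center + np.stack([r_u * np.cos(phi), r_u * np.sin(phi)], axis=1)
    # Step 2: homogeneous perspective transform
    ones = np.ones((len(und), 1))
    w = np.hstack([und, ones]) @ H.T
    return w[:, :2] / w[:, 2:3]
```

With k = 0 and H the identity, the function returns its input unchanged, which is a quick sanity check.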

However, if your camera is static, you could precompute a displacement image containing, for each (x,y) pixel, the coordinates of the transformed (x',y') pixel. This would considerably speed up the processing and would directly give you the coordinates of the objects on the street.
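As a sketch of that idea: in OpenCV such precomputed maps would be fed to cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR); the tiny numpy stand-in below uses a nearest-neighbour lookup, and all names are illustrative:

```python
import numpy as np

def build_maps(h, w, transform):
    """Precompute, for every output pixel (x, y), the source pixel
    (x', y') given by `transform`. Done once for a static camera."""
    ys, xs = np.mgrid[0:h, 0:w]
    map_x, map_y = transform(xs.astype(float), ys.astype(float))
    return map_x, map_y

def remap_nearest(img, map_x, map_y):
    """Minimal nearest-neighbour stand-in for cv2.remap: each output
    pixel is read from the precomputed (clipped) source coordinates."""
    h, w = img.shape[:2]
    xi = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    return img[yi, xi]
```

Building the maps once and only doing the lookup per frame is what makes the static-camera case fast.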

If the camera is not static and you need to compute the homography matrix for each image, you have to undistort the image anyway before any further operations.


Comments

I don't think I can do it with a chain of matrices, as my distortions are quite strong and cannot be represented by a simple linear transformation in polar coordinates. But I might be wrong.

The problem is that I do not have any camera parameters. I could theoretically calibrate it using parallel lines and vanishing points (I'm not really sure how to do that), undistort the image and then find the perspective transform...

I was rather looking for some transformation (and interpolation) between two sets of 3D points. One set consists of image points X, Y and Z (Z=0 for street level, the curb will be, say, Z=0.1, etc.) and the other set consists of the real-world X, Y, Z coordinates of these points (Z should be the same). Since I am not able to cover the entire image with dense points...

Witek ( 2014-06-03 09:26:56 -0600 )

Since I am not able to cover the entire image with dense points, I would like to be able to use the screen coords (with extra assumed Z coordinate) to estimate the real-world coordinates and be able to interpolate between the actually measured points. How do I calculate that?

Witek ( 2014-06-03 09:29:13 -0600 )

OK, so I suppose you have a static camera whose parameters you don't know. Then you should build a displacement map. It's basically a discrete function f(p_image) = p_world.

You take as many points from the image as you can and define their real-world positions. Try to stay in 2D (x,y -> x',y'). Then get the whole displacement image by interpolating between the pixels defined in the previous step. As the lines in the image are curved, you should use a spline-based interpolation (like bicubic).
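The interpolation step could be sketched with scipy's griddata, whose 'cubic' method is a spline-based (Clough-Tocher) interpolant; the correspondences below are made-up values for illustration only:

```python
import numpy as np
from scipy.interpolate import griddata

# Sparse correspondences: image pixels -> measured world (x', y').
# These sample values are purely illustrative.
img_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
world_xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)

# Dense grid of all image pixels we want world coordinates for.
ys, xs = np.mgrid[0:101, 0:101]
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Interpolate the x' and y' channels separately ('cubic' ~ spline-based).
wx = griddata(img_pts, world_xy[:, 0], grid, method='cubic').reshape(101, 101)
wy = griddata(img_pts, world_xy[:, 1], grid, method='cubic').reshape(101, 101)
```

After this, wx[y, x] and wy[y, x] hold the world coordinates for image pixel (x, y), i.e. the displacement map described above.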

To get the world Z coordinates, you could simply draw a mask (street=0, curb=1, grass=2, ...)

kbarni ( 2014-06-03 09:53:43 -0600 )

OK then, is there a tool that would do this interpolation and generate a 2D displacement image with, say, 1 pixel density? I'm sure there is. interp3 in Matlab? Or interp2 separately for X and Y?

Witek ( 2014-06-03 10:32:19 -0600 )

In this case you'll have a sparse matrix (<100 elements at random positions for the whole image). To interpolate it, you'll have to use something like scatteredInterpolant or inpaint_nans in Matlab.
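A Python analogue, assuming scipy is acceptable: RBFInterpolator with a thin-plate-spline kernel interpolates scattered samples and, unlike a triangulation-based interpolant, also extrapolates outside their convex hull (roughly the role inpaint_nans plays in Matlab). The sample values here are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# A few scattered image -> world samples (illustrative values only).
pts = np.array([[10, 10], [90, 15], [20, 80], [85, 85]], float)
vals = np.array([[1.0, 1.0], [9.0, 1.5], [2.0, 8.0], [8.5, 8.5]])

# Thin-plate-spline RBF passes exactly through the samples and is
# defined everywhere, including outside their convex hull.
rbf = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
corner = rbf(np.array([[0.0, 0.0]]))  # outside the hull, still defined
```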

kbarni ( 2014-06-04 02:10:06 -0600 )

Thank you all for the suggestions. I will probably give it a try when I have some free time. I did contact some people using this method, and it turned out that they first do the calibration (Tsai model) and then the undistortion together with the perspective transformation. Unfortunately, I am afraid the Tsai model is not good enough with strong lens distortions... I would probably need a more precise method of estimating camera parameters from a single view and matched 2D points. Does anyone know one?

Witek ( 2014-06-09 09:15:22 -0600 )


Stats

Asked: 2014-06-02 06:53:26 -0600

Seen: 2,039 times

Last updated: Jun 03 '14