
Speed up for remap with convertMaps

asked 2016-08-27 14:37:21 -0600

updated 2016-08-27 14:39:23 -0600

Hello!

I'm currently working on distorting an image with remap to simulate a camera. This is how I create the maps:

Mat float_distortion_map_x(height_, width_, CV_32FC1);
Mat float_distortion_map_y(height_, width_, CV_32FC1);

Point2f undistorted;
for (int x = 0; x < width_; ++x)
    for (int y = 0; y < height_; ++y)
    {
        undistort_point(Point2f(x, y), undistorted); // wrapper for cv::undistortPoints
        float_distortion_map_x.at<float>(y, x) = undistorted.x;
        float_distortion_map_y.at<float>(y, x) = undistorted.y;
    }

#if 1
    ROS_INFO("NO Conversion");
    float_distortion_map_x.copyTo(distortion_map_1_);
    float_distortion_map_y.copyTo(distortion_map_2_);
#else
    ROS_INFO("CONVERTING");
    distortion_map_1_ = Mat(height_, width_, CV_16SC2);
    distortion_map_2_ = Mat(height_, width_, CV_16UC1);
    convertMaps(float_distortion_map_x, float_distortion_map_y,  distortion_map_1_, distortion_map_2_, CV_16SC2, false);
#endif

The results look great: if I run a cv::undistort on the result, I get my original image back, so this part should be fine. However, the remap documentation claims a speed-up of around 2x for the converted maps, which I cannot reproduce. I called remap with different interpolation types, and the run times for a VGA image were (mean over 100 runs):

> LANCZOS4: 33ms,  
> Linear: 0.5ms, 
> Cubic: 1.3ms

But the timings were the same for the original maps and the converted ones; I could not see any difference. Has anyone else tested this claim and found something similar?


1 answer


answered 2016-08-28 17:23:51 -0600

Tetragramm

Well, I tested this out, and you are correct: I don't see much difference, 12.7 vs 11.2 seconds. Until, that is, I call setUseOptimized(false) and turn off at least some of the AVX code. Then it becomes 20.2 vs 16.0 seconds. (Summed over 10000 runs.)

I suspect a combination of hand-coded and compiler-optimized AVX code allows the float data to be processed nearly as fast as the 16-bit data. The 16-bit version is still slightly faster, though.

As a side note, you should consider using cv::initUndistortRectifyMap: it does exactly what you are doing here, probably faster, and with tests to make sure it works.


Comments

'cv::initUndistortRectifyMap' should be the inverse function: it creates maps that apply the distortion model to a pixel (the pixel at (x, y) in my dst image is taken from map(x, y) = distort(x, y) in the src image). Thanks for also running an evaluation. Have you also measured (or estimated) the variance across runs? I had some iterations where the remap was around 2 or 3 times slower than in the mean case.

FooBar ( 2016-08-29 03:07:23 -0600 )

Oh, I see. Well, carry on then.

The very first run took about 40% longer, but that was just my computer waking up. Every run after that was right about the same, give or take 0.2 seconds.

Tetragramm ( 2016-08-29 07:45:32 -0600 )
