Hello!
I'm currently working on distorting an image with cv::remap to simulate a camera. This is how I create the maps:
Mat float_distortion_map_x(height_, width_, CV_32FC1);
Mat float_distortion_map_y(height_, width_, CV_32FC1);
Point2f undistorted;
for (int x = 0; x < width_; ++x)
{
    for (int y = 0; y < height_; ++y)
    {
        undistort_point(Point2f(x, y), undistorted); // wrapper for cv::undistortPoints
        float_distortion_map_x.at<float>(y, x) = undistorted.x;
        float_distortion_map_y.at<float>(y, x) = undistorted.y;
    }
}
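For reference, the wrapper is essentially a per-pixel call into cv::undistortPoints; a minimal sketch of it (camera_matrix_ and dist_coeffs_ are my member names, assumed here):

void undistort_point(const Point2f& src, Point2f& dst)
{
    std::vector<Point2f> in(1, src), out(1);
    // Pass the camera matrix as P so the result comes back in pixel
    // coordinates rather than normalized image coordinates.
    undistortPoints(in, out, camera_matrix_, dist_coeffs_, noArray(), camera_matrix_);
    dst = out[0];
}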
#if 1
ROS_INFO("NO Conversion");
float_distortion_map_x.copyTo(distortion_map_1_);
float_distortion_map_y.copyTo(distortion_map_2_);
#else
ROS_INFO("CONVERTING");
distortion_map_1_ = Mat(height_, width_, CV_16SC2);
distortion_map_2_ = Mat(height_, width_, CV_16UC1);
convertMaps(float_distortion_map_x, float_distortion_map_y, distortion_map_1_, distortion_map_2_, CV_16SC2, false);
#endif
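Either way, the maps are then used identically in the actual call, e.g. (sketch; the image names are assumed):

Mat distorted;
remap(input_image_, distorted, distortion_map_1_, distortion_map_2_, INTER_LINEAR, BORDER_CONSTANT);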
The results look great: if I run cv::undistort on the output, I get my original image back, so this part should be OK. However, the remap documentation claims a speedup of around 2x for the converted maps, which I cannot reproduce. I called remap with different interpolation types, and the run times for a VGA image were (mean over 100 runs):
> LANCZOS4: 33 ms
> Linear: 0.5 ms
> Cubic: 1.3 ms
But the numbers were the same for the original float maps and the converted ones; I could not see any difference. Has anyone else tested this claim and found something similar?
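For reference, this is roughly how I measured (sketch; img is an assumed VGA test frame):

const int runs = 100;
Mat out;
int64 t0 = getTickCount();
for (int i = 0; i < runs; ++i)
    remap(img, out, distortion_map_1_, distortion_map_2_, INTER_LINEAR);
double ms = 1000.0 * (getTickCount() - t0) / getTickFrequency() / runs;
ROS_INFO("mean remap time: %.2f ms", ms);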