
calibrateHandEye precision

asked 2019-08-06 16:07:31 -0600

dahonora

updated 2019-08-07 11:22:37 -0600

Hi there!

I am using the calibrateHandEye function from the calib3d module to get the hand-eye calibration of a UR10 robot with a RealSense camera fixed to the end effector. After some initial problems, I think I got the function to work properly. But now I am seeing two things that do not quite fit, and I was hoping for a third opinion.

As you can see from the image below, I made four measurements for each of the five methods that the function offers. For each method I used 10, 15, 18 and 20 robot poses with their respective pictures. From here I got a little confused:

  1. The first method (CALIB_HAND_EYE_TSAI) gives completely different values depending on the number of robot poses used. At the same time, with the same data, the other four methods seem to converge to values close enough to each other. Why is that?
  2. Although the values of the last four methods seem to converge, there are still important differences between them. Why is that?

Thank you for the help!

The camera intrinsic parameters are:

#include <opencv2/core.hpp>
#include <vector>

// Camera distortion coefficients, OpenCV order: k1, k2, p1, p2, k3
std::vector<double> distortionCoefficients(5);
distortionCoefficients[0] =  9.6349551984637724e-02; // k1
distortionCoefficients[1] = -3.3260675111130217e-01; // k2
distortionCoefficients[2] =  0;                      // p1
distortionCoefficients[3] =  0;                      // p2
distortionCoefficients[4] =  2.5833277679122602e-01; // k3

double f_x = 1.2993539019658076e+03; // Focal length in x axis
double f_y = 1.2993539019658076e+03; // Focal length in y axis (usually the same?)
double c_x = 960; // Camera principal point x (assumed image center)
double c_y = 540; // Camera principal point y (assumed image center)

cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    f_x, 0.0, c_x,
    0.0, f_y, c_y,
    0.0, 0.0, 1.0);

Link to get the images and poses.
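The call itself looks roughly like this (a minimal sketch, the helper name is arbitrary; the four pose vectors are filled from the robot controller and from cv::solvePnP on the chessboard images):

```
#include <opencv2/calib3d.hpp>
#include <vector>

// R_gripper2base/t_gripper2base: one entry per robot pose (from the UR10 controller)
// R_target2cam/t_target2cam: one entry per image (from cv::solvePnP on the chessboard)
void handEye(const std::vector<cv::Mat>& R_gripper2base,
             const std::vector<cv::Mat>& t_gripper2base,
             const std::vector<cv::Mat>& R_target2cam,
             const std::vector<cv::Mat>& t_target2cam)
{
    cv::Mat R_cam2gripper, t_cam2gripper;
    cv::calibrateHandEye(R_gripper2base, t_gripper2base,
                         R_target2cam, t_target2cam,
                         R_cam2gripper, t_cam2gripper,
                         cv::CALIB_HAND_EYE_TSAI); // or _PARK, _HORAUD, _ANDREFF, _DANIILIDIS
}
```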

[Image: table of hand-eye calibration results for the five methods with 10, 15, 18 and 20 robot poses; values in mm and degrees]


Comments

In the picture, assume the ',' are '.'; that's the Spanish convention, sorry for that. Values are in mm and degrees.

dahonora ( 2019-08-06 16:57:16 -0600 )

You may want to dump the values of R_gripper2base, t_gripper2base, R_target2cam, t_target2cam into text files for each of the experiments, possibly along with the captured images of the chessboard. Roughly, you should be able to measure the translation between the camera and the gripper and say approximately which result is closest to reality. Also, what is the output of the MVTec?
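For example with cv::FileStorage, something along these lines (a sketch; file and node names are arbitrary):

```
#include <opencv2/core.hpp>
#include <string>
#include <vector>

// Dump one experiment's gripper poses to a YAML file for inspection.
void dumpPoses(const std::string& filename,
               const std::vector<cv::Mat>& R_gripper2base,
               const std::vector<cv::Mat>& t_gripper2base)
{
    cv::FileStorage fs(filename, cv::FileStorage::WRITE);
    for (size_t i = 0; i < R_gripper2base.size(); i++)
    {
        fs << "R_gripper2base_" + std::to_string(i) << R_gripper2base[i];
        fs << "t_gripper2base_" + std::to_string(i) << t_gripper2base[i];
    }
}
```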

Eduardo ( 2019-08-07 04:09:02 -0600 )

Hi Eduardo! Good to see you helping with this again. I added to the original question a link to download the images and the matrices with the robot poses. I couldn't upload the file directly here, I don't know why. The approximate translation between the camera and the tool was measured as: [100; 80; -130; 0; 0; 180]. The results MVTec gave: [100.147; 75.1024; -117.844; 0.89715; 359.675; 179.285]

dahonora ( 2019-08-07 08:15:01 -0600 )

Could you edit your question to add the camera intrinsic parameters?

Eduardo ( 2019-08-07 11:19:44 -0600 )

Sure, it's done! I got the calibration with the same images from the link above.

dahonora ( 2019-08-07 11:22:40 -0600 )

The target2cam transformations look OK; here are the images with the chessboard frame displayed.

I have compared the results with another library, which gives the following transformation:

```
-0.9999710382    0.001876093187  -0.00737583906   87.59821339
-0.002064735358 -0.9996688056     0.02565182378   85.27051241
-0.007325271012  0.02566631001    0.999643727    -88.15990751
 0               0                0                1

rx: -1.469942466 ; ry: -0.4226082804 ; rz: -179.8925048
```

Hand-eye calibration is a non-linear problem, so I am not surprised to see variable results (around a centimeter) between 10 and 20 poses and between the last 4 methods. I am surprised by the inconsistency of the Tsai method, though.
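For reference, the rx/ry/rz values above are an Euler-angle decomposition of the rotation block. Conventions differ between libraries, so the ZYX sketch below is only one possibility and may not reproduce these exact signs:

```
#include <opencv2/core.hpp>
#include <cmath>

// ZYX decomposition R = Rz(rz) * Ry(ry) * Rx(rx), angles returned in degrees.
// Other libraries may use a different axis order or sign convention.
cv::Vec3d eulerZYX(const cv::Matx33d& R)
{
    const double rad2deg = 180.0 / CV_PI;
    double ry = std::atan2(-R(2, 0),
                           std::sqrt(R(0, 0) * R(0, 0) + R(1, 0) * R(1, 0)));
    double rx = std::atan2(R(2, 1), R(2, 2));
    double rz = std::atan2(R(1, 0), R(0, 0));
    return cv::Vec3d(rx * rad2deg, ry * rad2deg, rz * rad2deg);
}
```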

Eduardo ( 2019-08-07 12:13:53 -0600 )

Thank you so much for the feedback, Eduardo. The results of the Tsai method seemed weird to me as well; it looks like a bug. Great to have the results of the Visual Servoing Platform; I had planned to give it a try too, but now it is not necessary. I need a precision of ~1 mm in the calibration. Is there any way to increase the precision?

dahonora ( 2019-08-07 12:25:33 -0600 )

For hand-eye calibration, I would use multiple images (20 is good in my opinion) acquired over a half-sphere in order to correctly cover the translation and rotation space.

You may want to calibrate the camera with a non-symmetric chessboard pattern like this one if you use OpenCV for camera calibration. The chessboard pattern must be glued to a surface that is as flat as possible for accurate results. Also, check the reprojection error to get an idea of the quality of the camera calibration (see the sketch below).

The offset in Z could be explained by the position of the camera's optical center, which can be hard to guess, I think.
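Regarding the reprojection error: cv::calibrateCamera already returns it. A minimal sketch, assuming the point lists were filled from the chessboard detections (the helper name is arbitrary):

```
#include <opencv2/calib3d.hpp>
#include <vector>

// Returns the overall RMS reprojection error in pixels; values well below
// 1 px usually indicate a good calibration. objectPoints/imagePoints are
// assumed to come from cv::findChessboardCorners + the known board geometry.
double calibrationRms(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                      const std::vector<std::vector<cv::Point2f>>& imagePoints,
                      cv::Size imageSize)
{
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}
```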

Eduardo ( 2019-08-07 12:27:03 -0600 )

What's different about the chessboard you are suggesting? I don't see any difference besides the size and number of the squares. Could you please explain how to check the reprojection error of the camera calibration? I will try the tips you suggest. The offset in Z is strange anyway: the distance between the tool and the closest side of the camera case is 130 mm, so the calibration value should necessarily be more than that. Since the camera itself is about 20 mm deep, the value of Z should be around 130-150 mm, shouldn't it?

dahonora ( 2019-08-07 12:36:06 -0600 )

The Tsai method can be easily checked using some other implementation. You can also try another toolbox (see the answer).

If you are doing image-based visual servoing, this control scheme is intrinsically robust to calibration errors. If you are doing pose-based visual servoing and want 1 mm precision, you will have to carefully calibrate the camera first. This will be a long trial-and-error process.

Eduardo ( 2019-08-07 12:39:46 -0600 )

2 answers


answered 2019-08-09 12:41:04 -0600

dahonora

updated 2019-08-09 12:42:11 -0600

The problem was with the calibration of the camera itself. In the OpenCV calibration I was assuming that the focal length was the same for x and y, and that the camera's principal point was at the image center. Checking the documentation of the RealSense, I realized that neither of those assumptions was correct. So I dropped them and extracted the intrinsic calibration parameters directly from the camera API. With that fixed, and the chessboard improved, I got the results in the picture below, all of them with 20 poses.
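For reference, a minimal sketch of how the factory intrinsics can be read with librealsense2 (assuming the color stream is the one being calibrated):

```
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Intrinsics of the color stream: focal lengths, principal point and
    // distortion coefficients as calibrated at the factory.
    auto stream = profile.get_stream(RS2_STREAM_COLOR)
                         .as<rs2::video_stream_profile>();
    rs2_intrinsics intr = stream.get_intrinsics();
    // intr.fx, intr.fy, intr.ppx, intr.ppy, intr.coeffs[5]
    return 0;
}
```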

[Image: table of the new hand-eye calibration results for the five methods, all with 20 poses]

As you can see, the results are now much closer. Thank you so much for the help!


Comments

Thanks for the follow-up.

Eduardo ( 2019-08-09 17:57:08 -0600 )

Excuse me, could you please tell me how to get the values of rx, ry and rz?

Min-Rou ( 2020-04-25 00:39:11 -0600 )

answered 2020-01-02 09:05:12 -0600

Hello, I want to compute the hand-eye calibration with this function. I only have 6 poses, but the results were different when I used 6, 5 and 4 poses for the computation. Could you tell me whether, after you corrected your assumptions, you still got different results with 10, 15 and 20 poses? I am new to this field and this is very important to me; I look forward to your reply.

