
projectPoints functionality question

asked 2016-06-14 21:19:07 -0600

bfc_opencv

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.

In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I create the axis from world-coordinate points (which assumes the object hasn't been rotated or translated at all), and then use projectPoints() to draw the axes on the object in the image plane.

I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I applied the rotation and translation to the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero vector for the rotation and translation, respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!


2 answers


answered 2016-06-15 08:11:22 -0600

Eduardo

I am not sure I understand you correctly when you say:

(which assumes the object wasn't rotated or translated at all)

to eliminate the projection, and get the world coordinates of those axes once they've been rotated and translated

Anyway, even if it does not answer your question, I will try to add some useful information.


What you have is the camera pose, i.e. the transformation matrix that maps a coordinate in the world frame to the corresponding coordinate in the camera frame:

$$\mathbf{X}_c = {}^{c}\mathbf{T}_w \, \mathbf{X}_w$$

$$
{}^{c}\mathbf{T}_w =
\begin{bmatrix}
{}^{c}\mathbf{R}_w & {}^{c}\mathbf{t}_w \\
\mathbf{0}_{1 \times 3} & 1
\end{bmatrix}
$$

The perspective projection projects the 3D coordinates in the camera frame to the image plane according to the pinhole camera model:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
$$

The full operation when you want to draw, for example, the world frame origin in the image plane is:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\mathbf{K} \,
\begin{bmatrix} \mathbf{I}_3 & \mathbf{0} \end{bmatrix}
{}^{c}\mathbf{T}_w
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
$$

Now if you know the geometric transformation between two frames w1 and w2:

$$
{}^{w_1}\mathbf{T}_{w_2} =
\begin{bmatrix}
{}^{w_1}\mathbf{R}_{w_2} & {}^{w_1}\mathbf{t}_{w_2} \\
\mathbf{0}_{1 \times 3} & 1
\end{bmatrix}
$$

To draw the coordinate of a point in frame w2 onto the image plane, you have to first compute its coordinate in the camera frame:

$$
\mathbf{X}_c = {}^{c}\mathbf{T}_{w_1} \, {}^{w_1}\mathbf{T}_{w_2} \, \mathbf{X}_{w_2}
$$

This figure should illustrate the situation:

[Figure: illustration of the camera frame c, the world frames w1 and w2, and the transformations between them]

I hope that what I have written is mostly correct.

PS: you can read this answer for other details about homogeneous transformation.


Comments

So if I understand you correctly, I just need to rotate and translate the axis using rvecs and tvecs, and that should be what I need.

The problem is, I use solvePnP to get the rotation and translation for a face, and I draw an axis on the face and use projectPoints to draw it on the image plane. I want to find the world coordinates of the axis after it's rotated. What I did was compute the rotation matrix with Rodrigues and evaluate rotation_matrix*axis_point + tvec. Then I used projectPoints to project the result onto the image plane (passing an identity matrix and a zero vector for the rotation and translation, respectively). But the result was different from what I got by just using projectPoints normally. Why is that?

bfc_opencv ( 2016-06-15 17:31:01 -0600 )

It should be the same result:

  cv::Mat K = (cv::Mat_<double>(3,3) <<
      700, 0, 320,
      0, 700, 240,
      0, 0, 1);

  double theta = 24.0 * M_PI / 180.0;
  cv::Mat rvec = (cv::Mat_<double>(3,1) <<
      0.4, 0.2, 0.8944) * theta;
  cv::Mat R;
  cv::Rodrigues(rvec, R);    

  cv::Mat tvec = (cv::Mat_<double>(3,1) <<
        0.5, 0.38, 1.4);    

  cv::Mat mat_point_x = (cv::Mat_<double>(3,1) <<
      1, 0, 0);

  std::vector<cv::Point3f> object_points;
  cv::Point3f object_point_x(1, 0, 0);
  object_points.push_back(object_point_x);
  std::vector<cv::Point2f> image_points;
Eduardo ( 2016-06-16 04:32:26 -0600 )

Code:

  cv::projectPoints(object_points, rvec, tvec, K, cv::noArray(), image_points);

  std::cout << "image_point_x=" << image_points.front() << std::endl;

  cv::Mat cam_image_point_x = R * mat_point_x + tvec;
  cv::Point3f cam_image_point_x2(
      cam_image_point_x.at<double>(0), cam_image_point_x.at<double>(1), cam_image_point_x.at<double>(2));
  object_points.clear();
  object_points.push_back(cam_image_point_x2);

  image_points.clear();
  cv::projectPoints(object_points, cv::Mat::eye(3, 3, CV_64F), cv::Mat::zeros(3,1,CV_64F), K, cv::noArray(), image_points);
  std::cout << "image_point_x_2=" << image_points.front() << std::endl;
Eduardo ( 2016-06-16 04:34:45 -0600 )

Got it, thanks!

bfc_opencv ( 2016-06-16 21:47:40 -0600 )

answered 2019-02-10 20:25:23 -0600

sverma

You can use cv::aruco::drawAxis which displays your axis with the given rvec and tvec. Remember to add #include "opencv2/aruco.hpp"


Comments

There is now drawFrameAxes() in the calib3d module (OpenCV >= 4.0.1 or OpenCV >= 3.4.5). Building the contrib modules is no longer required if you don't otherwise need them.

Eduardo ( 2019-02-13 03:27:38 -0600 )

Stats

Seen: 13,099 times

Last updated: Feb 10 '19