Hello,
I use the cv::findEssentialMat function on a set of feature points (~1200 points) and then cv::triangulatePoints to recover the 3D positions of those points. My problem is that the result of cv::findEssentialMat seems to change depending on how many points I pass in.
For example, with 1241 points for one frame the result is quite good (R = 0.5, 0.5, 0.5 and t = 1, 0, 0), but if I remove a single point the result is totally different (R = 3.0, 2.0, 2.0 and t = 0, 0, 1). I tried removing other feature points, and sometimes it works and sometimes it doesn't. I don't understand why. Is there a reason for that?
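One thing I think I could do to debug this (just a sketch, not something that is in my code yet; the helper name is made up) is to pass an output mask to cv::findEssentialMat and count how many correspondences LMedS actually keeps as inliers, to see whether that count also jumps when I remove a point:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

// Sketch: estimate the essential matrix with an output mask and report how
// many correspondences LMedS keeps as inliers for the estimated model.
static int countEssentialInliers(const std::vector<cv::Point2d>& pts_t,
                                 const std::vector<cv::Point2d>& pts_tmdelta,
                                 double focal, cv::Point2d pp)
{
    cv::Mat inlier_mask; // one entry per correspondence, non-zero = inlier
    cv::findEssentialMat(pts_t, pts_tmdelta, focal, pp,
                         cv::LMEDS, 0.999, 1.0, // prob, threshold (threshold is unused by LMedS)
                         inlier_mask);          // only the mask is used in this sketch
    int inliers = cv::countNonZero(inlier_mask);
    std::cout << "inliers: " << inliers << " / " << pts_t.size() << std::endl;
    return inliers;
}

Anyway, here is the relevant part of my current code. First the conversion of the feature points: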
std::vector<cv::Point2d> static_feature_point_t;
std::vector<cv::Point2d> static_feature_point_tmdelta;
for(int i=0; i<NB_POINTS; i++)
{
    cv::Point2f point_t(data->getFeaturePointPosition2DAtT(i)); // cv::Mat to cv::Point2f
    cv::Point2f point_tmdelta(data->getFeaturePointPosition2DAtTMDelta(i)); // cv::Mat to cv::Point2f
    static_feature_point_t.push_back(point_t);
    static_feature_point_tmdelta.push_back(point_tmdelta);
}
Read from file:
cv::FileStorage fs_t("static_feature_point_t.yml", cv::FileStorage::READ);
cv::FileStorage fs_tmdelta("static_feature_point_tmdelta.yml", cv::FileStorage::READ);
cv::FileNode feature_point_t = fs_t["feature_point"];
cv::FileNode feature_point_tmdelta = fs_tmdelta["feature_point"];
read(feature_point_t, static_feature_point_t);
read(feature_point_tmdelta, static_feature_point_tmdelta);
fs_t.release();
fs_tmdelta.release();
double focal = getFocal(); // = 300.
cv::Point2d camera_principal_point(data->getImageCol()/2, data->getImageRow()/2); // = (320, 240)
cv::Mat essential_matrix = cv::findEssentialMat(static_feature_point_t, static_feature_point_tmdelta, focal, camera_principal_point, cv::LMEDS);
cv::Mat rotation, translation;
cv::recoverPose(essential_matrix, static_feature_point_t, static_feature_point_tmdelta, rotation, translation, focal, camera_principal_point);
cv::Mat rot(3,1,CV_64F);
cv::Rodrigues(rotation, rot);
std::cout << "rotation " << rot*180./M_PI << std::endl;
std::cout << "translation " << translation << std::endl;
The two lists of feature points are here. (I didn't find how to upload files on the forum, or whether it is possible at all.)
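Since I can't attach them, here is roughly how the two YAML files were written (reproduced from memory, so the exact code may differ slightly; the helper name is just for this sketch):

#include <opencv2/core.hpp>
#include <vector>

// Sketch of how the two point lists were saved to YAML.
void saveFeaturePoints(const std::vector<cv::Point2d>& static_feature_point_t,
                       const std::vector<cv::Point2d>& static_feature_point_tmdelta)
{
    cv::FileStorage fs_t("static_feature_point_t.yml", cv::FileStorage::WRITE);
    fs_t << "feature_point" << static_feature_point_t;
    fs_t.release();

    cv::FileStorage fs_tmdelta("static_feature_point_tmdelta.yml", cv::FileStorage::WRITE);
    fs_tmdelta << "feature_point" << static_feature_point_tmdelta;
    fs_tmdelta.release();
}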
Thanks,