
Match 2D keypoints to 3D triangulated points using three sets of descriptors?

asked 2017-07-05 04:32:26 -0600

antithing

updated 2017-07-11 21:31:34 -0600

I am having trouble with a feature matching workflow, and am looking for some help.

I have a stereo camera and am triangulating points from it using feature matching. In frame 1, I match points between the left and right images and triangulate them. In frame 2, I match points between frame 1 and frame 2, using the left images only.

Now I need to find correspondences between the matched frame 2 keypoints and the triangulated 3D points, so that I can pass them to solvePnP.
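For reference, the frame 1 triangulation follows the usual cv::triangulatePoints pattern (a minimal sketch; P1 and P2 stand for the rectified projection matrices, and the point vectors are illustrative names, not the exact variables from my code below):

    std::vector<cv::Point2f> ptsL, ptsR; // matched 2D points, left and right
    cv::Mat points4D;                    // 4xN homogeneous output
    cv::triangulatePoints(P1, P2, ptsL, ptsR, points4D);

    std::vector<cv::Point3f> points3D;
    for (int i = 0; i < points4D.cols; i++)
    {
        cv::Mat col;
        points4D.col(i).convertTo(col, CV_64F); // normalize depth before dividing
        col /= col.at<double>(3);               // de-homogenize
        points3D.push_back(cv::Point3f((float)col.at<double>(0),
                                       (float)col.at<double>(1),
                                       (float)col.at<double>(2)));
    }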

My function is:

void Tracking::PnpTests()
{
    cv::Mat rvec, tvec, rvec2, tvec2;
    std::string frames = "00";

    //sequential
    boost::circular_buffer<cv::Mat> frameArray((2));
    //storage for keypoints/descriptors
    cv::Mat descCurrent;
    cv::Mat descCurrentR;
    cv::Mat descPrevious;
    std::vector<cv::KeyPoint> keyPntsCurrent;
    std::vector<cv::KeyPoint> keyPntsGoodL;
    std::vector<cv::KeyPoint> keyPntsGoodR;
    std::vector<cv::KeyPoint> keyPntsCurrentMatch;
    std::vector<cv::KeyPoint> keyPntsCurrentR;
    std::vector<cv::KeyPoint> keyPntsPrevious;
    std::vector<cv::Point3f> Points3d;
    cv::Mat descCurrentMatched;

    // Retrieve paths to images
    vector<string> vstrImageLeft;
    vector<string> vstrImageRight;
    vector<double> vTimestamps;
    LoadImages2(vstrImageLeft, vstrImageRight, vTimestamps);

    const int nImages = vstrImageLeft.size();
    cv::Size boardSize(8, 6);



    // triangulation storage
    std::vector<cv::Point3f> objectPointsTri;
    std::vector<cv::Point3f> objectPointsGood;
    std::vector<cv::KeyPoint> keyPntsTriReturn;
    std::vector<cv::KeyPoint> keyPntsGood;
    std::vector<cv::Point2f> projectedPoints;
    std::vector<cv::DMatch> matchR;
    std::vector<cv::DMatch> match;


    // Main loop

    int frameNumber = 0;
    cv::Mat imLeft, imRight, imStored;
    for (int ni = 0; ni < nImages; ni++)
    {
        imLeft = cv::imread("frames/left/" + vstrImageLeft[ni], CV_LOAD_IMAGE_UNCHANGED);
        imRight = cv::imread("frames/right/" + vstrImageRight[ni], CV_LOAD_IMAGE_UNCHANGED);
        if (imLeft.empty())
        {
            cerr << endl << "Failed to load image at: "
                << string(vstrImageLeft[ni]) << endl;
            continue; // skip this frame rather than running on an empty image
        }

        if (bFirstRun == false) // every run. 
        {
            int64 t01 = cv::getTickCount();

            //use features.
            tFeatures->DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent);

            //knn brute force match to previous frame 
            match = tPointMatching->matchPointsOG2(descPrevious, descCurrent);


            Mat img_matches2;
            cv::drawMatches(Mat(imStored), keyPntsPrevious, Mat(imLeft), keyPntsCurrent, match, img_matches2);
            cv::namedWindow("matches2", 0);
            cv::imshow("matches2", img_matches2);
            cv::waitKey(1);


            //start tracker loop
            if (match.size() >= 5)
            {
                objectPointsGood.clear();
                keyPntsGood.clear();

                for (cv::DMatch& m : match)
                {
                    //use matched keys
                    cv::Point3f pos = objectPointsTri[m.trainIdx];
                    cv::KeyPoint img = keyPntsCurrent[m.queryIdx];

                    objectPointsGood.push_back(pos);
                    keyPntsGood.push_back(img);


                }


                //solve
                if (objectPointsGood.size() != 0)
                {
                    projectedPoints = tPnPSolvers->CvPnp(keyPntsGood,objectPointsGood, cameraMatrix, distCoeffs, rvec, tvec);
                }

                //invert the pose: solvePnP gives world->camera, we want the camera in world coordinates
                cv::Mat RotMat;
                cv::Rodrigues(rvec, RotMat);
                RotMat = RotMat.t();
                tvec = -RotMat * tvec;


                //project
                for (size_t i = 0; i < projectedPoints.size(); i++)
                {
                    cv::drawMarker(imLeft, cv::Point(projectedPoints[i].x, projectedPoints[i].y), cv::Scalar(0, 0, 255), cv::MARKER_CROSS, 50, 10);

                }

            }


        }
        if (bFirstRun == true) //first time, store previous frame and get keys
        {

            // Mat::zeros is static; its result must be assigned (calling it on an instance is a no-op)
            cameraMatrix = cv::Mat::zeros(3, 3, cv::DataType<float>::type);
            R = cv::Mat::zeros(3, 3, cv::DataType<float>::type);
            t = cv::Mat::zeros(3, 1, cv::DataType<float>::type);

            cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
            fs["cameraMatrix"] >> cameraMatrix;
            fs["dist_coeffs"] >> distCoeffs;

            tFeatures->DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent); //Left image, left descriptors, left keypoints
            tFeatures->DetectKeypointsR(imRight, descCurrentR, keyPntsCurrentR); //Right image, Right descriptors, Right keypoints


            //KNNMATCH MATCHING / FILTER RESULTS.
            std::vector<cv::DMatch> matchR ...

1 answer


answered 2017-07-06 17:56:00 -0600

Tetragramm

updated 2017-07-14 19:08:59 -0600

Well, objectPointsTri has a 1-to-1 mapping with descCurrent, and matchPoints is (query, train) if you didn't reverse the parameters. So the trainIdx of each match is a valid index into both descCurrent and objectPointsTri.

for(cv::DMatch& m : match)
{
    cv::Point3f pos = objectPointsTri[m.trainIdx];
    //Do stuff with it
}

EDIT:

Ok, to understand what you're doing, you need to break this apart into functions. You have one long block and that's probably part of why you're having trouble.

The functions you need that I'm not going to write for you are (and you already have some of these)

  1. DetectKeypoints (Takes image, returns keypoints and descriptors)
  2. MatchKeypoints (Takes descriptors, returns matches) Below I've used 1st arg as query, 2nd as train.
  3. FilterMatches (Takes matches, takes either keypoints or descriptors, takes indication if query or train, returns filtered list)
  4. Adjust3dPoints (Takes 3d points, rvec, tvec, returns the points in the absolute coordinate frame)

    if(firstFrame)
    {
        kpsL, descL = DetectKeypoints(left)
        kpsR, descR = DetectKeypoints(right)
        matches = MatchKeypoints(descL, descR)
        goodL = FilterMatches(kpsL, matches, query)
        goodD = FilterMatches(descL, matches, query)
        goodR = FilterMatches(kpsR, matches, train)
        3dPts = Triangulate(goodL, goodR)
    }
    else
    {
        kpsL, descL = DetectKeypoints(left)
        kpsR, descR = DetectKeypoints(right)
        matches = MatchKeypoints(goodD, descL)
        3dPts = FilterMatches(3dPts, matches, query)
        goodNew = FilterMatches(kpsL, matches, train)
        rvec, tvec = solvePnP(3dPts, goodNew)
    
        matches = MatchKeypoints(descL, descR)
        goodL = FilterMatches(kpsL, matches, query)
        goodD = FilterMatches(descL, matches, query)
        goodR = FilterMatches(kpsR, matches, train)
        3dPts = Triangulate(goodL, goodR)
        3dPts = Adjust3dPoints(3dPts, rvec, tvec)
    }
    

So each set of frames produces a list of keypoints for L and R (one for each match, in order), a matching set of descriptors for L, and 3dPts, one for each match, in the same order.

Everything after the first frame matches the new L to the old L, and filters by those matches. Then those get used for solvePnP.
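In OpenCV terms the solve step is roughly this (a sketch; pts3d stands in for the 3dPts of the pseudocode above, since that isn't a valid C++ identifier, and the RANSAC variant is just my suggestion):

    // Convert the filtered keypoints to Point2f for solvePnP.
    std::vector<cv::Point2f> imagePts;
    cv::KeyPoint::convert(goodNew, imagePts);

    // Robustly estimate the camera pose from the 3D-2D correspondences.
    cv::Mat rvec, tvec;
    cv::solvePnPRansac(pts3d, imagePts, cameraMatrix, distCoeffs, rvec, tvec);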

After the first frame you have to adjust the 3d points, because triangulation assumes the camera is the coordinate reference, but once the camera has moved it isn't, so you have to transform them.
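A minimal sketch of what Adjust3dPoints could look like, assuming rvec and tvec are the world-to-camera pose returned by solvePnP (the inversion convention is my assumption):

    // Transform camera-frame triangulated points into the world frame,
    // given the world->camera pose (rvec, tvec) from solvePnP.
    std::vector<cv::Point3f> Adjust3dPoints(const std::vector<cv::Point3f>& camPts,
                                            const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);       // 3x3 rotation, world -> camera
        cv::Mat Rinv = R.t();         // camera -> world
        cv::Mat C = -Rinv * tvec;     // camera centre in world coordinates

        std::vector<cv::Point3f> worldPts;
        worldPts.reserve(camPts.size());
        for (const cv::Point3f& p : camPts)
        {
            cv::Mat pc = (cv::Mat_<double>(3, 1) << p.x, p.y, p.z);
            cv::Mat pw = Rinv * pc + C;   // X_world = R^T * (X_cam - t)
            worldPts.push_back(cv::Point3f((float)pw.at<double>(0),
                                           (float)pw.at<double>(1),
                                           (float)pw.at<double>(2)));
        }
        return worldPts;
    }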


Comments

Hi again, and thank you! So this should work?

for (cv::DMatch& m : match)
{
    cv::Point3f pos = objectPointsTri[m.trainIdx];
    cv::KeyPoint img = keyPntsCurrent[m.queryIdx];
}
antithing ( 2017-07-07 05:22:32 -0600 )

..I have edited the question to add the new code... If I run once and use the results in solvePnP, it looks great. It is the following frames, where I match the previous frame to the current one, that are causing me problems.

antithing ( 2017-07-07 05:46:16 -0600 )

Hi, more code added, for (hopefully) clarity. If you have a minute to look at it, it would be very much appreciated. Thank you again.

antithing ( 2017-07-08 11:36:28 -0600 )

You also need to filter the descriptors the same way you do the key points. So you're removing key points, but not keeping track of which keypoint matches which descriptor. If you also remove the descriptors at the same time, and then match the smaller list against new frames, that should work.

Tetragramm ( 2017-07-08 17:20:01 -0600 )
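In code, that filtering might look like the following sketch inside the frame 1 loop (descGood is an illustrative name; cv::Mat::push_back appends a row, which keeps descriptor row k aligned with keyPntsGoodL[k]):

    cv::Mat descGood; // declared outside the loop
    for (const cv::DMatch& m : matchR)
    {
        keyPntsGoodL.push_back(keyPntsCurrent[m.queryIdx]);
        keyPntsGoodR.push_back(keyPntsCurrentR[m.trainIdx]);
        descGood.push_back(descCurrent.row(m.queryIdx)); // same index keeps them aligned
    }
    // descGood then becomes descPrevious for matching against frame 2.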

Ah, that makes sense. So in that same loop, something like:

    for (cv::DMatch& m : match)
    {
        cv::Point3f pos = objectPointsTri[m.trainIdx];
        cv::KeyPoint img = keyPntsCurrent[m.queryIdx];

        objectPointsGood.push_back(pos);
        keyPntsGood.push_back(img);

        cv::Mat newDesc;
        newDesc.at<float>(m) = descPrevious.at<float>(m.trainIdx);
    }

Or is there a smarter way?

Thank you yet again!

antithing ( 2017-07-08 17:53:07 -0600 )

No, not that loop. In the frame 1 loop, where you do the push_backs, you also need to save the descriptor that matches each keypoint.

Tetragramm ( 2017-07-09 17:41:18 -0600 )

Got it. Like this:

    for (size_t i = 0; i < matchR.size(); i++)
    {
        keyPntsGoodL.push_back(keyPntsCurrent[matchR[i].queryIdx]);
        keyPntsGoodR.push_back(keyPntsCurrentR[matchR[i].trainIdx]);

        cv::Mat tempDescriptor(descCurrent.rows, descCurrent.cols, descCurrent.depth());
        descCurrent.row(i).copyTo(tempDescriptor.row(count));
        count++;

        //copy tempDescriptor to the return
    }

Does that look better? Do I need to copy cols as well as rows here? Adding this seems to work as expected, but my matches are still wrong in the second frame loop! What else could be happening here? Thank you yet again for your time.

antithing ( 2017-07-10 01:17:34 -0600 )

.. solvePnP results jump around like crazy, and if I draw the matches, they look like this:

[image: bad matches]

antithing ( 2017-07-10 01:40:29 -0600 )

So you do realize that each iteration of the loop sees a new copy of the tempDescriptor Mat? You need to declare that outside the loop.

Tetragramm ( 2017-07-11 17:54:01 -0600 )
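Putting that together, a corrected frame 1 loop might look like this sketch (note it also copies the row indexed by the match, matchR[i].queryIdx, rather than the loop counter i):

    cv::Mat tempDescriptor; // declared once, before the loop; rows accumulate
    for (size_t i = 0; i < matchR.size(); i++)
    {
        keyPntsGoodL.push_back(keyPntsCurrent[matchR[i].queryIdx]);
        keyPntsGoodR.push_back(keyPntsCurrentR[matchR[i].trainIdx]);

        // Row k of tempDescriptor now corresponds to keyPntsGoodL[k].
        tempDescriptor.push_back(descCurrent.row(matchR[i].queryIdx));
    }
    tempDescriptor.copyTo(descPrevious); // matched against the next frame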

The frame 1 loop only runs once, at the start of the application; then tempDescriptors is copied to descPrevious, which is declared outside of all loops. The 'frame 2' loop runs for every frame from then on. I have updated my question with the entire function, for complete clarity. Sorry to take up your time, but I just cannot figure this out; I must be missing something. Thank you yet again for your patience and time.

antithing ( 2017-07-11 21:28:31 -0600 )
