How do I create a 3D map of keypoints from a single camera?
I am working on an object tracking solution that needs to track an object in a scene. From my research, it seems I need to build a 3D map of the keypoints in the scene. So far I have found a way to collect keypoints and estimate the distance from the camera to each point, but how do I use multiple sets of keypoints to map the object in 3D space (the way Vuforia does)? The problem arises when I need to determine how much the camera has moved (both the angle by which it has rotated and its positional movement). How is this solved? Or better yet, can you refer me to well-documented SLAM code that I can use as a reference for developing my own SLAM-based tracking?
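For reference, the "how much has the camera moved" part is a classic two-view geometry problem: match keypoints between two frames, estimate the essential matrix from those matches, and decompose it into a rotation and a translation. Below is a minimal sketch of that step using OpenCV in Python; the frame filenames and the intrinsic matrix `K` are placeholders you would replace with your own images and camera calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- replace with your own calibrated camera matrix.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Two consecutive frames (filenames are just examples).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe keypoints in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match descriptors between the frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the essential matrix (RANSAC rejects bad matches).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# 4. Recover the relative rotation R and translation t of the camera.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

print("Rotation between frames:\n", R)
print("Translation direction (unit scale):\n", t.ravel())
```

Note that with a single camera the translation is only recovered up to an unknown scale; handling that scale ambiguity is one of the things a full monocular SLAM system has to deal with on top of this two-view step.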
good joke ;) (no such thing exists)
@berak Is no one willing to share ideas and resources in this community? I've worked in web development, and there everyone shares code. I can't seem to find anything solid on SLAM (with OpenCV) aside from the basics.
No, you're expecting this to be a "solved problem" where you can just grab an existing solution; it clearly isn't.
@berak Almost all of the documentation I have read about SLAM contains advanced math and no implementation of that math in code. Some very critical things are also left out, like how to calculate the rotation of the camera so you know from which position you are looking at the object: does it use the gyroscope, or does it use math to relate the positions of keypoints across frames? I don't understand how such a well-known technique (SLAM) can be so poorly documented. I am not asking for code; pseudocode will do. I just need the list of things I have to implement in order to create it. I understand that monocular SLAM-based tracking is not trivial, but at the same time (being a high school student) I sometimes think I am wasting time looking at math equations I do not understand.
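On the specific question of rotation: in a purely visual monocular pipeline it is computed from the keypoints themselves, via the essential-matrix decomposition sketched above, not from a gyroscope (though many systems additionally fuse IMU data). At a high level the list of things to code is: detect and match keypoints, estimate the relative pose between two frames, triangulate the matches into 3D map points, then track those map points in later frames to localize the camera. As a small sketch of the triangulation step, assuming the `K`, `R`, `t`, `pts1` and `pts2` variables from the earlier snippet:

```python
import cv2
import numpy as np

def triangulate_map_points(K, R, t, pts1, pts2):
    """Triangulate matched 2D points from two views into 3D map points.

    K      : 3x3 camera intrinsic matrix
    R, t   : rotation and translation of frame 2 relative to frame 1
    pts1/2 : Nx2 arrays of matched pixel coordinates in each frame
    """
    # Projection matrix of the first camera: identity pose.
    P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
    # Projection matrix of the second camera: the recovered relative pose.
    P2 = K @ np.hstack((R, t.reshape(3, 1)))

    # OpenCV expects 2xN arrays of points.
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)

    # Convert from homogeneous (4xN) to Euclidean (Nx3) coordinates.
    pts3d = (pts4d[:3] / pts4d[3]).T
    return pts3d

# Usage, with the variables from the previous snippet:
# map_points = triangulate_map_points(K, R, t, pts1, pts2)
# Each row is one keypoint in 3D, only up to an unknown global scale.
```

This is just the geometric core; a usable tracker also needs outlier rejection, keyframe selection and some form of map maintenance on top of it.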