I have a calibrated camera and an application that tracks ArUco markers using OpenCV 3.2.
What I am trying to do is map the locations of the markers, in real time, as they enter the camera frame. So, for example, I have three markers.
In frame 1, I can see marker A. I want this to be located at 0,0,0 in world space. I move the camera and can now see marker A and marker B. I move the camera again, and can now see marker B and marker C, but no longer marker A.
What I want is the location and rotation of each marker in world space, relative to the others. It seems like I should have enough information to do this, but I am struggling to work out the workflow.
What I am doing is:
Keeping a vector of markers, and keeping track of which ones I have seen before and which I have not. As a new marker comes into view, I store its rvec and tvec and invert the transform, flipping from camera pose to marker pose, to get the marker's location in world space.
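To make the "flip" step concrete, here is a minimal sketch of the pose math I mean, in numpy. The rvec/tvec are assumed to be the axis-angle rotation and translation that `cv::aruco::estimatePoseSingleMarkers` returns (marker coordinates into camera coordinates); the `rodrigues` helper reimplements that conversion only so the sketch is self-contained:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle rotation vector -> 3x3 rotation matrix (same convention
    as cv2.Rodrigues, reimplemented here so no OpenCV import is needed)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).ravel() / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def pose_matrix(rvec, tvec):
    """Build the 4x4 homogeneous transform T_cam_marker from rvec/tvec."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(rvec)
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

def invert_pose(T):
    """Invert a rigid transform: (R, t) -> (R^T, -R^T t).
    Inverting T_cam_marker gives T_marker_cam, i.e. the camera's pose
    expressed in that marker's frame."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti
```

If the first marker defines the world origin, inverting its pose this way gives the camera's world pose for that frame.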
This alone does not do what I need. I assume that I need to somehow 'chain' the marker poses together to get the world-space location of each new marker as the camera moves. Is this correct? If anyone could give me a workflow, or steps to follow here, it would be very much appreciated.
Thank you.
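The chaining I have in mind, written out as a sketch: when a frame contains one marker A whose world pose is already known and a new marker B, the camera-frame poses of both markers from that same frame are enough to anchor B in world space. All names here are my own; the inputs are 4x4 homogeneous transforms as produced from rvec/tvec pairs:

```python
import numpy as np

def chain_new_marker(T_world_A, T_cam_A, T_cam_B):
    """World pose of a newly seen marker B.

    T_world_A : known world pose of marker A (identity for the very first marker)
    T_cam_A   : pose of marker A in the camera frame (this frame)
    T_cam_B   : pose of marker B in the camera frame (same frame)
    """
    # Camera pose in world space, recovered through the known marker A.
    T_world_cam = T_world_A @ np.linalg.inv(T_cam_A)
    # Push B's camera-frame pose through the camera's world pose.
    return T_world_cam @ T_cam_B
```

So the workflow I am guessing at is: fix marker A at the identity, use any frame containing A+B to compute B's world pose, then later use a frame containing B+C to compute C's world pose, and so on, storing each result the first time a marker is anchored.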