Context: I have a large 2 m x 2 m arena with 4 ArUco markers printed on it; their positions from one corner of the arena are known and fixed. There is another ArUco marker on a robot moving over this arena. Note: the known positions are in 2D.
Problem: I want to find the position of the robot in the arena (with respect to the known corner of the arena). I am using Python: first I detect the markers in the image with detectMarkers(), then I estimate the pose of each marker. The tvec values returned by the pose-estimation function give the position of a marker with respect to the camera coordinate system. This works fine when the camera is perpendicular to the arena, but when the camera is placed at an angle there is a large error in the position.
Is my approach right? Assuming the camera is well calibrated, what is the source of the error?
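For completeness, here is a minimal sketch of the setup and detection step that produces the corners, ids, mtx, dist, actual_size and img used in the snippet below. It assumes the classic cv2.aruco API (the one that provides estimatePoseSingleMarkers), a DICT_4X4_50 dictionary, and calibration results saved to a file; the file names, dictionary and marker size are placeholders, not my actual values.

import cv2
import cv2.aruco as aruco
import numpy as np

calib = np.load('calibration.npz')                     # placeholder file holding the calibration results
mtx, dist = calib['mtx'], calib['dist']

actual_size = 0.05                                     # marker side length in metres (placeholder)
aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_50)   # assumed dictionary
parameters = aruco.DetectorParameters_create()

img = cv2.imread('arena.jpg')                          # placeholder input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
corners, ids, _ = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
aruco.drawDetectedMarkers(img, corners, ids)           # optional: visualise the detections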
import cv2
import cv2.aruco as aruco
import numpy as np

rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, actual_size, mtx, dist)
cv2.imshow('img', img)

index = int(np.where(ids == 0)[0][0])                      # index of the known reference marker (id 0)

rotation_matrix = np.eye(4)                                # 4x4 homogeneous transform: reference marker -> camera
rotation_matrix[:3, :3], _ = cv2.Rodrigues(rvec[index])    # rotation part from the Rodrigues vector
rotation_matrix[:3, 3] = tvec[index].flatten()             # translation part

inverse_rot = np.linalg.inv(rotation_matrix)               # inverse: camera -> reference marker

for i, j, k in zip(ids, tvec, rvec):
    print(i, 'POS:', j)                                    # marker id and its tvec in camera coordinates
    pt = np.ones((4, 1))
    pt[:3] = j.reshape(3, 1)                               # homogeneous point in the camera frame
    rot_point = inverse_rot @ pt                           # express the point in the reference-marker frame
    print(rot_point[:3])                                   # position relative to the known marker
    print(np.sqrt(rot_point[0]**2 + rot_point[1]**2))      # planar distance from the known marker
rotation_matrix is a 4x4 homogeneous matrix containing the rotation and translation of the known arena marker with respect to the camera. Its inverse is used to change the coordinate system from the camera to that known marker (so the marker becomes the origin), converting points expressed in the camera system (the tvecs of the other markers, including the robot's) into the marker's system.
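As a side note on the transform itself, the inverse of a rigid transform [R | t] can also be written analytically as [R^T | -R^T t] instead of calling np.linalg.inv. A minimal sketch (the helper name invert_rigid_transform is mine, not part of the code above):

import numpy as np

def invert_rigid_transform(T):
    """Inverse of a 4x4 homogeneous rigid transform [R | t]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T            # inverse rotation is the transpose
    T_inv[:3, 3] = -R.T @ t        # inverse translation is -R^T t
    return T_inv

# For a proper rigid transform this is equivalent to np.linalg.inv(rotation_matrix).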