Hey, I want to better understand how the results of solvePnP are affected by the input image. If I'm displaying a 1080x2220 camera image to the user in an AR app, do I need to send that same image to OpenCV, or can I send a much lower-resolution version for speed reasons? I'm having trouble with the translation returned by solvePnP, and I get the impression this is because I send a 480x640 image to OpenCV. Would I need to scale the resulting pose matrix in some way to align the calculation with the higher-resolution camera image in the app?
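For context on what I've tried: my understanding is that solvePnP itself doesn't care about resolution, as long as the camera matrix matches the image the 2D points were detected in, so downscaling should mean scaling fx, fy, cx, cy by the same resize factors rather than scaling the output pose. A minimal sketch of that idea (the intrinsic values and the helper function here are placeholders I made up, not my real calibration):

```python
import numpy as np

def scale_intrinsics(K, sx, sy):
    """Scale a 3x3 camera matrix for a resized image.

    fx and cx scale by the width ratio sx; fy and cy scale by the
    height ratio sy. The pose from solvePnP then needs no rescaling.
    """
    S = np.array([[sx, 0.0, 0.0],
                  [0.0, sy, 0.0],
                  [0.0, 0.0, 1.0]])
    return S @ K

# Hypothetical intrinsics calibrated at full 1080x2220 resolution
K_full = np.array([[1500.0,    0.0,  540.0],
                   [   0.0, 1500.0, 1110.0],
                   [   0.0,    0.0,    1.0]])

# A clean half-size downscale (540x1110) keeps the aspect ratio;
# note that 480x640 is NOT the same aspect as 1080x2220, so going
# to that size would involve a crop that also shifts cx/cy.
K_half = scale_intrinsics(K_full, 0.5, 0.5)
print(K_half)
```

Is that the right mental model, i.e. the translation from solvePnP is already in world units and only the intrinsics need to track the image size?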
My object in Unity always positions itself at the bottom-left of the marker. If the marker is rotated in 90-degree increments, the object rotates around the marker's centerpoint so that it ends up at whichever corner is now the bottom-left.
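I wonder if this part is about where I put the origin of my 3D object points. Right now they start at (0, 0, 0) in one corner, so the translation solvePnP returns would point at that corner rather than the marker center. A sketch of the two conventions (the side length and point ordering are just illustrative assumptions, not my actual setup):

```python
import numpy as np

# Hypothetical marker side length in world units
s = 0.1

# Corner-origin object points: the pose from solvePnP places the
# marker's corner at the returned translation, so an object parented
# to that pose sits offset toward one corner of the marker.
corner_origin = np.array([[0.0, 0.0, 0.0],
                          [s,   0.0, 0.0],
                          [s,   s,   0.0],
                          [0.0, s,   0.0]], dtype=np.float32)

# Center-origin object points: shifting by half the side length puts
# the pose origin at the marker center, which stays fixed when the
# marker is rotated in 90-degree increments.
center_origin = corner_origin - np.array([s / 2, s / 2, 0.0],
                                         dtype=np.float32)
print(center_origin.mean(axis=0))  # centroid at the origin
```

Does that explain the corner-seeking behavior, or is something else going on with the rotation?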