asked 2019-02-23 11:47:08 -0600 by Nbb

3D Pose/Orientation Estimation (non-programming)

I am experimenting with detecting a person's orientation given his 3D pose and was hoping to get some help here (non-programming related).

This work, which uses the Human3.6M dataset (https://github.com/una-dinosauria/3d-pose-baseline), generates a 3D pose from a 2D pose using neural networks. The output is then transformed from world coordinates to camera coordinates via the dataset's camera parameters. From the camera-frame joints I can easily get the person's orientation (the direction he is facing) via a cross product.
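
For concreteness, here is a minimal sketch of the cross-product step, assuming NumPy and camera-frame 3D joints. The joint names (left hip, right hip, spine) are placeholders for whichever skeleton layout is used, not the exact Human3.6M indices. The idea is that the hip-to-hip vector and the torso-up vector span the body's frontal plane, so their cross product is that plane's normal, i.e. roughly the facing direction:

```python
import numpy as np

def facing_direction(left_hip, right_hip, spine):
    """Approximate the facing direction as the normal of the body's frontal plane."""
    left_hip, right_hip, spine = map(np.asarray, (left_hip, right_hip, spine))
    across = right_hip - left_hip              # hip-to-hip axis
    up = spine - 0.5 * (left_hip + right_hip)  # pelvis-to-spine (torso) axis
    normal = np.cross(up, across)              # normal of the frontal plane
    # The sign (front vs. back) depends on the handedness of the coordinate
    # frame; swap the cross-product operands if the result points backwards.
    return normal / np.linalg.norm(normal)
```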

I now have a dataset with no information at all about the camera parameters and was wondering how I can go about this. Can I simply use the camera parameters from the Human3.6M dataset? Would that completely skew/distort my output? Note that I only require the orientation, so I am not concerned with errors in the pedestrian's distance from the camera.
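
For reference, the world-to-camera step I mean above is just a rigid transform. A minimal sketch, assuming the common extrinsic convention x_cam = R(x_world - t), where R and t stand for whichever rotation matrix and camera centre are supplied:

```python
import numpy as np

def world_to_camera(points_world, R, t):
    """Rigidly map an (N, 3) array of world-frame joints into the camera frame."""
    points_world = np.asarray(points_world)
    # Subtract the camera centre, then rotate into the camera's axes.
    return (R @ (points_world - t).T).T
```

Since this is rigid, it leaves the skeleton's internal geometry untouched; the choice of R and t only determines which frame the resulting facing vector is expressed in.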