Closed · Moj-Dev closed this issue 3 years ago
Hi! The network predicts the pose in a coordinate system relative to the camera. To transform from the camera to the world coordinate system, pass the extrinsics.
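As a sketch of what that transform looks like (this is an illustration, not the repository's exact code; it assumes extrinsics `R`, `t` with the convention `x_cam = R @ x_world + t`, so the inverse mapping is `x_world = R^T @ (x_cam - t)`):

```python
import numpy as np

def camera_to_world(pose_cam, R, t):
    """Map predicted 3D joints from camera to world coordinates.

    Assumes extrinsics follow x_cam = R @ x_world + t,
    so the inverse is x_world = R^T @ (x_cam - t).
    pose_cam: (N, 3) array of joint positions in camera coordinates.
    """
    # (pose_cam - t) @ R applies R^T to each row vector
    return (pose_cam - t) @ R
```

For example, with `R` the identity and `t = [1, 0, 0]`, a joint at `[1, 0, 0]` in camera coordinates maps back to the world origin.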
I tried both, with and without extrinsics. I also commented out the `rotate_poses(poses_3d, R, t)` call, but the result is the same. The problem is that the skeleton rotates in place instead of translating.
I just did not get which translation you expect. From the camera's point of view the skeleton really is just rotating. The extrinsics only align the camera coordinate system with the world, so the effect will be the same.
I want to define the human's position as the origin and calculate the position of the moving camera. Here is a video sample showing a human fixed in the center while a camera moves around them. How can I calculate the relative position and orientation of the camera with respect to the human-centered coordinate system?
If you have the extrinsics at each camera position (while it is rotating), you can find the camera's translation relative to the human in the following way:
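The original snippet is not preserved above, so here is a minimal sketch of one way to compute it. It assumes extrinsics with the convention `x_cam = R @ x_world + t` and takes the network's predicted root joint `root_cam` in camera coordinates; then the camera center in world coordinates is `-R^T @ t`, the human root is `R^T @ (root_cam - t)`, and their difference simplifies to `-R^T @ root_cam`:

```python
import numpy as np

def camera_in_human_frame(R, t, root_cam):
    """Camera position relative to the human root joint, in world-aligned axes.

    Assumes extrinsics map world -> camera: x_cam = R @ x_world + t.
    root_cam: predicted human root joint in camera coordinates.

    Derivation:
      camera center in world: c = -R^T @ t
      human root in world:    h =  R^T @ (root_cam - t)
      camera relative to human: c - h = -R^T @ root_cam
    """
    return -R.T @ root_cam
```

Evaluating this at every camera position as it moves traces out the camera trajectory in the human-centered coordinate system.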
It will give you the camera coordinates in the human-centered coordinate system.
Is it possible to calculate this without the camera extrinsics? Does the algorithm calculate the human's orientation with respect to the camera?
No, the network predicts the pose in a coordinate system relative to the camera.
Hope it is clear now.
In my application, the human stands in a fixed position and a camera rotates around them. When I run the algorithm, there is no translational movement in the 3D visualization; it shows only a rotating skeleton. In other words, the algorithm believes the human is rotating in place. How can I fix this issue?