Daniil-Osokin / lightweight-human-pose-estimation-3d-demo.pytorch

Real-time 3D multi-person pose estimation demo in PyTorch. OpenVINO backend can be used for fast inference on CPU.

3D coordinate calibration #45

Closed · Fan-loewe closed this issue 3 years ago

Fan-loewe commented 3 years ago

Hi Daniil,

Me again ;) I encountered some problems with the 3D calibration. I calibrated with the ROS camera calibration node, following http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration, and then passed the resulting R and T matrices via the --extrinsics parameter.

I obtained the 3D positions of the left and right shoulders at keypoints 3 and 9: left shoulder [-72.70494 27.558527 -7.819101], right shoulder [-56.104603 18.041153 -10.176827]. Since the line between the two shoulders is parallel to one axis, the coordinates along the other two axes should be almost identical for both shoulders, yet the values differ noticeably along all three axes. Do you have any idea what the reason might be?
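For reference, this is the quick check I did (a minimal numpy sketch using the values above):

```python
import numpy as np

# 3D shoulder positions taken from the demo output (keypoints 3 and 9)
left_shoulder = np.array([-72.70494, 27.558527, -7.819101])
right_shoulder = np.array([-56.104603, 18.041153, -10.176827])

# If the shoulder line were parallel to a single axis, the difference vector
# should be dominated by that axis and close to zero along the other two.
diff = left_shoulder - right_shoulder
print(diff)                  # roughly [-16.6, 9.5, 2.4] -> no clearly dominant axis
print(np.linalg.norm(diff))  # shoulder-to-shoulder distance in the demo's units
```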

Regarding the calibration, is there also a place in the code to pass the distortion parameters?

One more question about the axis directions: at line 107 of demo.py, why do we need to change the direction and order of the axes? What do the axes look like after this transformation?

Thanks a lot for your help! Best, Fan

Daniil-Osokin commented 3 years ago

As a first step, try to compare the 3D values obtained with the default extrinsics and with your new ones. I believe they should have a similar magnitude; this checks whether the two extrinsics formats correspond to each other. We need to swap the axes because the coordinate systems used for annotation and for visualization differ. You can find out which axis is which by passing known coordinates, e.g. (0, 0, 1), into the drawing code and seeing where a point with those coordinates ends up.
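Something along these lines (a minimal numpy sketch; the particular permutation and signs are only illustrative, check demo.py for the actual ones):

```python
import numpy as np

# Probe points: a unit step along each axis of the annotation/network frame.
probe = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

def to_visualization_frame(points_3d):
    # Example of an axis swap + sign flip between the annotation coordinate
    # system and the one used for visualization (illustrative only).
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([-z, x, -y], axis=1)

print(to_visualization_frame(probe))
# Feed each probe point into the drawing code to see which displayed axis it
# ends up on.
```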

Fan-loewe commented 3 years ago

Hi Daniil, thank you for your reply! I found the axes' directions. ;)

I still have some problems with the extrinsics. I guess it might be due to a different image size used during calibration.

I noticed that demo.py only takes height_size and fx, which seems to assume the image is square. What if the height differs from the width? For example, my frames are 480 high and 640 wide; can I still use the pre-trained model human-pose-estimation-3d.pth, or do I need to retrain my own model?

Thanks for your help!

Daniil-Osokin commented 3 years ago

Hi! Extrinsics are used only for visualization; the network accepts images captured from different angles. Usually fx ≈ fy, so for simplicity a single value is used. The image aspect ratio can be anything. The default extrinsics correspond to the HD camera with id 0 from the CMU Panoptic Dataset, maybe that helps somehow.
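If you need a concrete fx value, a common back-of-the-envelope estimate comes from the horizontal field of view (a minimal sketch assuming a simple pinhole model; the FOV number below is only an example, take it from your camera's datasheet or calibration):

```python
import math

width_px = 640    # image width in pixels (example: 640x480 frames)
hfov_deg = 69.4   # horizontal field of view in degrees (example value)

# Pinhole model: fx = (W / 2) / tan(HFOV / 2). fy is usually very close to fx,
# which is why a single focal length value is enough for the demo.
fx = (width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
print(fx)         # ~462 px for these example numbers
```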

Fan-loewe commented 3 years ago

Hi Daniil, I wanted to examine the 3D pose accuracy, so I used a RealSense camera and measured the real depth of a joint. However, the depth from the pose estimation is ~10 cm less than the depth reported by the RealSense camera.

I checked issue #42, but I found that not only fx but also the extrinsic parameters affect the 3D joint positions. When I use the default parameters, the result looks more reasonable. I am wondering how to determine the extrinsic parameters: can I simply apply the default extrinsics to other cameras?

One more question about the code: could you point me to where you minimize the 3D-to-2D projection error to find the root position in 3D space? Thanks!!

Daniil-Osokin commented 3 years ago

The code which finds the translation is here: https://github.com/Daniil-Osokin/lightweight-human-pose-estimation-3d-demo.pytorch/blob/master/modules/parse_poses.py#L124-L135. The discussion on the first two questions has dragged on a bit, maybe it is better to have a talk (if so, just mail me at gmail and we will schedule a meeting)?
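In essence it finds the root translation that aligns the root-relative 3D pose with the 2D detections under a pinhole camera. Roughly (a simplified weak-perspective sketch with a hypothetical helper, not the exact code from parse_poses.py):

```python
import numpy as np

def estimate_translation(pose_3d, pose_2d, fx):
    """Rough weak-perspective estimate of the root translation.

    pose_3d: (N, 3) root-relative 3D joints, pose_2d: (N, 2) detections in
    pixels relative to the principal point, fx: focal length in pixels.
    """
    mean_3d = pose_3d[:, :2].mean(axis=0)
    mean_2d = pose_2d.mean(axis=0)
    # Ratio of 2D to 3D spread approximates the projection scale s = fx / tz.
    spread_2d = np.sqrt(((pose_2d - mean_2d) ** 2).sum())
    spread_3d = np.sqrt(((pose_3d[:, :2] - mean_3d) ** 2).sum())
    scale = spread_2d / spread_3d
    tz = fx / scale
    tx, ty = mean_2d / scale - mean_3d
    return np.array([tx, ty, tz])
```

Note that the estimated depth scales linearly with fx, which is why a wrong focal length shows up directly as a depth offset like the ~10 cm you observed.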

Fan-loewe commented 3 years ago

Hi Daniil, thank you. This is my email, fanwu333@gmail.com. Looking forward to hearing from you.

Regards, Fan
