Walter0807 / MotionBERT

[ICCV 2023] PyTorch Implementation of "MotionBERT: A Unified Perspective on Learning Human Motion Representations"
Apache License 2.0

Question about azimuth #76

Open XALEX-123 opened 1 year ago

XALEX-123 commented 1 year ago

https://github.com/Walter0807/MotionBERT/assets/84762994/f486a806-0f78-4a1f-a1ae-29ff6cda0b9e

https://github.com/Walter0807/MotionBERT/assets/84762994/63845511-a55e-4f00-9e1e-dc4754cbe352

Hello, thank you for this amazing work! I have a question about azimuth. I used an indoor bike-riding video as input: with azimuth=70 the result looks normal, but once I change azimuth to 0 to display the front view, the body turns out a little crooked. Is that normal? If not, how can I adjust it? Sorry to bother you, sincerely asking.

BiomechatronicsRookie commented 1 year ago

I experienced a similar situation and was thinking of opening an issue too!

Walter0807 commented 1 year ago

Hi, I did not notice this before. Which model are you using? It seems like the global orientation is somewhat wrong.

XALEX-123 commented 1 year ago

I use FT_MB_release_MB_ft_h36m

Walter0807 commented 1 year ago

How about other models (FT_MB_lite_MB_ft_h36m_global_lite)? Also, you can try interactive visualization where you can change the view-angle arbitrarily.

XALEX-123 commented 1 year ago

Yeah, I also tried the global lite version, but somehow it still happens. What is interactive visualization? Could you explain it for me?

Walter0807 commented 1 year ago

You can visualize the 3D pose for a single frame using matplotlib and interact with it with the mouse in a GUI (maybe with the help of x11, mobaxterm, or jupyter notebook depending on your OS).
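For reference, interactive inspection of a single frame can be sketched like this (assuming a `(17, 3)` array of 3D joint positions per frame; the random data here is a placeholder for a real frame from your results):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: one frame of 17 joints, shape (17, 3).
# Replace with a real frame from your inference output.
joints = np.random.rand(17, 3)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
# With an interactive backend (or %matplotlib widget in Jupyter),
# drag with the mouse to rotate; view_init sets the angle in code.
ax.view_init(elev=12.0, azim=70)
plt.show()
```

Dragging in the GUI window changes elevation and azimuth continuously, so you can check at which angles the pose looks crooked.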

XALEX-123 commented 1 year ago

OK, I get it, thanks for your answer. By the way, I don't mean to be rude, but the issue mentioned above can't be solved right away, right? Or is there any guess or hint about it, like where I can find the global orientation?

Walter0807 commented 1 year ago

Sorry, I haven't had time to check that yet; maybe you can try the interactive visualization first to see what happened. Trying different video samples might also help in understanding the issue.

sandstorm12 commented 8 months ago

I am having the same problem. However, instead of extracting the 3D joints using the infer_wild.py script, I extracted the SMPL joints with the infer_wild_mesh.py script, and in my use case the result looks correct.

You can extract the joint positions by adding this line at the end of the infer_wild_mesh.py script (np and osp are already imported there):

```python
np.save(osp.join(opts.out_path, 'joints.npy'), reg3d_all)
```
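If the front view still looks tilted after loading the saved joints, one way to inspect it is to rotate the whole sequence about the up axis yourself before plotting. A minimal sketch (assuming y is the vertical axis; the actual axis convention depends on the model output, and the dummy array stands in for `np.load('joints.npy')`):

```python
import numpy as np

def rotate_about_y(joints, degrees):
    """Rotate (..., 3) joint coordinates about the y (up) axis."""
    theta = np.deg2rad(degrees)
    rot = np.array([
        [np.cos(theta), 0.0, np.sin(theta)],
        [0.0,           1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    return joints @ rot.T

# With real data: joints = np.load('joints.npy')  # shape (T, J, 3)
joints = np.zeros((10, 17, 3))
joints[..., 0] = 1.0                    # dummy points at x = 1
rotated = rotate_about_y(joints, 90.0)  # 90-degree turn maps +x to -z
```

Applying the inverse of the azimuth used for rendering (e.g. `-70` degrees) lets you check whether the crookedness comes from the view angle or from the predicted global orientation itself.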