eth-siplab / AvatarPoser

Official Code for ECCV 2022 paper "AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing"
https://siplab.org/projects/AvatarPoser
MIT License

How are the leg poses estimated from only the head and hand poses, intuitively? #12

Open yd-yin opened 1 year ago

yd-yin commented 1 year ago

Hi, thanks for your really great work!

Just a silly question: for this "full-body pose tracking from sparse motion sensing" task, how are the leg poses estimated, intuitively? I think there could be severe ambiguity in the leg poses given only the head and hand poses.

For example, in the demo of Figure 7.

I think maybe the temporal information can help eliminate some of the ambiguity. For example, a moving head and swinging arms indicate that the person is walking? I tried to write this intuition down as a rough sketch below.
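For reference, this is how I imagine it as a minimal sketch, assuming the paper's setup of three 6-DoF devices (head and two hands) fed in as a sliding temporal window. The class name, input layout, and all hyperparameters below are my own assumptions, not code from this repository:

```python
import torch
import torch.nn as nn

class SparseToFullBody(nn.Module):
    """Minimal sketch (not the repo's actual model): regress full-body
    joint rotations from a temporal window of sparse tracker signals."""

    def __init__(self, in_dim=54, d_model=256, n_layers=3, n_joints=22):
        super().__init__()
        # in_dim = 3 devices x (6D rotation + 6D angular velocity
        #          + 3D position + 3D velocity) -- my assumption
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # one 6D rotation per joint, legs included
        self.out = nn.Linear(d_model, n_joints * 6)

    def forward(self, x):            # x: (batch, window, in_dim)
        h = self.encoder(self.embed(x))
        return self.out(h[:, -1])    # pose of the last frame: (batch, n_joints*6)

# a 40-frame window of head + hand signals -> one full-body pose
model = SparseToFullBody()
pose = model(torch.randn(1, 40, 54))   # -> (1, 132)
```

If something like this is what happens, then the window is what carries the gait cues (head bobbing, arm-swing rhythm), and the network outputs the most plausible leg pose for those cues rather than actually resolving the ambiguity.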

But in general I was amazed by how well the leg poses align.

Thanks a lot!

ghost commented 1 year ago

I have tried to reproduce this paper. In fact, with only the head and hand poses, we cannot predict the leg poses at all! In addition, the inverse kinematics solver is not used in the code at all! Transformers do not give very good results for predicting human pose (compared with an LSTM or a CNN).
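For comparison, the kind of LSTM baseline I mean would look roughly like this. Purely illustrative: the dimensions and hyperparameters are my assumptions, and this is not code from the repository:

```python
import torch
import torch.nn as nn

class LSTMBaseline(nn.Module):
    """Illustrative LSTM baseline for the same sparse-input task
    (my own sketch, not code from this repository)."""

    def __init__(self, in_dim=54, hidden=256, n_layers=2, n_joints=22):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=n_layers, batch_first=True)
        self.out = nn.Linear(hidden, n_joints * 6)

    def forward(self, x):             # x: (batch, window, in_dim)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])     # (batch, n_joints*6)
```

Swapping this in for the transformer keeps the same input/output interface, so the two can be trained and evaluated under identical conditions.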

Recialhot commented 6 months ago

> I have tried to reproduce this paper. In fact, with only the head and hand poses, we cannot predict the leg poses at all! In addition, the inverse kinematics solver is not used in the code at all! Transformers do not give very good results for predicting human pose (compared with an LSTM or a CNN).

Hello, I'd like to ask whether you have any papers related to CNN or LSTM approaches (with source code). Thanks!