athn-nik / teach

Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans"
https://teach.is.tue.mpg.de

Why does the motion seem to have very noticeable foot skating? #1

Closed lucasjinreal closed 1 year ago

lucasjinreal commented 1 year ago

Why does the motion seem to have very noticeable foot skating?

athn-nik commented 1 year ago

Hey @jinfagang, note that all the motions you see are generated, given only text as input. So they are not ground truth, and no smoothness post-processing is applied. We don't apply any explicit constraint to deal with this, such as foot contact prediction. We could improve this in the future.
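For reference, a rough way to quantify the foot skating being discussed is to check the horizontal velocity of a foot joint while it is near the ground. This is only an illustrative sketch on generic joint positions; the joint index, thresholds, and y-up assumption are hypothetical and not taken from the TEACH codebase:

```python
import numpy as np

def foot_skate_ratio(joints, foot_idx, floor_height=0.0, contact_tol=0.05, vel_tol=0.01):
    """Fraction of frames where a foot joint is near the floor but still sliding.

    joints:   (T, J, 3) array of joint positions in meters (y-up assumed here).
    foot_idx: index of the foot/toe joint to check (hypothetical index).
    """
    foot = joints[:, foot_idx]                                    # (T, 3)
    near_floor = foot[:, 1] < floor_height + contact_tol          # (T,)
    horiz_vel = np.linalg.norm(np.diff(foot[:, [0, 2]], axis=0), axis=1)  # (T-1,)
    sliding = horiz_vel > vel_tol
    # A "skate" frame is one that is in contact but still moving horizontally.
    skate = near_floor[1:] & sliding
    return skate.mean() if len(skate) else 0.0
```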

lucasjinreal commented 1 year ago

@athn-nik Is there a way to add an option to export the generated pose? I think this is awesome! If the pose could be more accurate, it would be very useful!

lucasjinreal commented 1 year ago

Also, is it possible to generate actions that are not in the dataset (but are within the word space), like generating an action combined with walking, etc.?

athn-nik commented 1 year ago

Hey @jinfagang, I didn't really understand the first question. We store the vertices in an .npy file, from which they can be loaded using numpy. Yes, I have tried some out-of-distribution texts, and the visualized pairs in the paper and here are not from the training set. But I haven't really pushed the model's abilities. That is why I am also providing an easy demo.
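As a quick illustration of loading the saved vertices, the path below is just a placeholder for whatever file your generation run wrote out (if the file actually stores a dict rather than a raw array, you may need `allow_pickle=True`):

```python
import numpy as np

# Placeholder path; point it at the .npy file produced by your generation run.
verts = np.load("output/sample_vertices.npy")
print(verts.shape)  # typically something like (num_frames, num_vertices, 3)
```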

You can give your own texts and durations and see when and how often it fails 🙂

In general, it works for a lot of actions and even fine-grained directions (e.g. right hand, left foot, backwards, forwards, etc.)

lucasjinreal commented 1 year ago

@athn-nik Hello! I am trying to drive a Mixamo character (not an SMPL model) using the quaternion outputs.

But it seems the result is not right (the Mixamo skeleton actually looks different in Blender). Do you have any idea?

[screenshot attached]

The left one is SMPL, the right one is Mixamo.

athn-nik commented 1 year ago

The skeleton, the joint positions, and the scale of the character are very different, as far as I can see. I would say the easiest thing to do, if you want to visualize that character, is either to use some retargeting technique on the generated motion, or to set the return type here to joints and then get the joint positions or rotations via final_datastruct.rots (not sure of the exact attribute name; maybe it is rots.rots). Once you have the joint rotations/positions from SMPL, you can define a mapping between the SMPL joints and the Mixamo joints, and that should be enough to get your character moving.
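To make that last step concrete, a minimal sketch of such a mapping could look like the following; the Mixamo bone names and the SMPL joint order here are assumptions you would need to verify against your own rig and against the joints returned by the codebase:

```python
# Hypothetical mapping from SMPL joint names to Mixamo bone names;
# check both the SMPL joint order and the Mixamo naming on your rig.
SMPL_TO_MIXAMO = {
    "pelvis":     "mixamorig:Hips",
    "spine1":     "mixamorig:Spine",
    "left_hip":   "mixamorig:LeftUpLeg",
    "right_hip":  "mixamorig:RightUpLeg",
    "left_knee":  "mixamorig:LeftLeg",
    "right_knee": "mixamorig:RightLeg",
    # ... remaining joints omitted for brevity
}

def map_rotations(smpl_rots, smpl_joint_names):
    """Re-key per-joint rotations from SMPL joint names to Mixamo bone names.

    smpl_rots:        per-joint rotations from the generated motion, indexed
                      in the same order as smpl_joint_names.
    smpl_joint_names: list of SMPL joint names in that order.
    """
    mixamo_rots = {}
    for i, name in enumerate(smpl_joint_names):
        target = SMPL_TO_MIXAMO.get(name)
        if target is not None:
            mixamo_rots[target] = smpl_rots[i]
    return mixamo_rots
```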

lucasjinreal commented 1 year ago

@athn-nik Hi, I think the bone lengths only affect how natural the motion looks, but as you can see, the bone orientations are totally different. This is not a matter of likeness; the motion is totally wrong when applied to Mixamo.

I can now convert the Mixamo model to match SMPL, and it then looks normal.

But what I want is to drive a more general model like the one on the right (to avoid having to convert the model to SMPL every time). Do you know how to do that?

lucasjinreal commented 1 year ago

The problem here is that the TEACH model output is based on SMPL: every quaternion is relative to the SMPL T-pose initial position, so its rotations cannot be directly applied to the bones on the right.

Do you know how to retarget from the left one to the right one?

athn-nik commented 1 year ago

I think the SMPL model's skeleton might be useful for mapping the rotations to the correct bones. I hope this helps. The "SMPL Made Simple" tutorial contains interesting information.
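One rough way to read this suggestion as code: compensate for the different rest poses by conjugating each SMPL local rotation with the offset between the two skeletons' rest orientations. This is only a per-bone sketch, not the project's method; the quaternion convention (xyzw below), the direction of the offset, and the rest-pose quaternions themselves are assumptions you would have to extract and verify from your own SMPL and Mixamo rigs:

```python
from scipy.spatial.transform import Rotation as R

def retarget_bone(q_smpl_local, q_smpl_rest, q_mixamo_rest):
    """Very rough per-bone retargeting sketch (all inputs are xyzw quaternions).

    q_smpl_local:  generated local rotation of the SMPL bone.
    q_smpl_rest:   the SMPL bone's rest (T-pose) orientation.
    q_mixamo_rest: the corresponding Mixamo bone's rest orientation.
    """
    r_smpl = R.from_quat(q_smpl_local)
    # Offset between the two rest orientations (one possible convention).
    r_off = R.from_quat(q_mixamo_rest).inv() * R.from_quat(q_smpl_rest)
    # Express the SMPL rotation in the Mixamo bone's rest frame.
    return (r_off * r_smpl * r_off.inv()).as_quat()
```

In practice, real retargeting also has to deal with bone roll, hierarchy differences, and root translation scaling, so a dedicated retargeting add-on in Blender is usually the safer route.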