ZcyMonkey / AttT2M

Code of ICCV 2023 paper: "AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism"
https://arxiv.org/abs/2309.00796
Apache License 2.0

Code update #1

Open · chinnusai25 opened this issue 9 months ago

chinnusai25 commented 9 months ago

What excellent work!

May I ask when the train/eval code and pretrained models of the paper will be made available?

ZcyMonkey commented 6 months ago

> What excellent work!
>
> May I ask when the train/eval code and pretrained models of the paper will be made available?

I apologize for the late response. The pre-trained models and the train/eval code have been updated; please refer to the README file for details. If you have any other questions, feel free to ask them here. However, please forgive me if my replies are sometimes delayed, as I have a full-time job.