ZcyMonkey / AttT2M

Code of ICCV 2023 paper: "AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism"
https://arxiv.org/abs/2309.00796
Apache License 2.0

About the word_emb for cross attention #5

Open buptxyb666 opened 2 months ago

buptxyb666 commented 2 months ago

Thanks for your great work!

I noticed that text prompts are usually shorter than 77 tokens, so why not mask the padding tokens in word_emb when performing cross-attention?
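
For reference, here is a minimal PyTorch sketch of the masking being asked about. It assumes the CLIP-style setup implied by the question (word embeddings padded to 77 tokens acting as keys/values, motion features as queries); the names `motion_feat`, `word_emb`, and `text_len` are illustrative and not from the AttT2M codebase.

```python
import torch
import torch.nn as nn

B, T_motion, T_text, D = 2, 16, 77, 512

attn = nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)

motion_feat = torch.randn(B, T_motion, D)  # queries: motion tokens (illustrative)
word_emb = torch.randn(B, T_text, D)       # keys/values: word embeddings padded to 77
text_len = torch.tensor([5, 9])            # actual token count per sample (illustrative)

# key_padding_mask: True marks key positions the attention should ignore
positions = torch.arange(T_text).unsqueeze(0)          # (1, 77)
key_padding_mask = positions >= text_len.unsqueeze(1)  # (B, 77)

out, weights = attn(motion_feat, word_emb, word_emb,
                    key_padding_mask=key_padding_mask)

# Attention weights over the padded positions come out as zero,
# so padding tokens contribute nothing to the attended output.
```

With `key_padding_mask` set, the masked key positions receive -inf before the softmax, so the padded word embeddings get zero attention weight regardless of their content.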