Hi,

I hope this message finds you well. First, I would like to extend my sincere gratitude for your significant contribution to the field of audio-driven co-speech gesture generation.

However, I have run into an issue that I am keen to discuss with you. While benchmarking the BEAT dataset with your source code, I confirmed that your method achieves leading BC, FGD, and Diversity scores. When I visualized the results, though, I encountered something I did not anticipate: jittering in the generated motion. Even after applying the motion-smoothing code you provide on GitHub, the jittering remained.
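For context, the post-processing I applied has roughly the following shape (a minimal sketch, assuming a Savitzky-Golay filter over the per-frame pose parameters; `smooth_motion` and its parameters are my own illustrative names, not taken from your repository):

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_motion(motion, window=9, polyorder=2):
    """Temporally smooth a motion sequence along the frame axis.

    motion: (T, D) array of per-frame pose parameters
    (e.g., flattened joint rotations or positions).
    Assumption: a Savitzky-Golay filter per channel; the window
    and polynomial order here are illustrative, not your defaults.
    """
    return savgol_filter(motion, window_length=window,
                         polyorder=polyorder, axis=0)

# Illustration on a noisy synthetic trajectory (one channel):
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120)
noisy = np.sin(t)[:, None] + 0.05 * rng.standard_normal((120, 1))
smoothed = smooth_motion(noisy)
```

This reduces frame-to-frame noise on the synthetic example, but on my BEAT outputs the residual jitter persists, which is why I suspect the issue lies elsewhere in my pipeline.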
I am reaching out to ask whether there are additional considerations or adjustments that should be made when working with the BEAT dataset to mitigate this issue. Any insights or further guidance you could provide would be immensely valuable and greatly appreciated.
Thank you very much for your time and for sharing your expertise.
Best regards,