genea-workshop / Speech_driven_gesture_generation_with_autoencoder

This is the official implementation for IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation".
https://svito-zar.github.io/audio2gestures/
Apache License 2.0

gesture motion data preprocessing #3

Open ujemd opened 1 year ago

ujemd commented 1 year ago

Hello, thanks for the great work on the dataset and visualization tools.

I have a question about the preprocessing pipeline for the BVH data. I ran a small experiment on the upper-body data using only the JointSelector (upper body, no fingers) and Numpyfier components of the processing pipeline, and then inverted the pipeline. The rendered animations before and after this round trip are quite different: the motion after applying and inverting the pipeline looks exaggerated and quite unnatural.
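To make the experiment concrete, here is a toy sketch of the select-then-invert round trip. This is not the actual pymo JointSelector/Numpyfier code from the repo; it is a minimal NumPy stand-in (all names and the zero-fill default are illustrative assumptions) showing the general shape of the test: keep a subset of channels, then invert the selection back to the full channel layout.

```python
import numpy as np

# Toy stand-in for a JointSelector + Numpyfier round trip (not the real
# pymo classes). Channels dropped by the selector can only be restored
# with a default value on the inverse transform (0.0 here), so the
# round trip is lossy for everything outside the selection.
frames = np.array([[10.0, 20.0, 30.0, 40.0],
                   [11.0, 21.0, 31.0, 41.0]])  # 2 frames x 4 channels
selected = [0, 2]  # e.g. the upper-body channels kept by the selector

def transform(data, keep):
    """Keep only the selected channel columns."""
    return data[:, keep]

def inverse_transform(data, keep, n_channels):
    """Re-expand to the full channel layout, filling dropped channels."""
    out = np.zeros((data.shape[0], n_channels))
    out[:, keep] = data
    return out

roundtrip = inverse_transform(transform(frames, selected), selected,
                              frames.shape[1])
```

In this toy version the selected channels survive the round trip exactly, while the dropped channels come back as defaults rather than their original values; whether the repo's actual inverse behaves the same way is exactly what I am unsure about.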

Does anyone perhaps know why this happens?

Thanks again.

Best, David

P.S. I could attach some videos if you like.