amirbar / speech2gesture

code for training the models from the paper "Learning Individual Styles of Conversational Gestures"

Output is only predicted pose plots #4

Closed deepconsc closed 4 years ago

deepconsc commented 4 years ago

After running inference, the output consists only of the predicted pose plots, not a synthesized video.

[Screenshot: inference output showing only the predicted pose plots]
amirbar commented 4 years ago

Hi @deepconsc,

Synthesizing videos was not a major part of this work; it was the focus of a different project, Everybody Dance Now (https://carolineec.github.io/everybody_dance_now/).

I think there should be some open-source implementations of it: https://github.com/search?o=desc&q=everybody+dance+now&s=stars&type=Repositories
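
If you just want a quick video of the predicted plots (rather than a full pose-to-video model like Everybody Dance Now), a minimal sketch using OpenCV to stitch the saved pose-plot frames into an MP4 is below. The frame directory, file naming, and frame rate are assumptions, not part of this repo; adjust them to match your inference output.

```python
# Hypothetical sketch: stitch predicted pose-plot frames (PNG files) into a
# simple video for inspection. Paths, frame naming, and fps are assumptions.
import glob
import cv2

frame_paths = sorted(glob.glob("output/pose_plots/*.png"))  # assumed frame directory
assert frame_paths, "no pose-plot frames found"

# Use the first frame to determine the video dimensions.
first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

# The frame rate is an assumption; set it to the pose sampling rate used at inference.
writer = cv2.VideoWriter(
    "pose_plots.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    15,
    (width, height),
)

for path in frame_paths:
    writer.write(cv2.imread(path))

writer.release()
```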

deepconsc commented 4 years ago

Totally understand. My team and I are doing research on end-to-end speech-to-sign-language conversion/generation, and this is a huge inspiration. You and your team are doing really creative work, with a creative approach. Thank you for the work!

amirbar commented 4 years ago

Thanks :)