victorqin / motion_inbetweening

MIT License

Maya tool #4

Open dj-kefir-siorbacz opened 1 year ago

dj-kefir-siorbacz commented 1 year ago

Hi @victorqin !

Is the Maya tool available for testing/evaluation? Can I somehow use it?

wang-zm18 commented 6 months ago

Hello @dj-kefir-siorbacz, were you able to visualize the output on a specific character? Thank you in advance!

dj-kefir-siorbacz commented 6 months ago

Hi. If you mean access to the tool: no, I haven't gotten any response from the author of the repo. You can, however, load the output .json into Maya with: https://github.com/victorqin/motion_inbetweening?tab=readme-ov-file#visualize-output-motion-in-autodesk-maya. You can also convert this .json to, for example, .bvh using a custom script, but that's a lot of work - you have to write the HIERARCHY section and then add the MOTION data.
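
For reference, a rough sketch of what such a converter could look like. This is hypothetical code, not anything from the repo: the `output.json` file name and the JSON layout (a joint list with names, parent indices and offsets, per-frame root positions, and per-frame per-joint Euler angles in degrees) are all assumptions, so the loading side would need adapting to the actual output format:

```python
# Hypothetical JSON -> BVH converter sketch (not the repo's code). Assumes:
#   joints    = [{"name": str, "parent": int (-1 for root), "offset": [x, y, z]}, ...]
#   positions = per-frame root positions, positions[frame] = [x, y, z]
#   rotations = per-frame Euler angles in degrees, rotations[frame][joint] = [rx, ry, rz]
import json

def write_bvh(path, joints, positions, rotations, fps=30):
    # Build parent -> children lists so we can emit the hierarchy recursively.
    children = {i: [] for i in range(len(joints))}
    for i, joint in enumerate(joints):
        if joint["parent"] >= 0:
            children[joint["parent"]].append(i)

    lines, order = ["HIERARCHY"], []

    def emit(i, depth):
        order.append(i)  # remember DFS order; MOTION rows must match it
        pad = "  " * depth
        tag = "ROOT" if joints[i]["parent"] < 0 else "JOINT"
        lines.append(f"{pad}{tag} {joints[i]['name']}")
        lines.append(pad + "{")
        ox, oy, oz = joints[i]["offset"]
        lines.append(f"{pad}  OFFSET {ox} {oy} {oz}")
        if joints[i]["parent"] < 0:
            lines.append(pad + "  CHANNELS 6 Xposition Yposition Zposition "
                         "Zrotation Yrotation Xrotation")
        else:
            lines.append(pad + "  CHANNELS 3 Zrotation Yrotation Xrotation")
        if children[i]:
            for c in children[i]:
                emit(c, depth + 1)
        else:  # leaf joint: BVH requires an End Site block
            lines.append(pad + "  End Site")
            lines.append(pad + "  {")
            lines.append(pad + "    OFFSET 0.0 0.0 0.0")
            lines.append(pad + "  }")
        lines.append(pad + "}")

    emit(0, 0)  # assumes joint 0 is the root
    lines += ["MOTION", f"Frames: {len(positions)}",
              f"Frame Time: {1.0 / fps:.6f}"]
    for f in range(len(positions)):
        row = list(positions[f])  # root translation first
        for j in order:           # then ZYX rotations in hierarchy (DFS) order
            rx, ry, rz = rotations[f][j]
            row += [rz, ry, rx]
        lines.append(" ".join(f"{v:.6f}" for v in row))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

data = json.load(open("output.json"))  # hypothetical file name and keys
write_bvh("output.bvh", data["joints"], data["positions"], data["rotations"])
```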

dj-kefir-siorbacz commented 6 months ago

Though, from the few examples I checked, the resulting motion inbetweens are not of especially high quality, and you can only reasonably inbetween 20-30 frames at most, so it's nothing special.

wang-zm18 commented 6 months ago

OK, thanks @dj-kefir-siorbacz. Another question: what is the name of the character in the paper demo? I want to use the generated .json files to drive the character. Thank you in advance!

dj-kefir-siorbacz commented 6 months ago

"what is the name of character in the paper demo" - you mean the anime girl? No idea :D

wang-zm18 commented 6 months ago

Yes, I mean the girl's name. I want to render a specific character (an .fbx file). I am not sure about the joint order of the LaFAN1 dataset; it may be different from the joint order in SMPL.

victorqin commented 5 months ago

Sorry for the late reply. Hope this still helps. The Maya tool and the anime girl model are proprietary, so I am not able to share them.

The pretrained models are mainly for reproducing the metrics reported in the paper. They were trained on a maximum transition length of 30 frames and evaluated on 5-, 15-, 30- and 45-frame transitions. This setting is consistent with various papers on motion inbetweening, such as "Robust Motion In-betweening". That's probably why you see the quality deteriorate with transitions longer than 30 frames. If you want to generate longer transitions, the model has to be retrained.

The skeleton is from the LaFAN1 dataset if you run inference with the pretrained models. But I don't think you need to know the skeleton in order to visualize the result: the visualization script mentioned above does not care what skeleton the model was trained on.
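
For anyone who wants to drive their own rig instead, here is a rough sketch of keyframing a loaded clip onto scene joints with `maya.cmds`. This is not the repo's visualization script; the `output.json` file name and the JSON layout (per-frame dicts mapping joint names to Euler angles in degrees, with names matching the scene's joints) are assumptions:

```python
# Hypothetical example, not the repo's visualization script.
# Assumes output.json holds {"rotations": [{jointName: [rx, ry, rz]}, ...]}
# with angles in degrees and joint names matching joints in the Maya scene.
import json
import maya.cmds as cmds

with open("output.json") as fh:  # hypothetical file name
    clip = json.load(fh)

for frame_idx, frame in enumerate(clip["rotations"]):  # hypothetical key
    for joint, (rx, ry, rz) in frame.items():
        for attr, value in zip(("rotateX", "rotateY", "rotateZ"), (rx, ry, rz)):
            # setKeyframe creates/updates an animation key on the attribute
            cmds.setKeyframe(joint, attribute=attr, time=frame_idx, value=value)
```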