nro-bot opened 4 years ago
Hi nouyang, writing the conversion code for the humanoid model took me quite a bit of hacking. (It still doesn't work 100% correctly, though it's usable in some cases.)
I was just looking at the DeepMimic "Lion" model:
It seems to have a lot of joints around the spine and the tail area. That might make it harder. But perhaps I just didn't come up with a correct conversion algorithm yet.
Out of curiosity: do you have example BVH files for the DeepMimic "Lion" or do you plan to make them yourself?
@BartMoyaers Hi!
Ah, hm, do you think the "dog" skeleton actually corresponds to the lion? I assumed it was something different, but admit I haven't run DeepMimic with that flag yet.
I planned on using the BVH files available from the Mode-Adaptive Neural Networks for Quadruped Motion Control paper that the authors kindly shared. I imagine the joint counts and everything don't match, so honestly I wasn't sure where to start; I was reading through this blog post.
To be honest, I even had difficulty playing the BVH file and relied on an online viewer, which is sub-optimal because I can't scrub through the mocap and just have to wait. If you have tips on that, much appreciated.
I don't have DeepMimic running at the moment (this was for a class that ended), but if I have time I think my next step will be to run:
```shell
python DeepMimic.py --arg_file args/play_motion_dog3d_args.txt
python DeepMimic.py --arg_file args/run_dog3d_canter_args.txt
```
(or pace or trot instead of canter)
and see what the DeepMimic dog model looks like.
(For what it's worth, the lion model comes from https://zivadynamics.com/promos/free, where it is complimentary mocap with purchase of the software.)
It's a bit discouraging / impressive to hear that it took a lot of hacking to do this format conversion. My overall project goal had been to do something similar to SkillsFromVideo (paper), but with videos of dogs doing tricks instead of human acrobats. I was going to use DeepLabCut, so I'd have yet another conversion step from that data format to BVH in the first place... All really confusing since I'm new to the 3D animation world!
> To be honest, I even had difficulty playing the BVH file and relied on an online viewer, which is sub-optimal because I can't scrub through the mocap and just have to wait. If you have tips on that, much appreciated.
Blender can play BVH files really well out of the box, and allows scrolling through frame-by-frame.
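If you'd rather inspect frames programmatically, the MOTION section of a BVH file is just whitespace-separated floats, one line per frame, so you can jump straight to any frame without waiting for playback. A minimal sketch (assuming a standard BVH layout with `Frames:` and `Frame Time:` headers):

```python
# Minimal sketch: index directly into a BVH file's MOTION section so you
# can jump to an arbitrary frame instead of waiting for real-time playback.
# Assumes the standard layout: HIERARCHY, then MOTION with "Frames:" and
# "Frame Time:" lines, followed by one whitespace-separated line per frame.

def load_bvh_motion(text):
    """Return (frame_time, frames), where frames is a list of float lists."""
    lines = text.splitlines()
    start = next(i for i, l in enumerate(lines) if l.strip() == "MOTION")
    n_frames = int(lines[start + 1].split(":")[1])
    frame_time = float(lines[start + 2].split(":")[1])
    frames = [
        [float(v) for v in l.split()]
        for l in lines[start + 3 : start + 3 + n_frames]
    ]
    return frame_time, frames
```

Then `frames[120]` gives you the channel values at frame 120 directly (the filename here is hypothetical): `frame_time, frames = load_bvh_motion(open("dog_canter.bvh").read())`.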
> I planned on using the BVH files available from the Mode-Adaptive Neural Networks for Quadruped Motion Control paper that the authors kindly shared. I imagine the joint counts and everything don't match, so honestly I wasn't sure where to start; I was reading through this blog post.
Their work is super interesting and I plan to look into it more myself in the future. If the number of joints doesn't match, though, I reckon it will be quite hard to convert the files to fit the Lion model. Maybe it would make sense to just make a new model that corresponds to the number of joints in the dog BVH files! (For this purpose you could check out pybullet/bullet3 here. They have a re-implementation of DeepMimic that allows models to be loaded in URDF format.)
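If the two skeletons at least share a subset of joint names, one crude starting point for a conversion is to keep only the rotation channels of joints present in both, reordered to the target skeleton. A sketch under that assumption (joint names made up, and the root's extra translation channels are ignored for simplicity):

```python
# Hedged sketch: remap one flat BVH frame from a source skeleton to a
# target skeleton that uses a subset of the same joint names. Real BVH
# roots carry 6 channels (3 translation + 3 rotation); this toy version
# assumes a uniform channel count per joint and drops unmatched joints.

def remap_frame(frame, src_joints, tgt_joints, channels_per_joint=3):
    """frame: flat list of floats, channels_per_joint values per source joint."""
    src_index = {name: i for i, name in enumerate(src_joints)}
    out = []
    for name in tgt_joints:
        i = src_index[name]  # assumes every target joint exists in the source
        out.extend(frame[i * channels_per_joint : (i + 1) * channels_per_joint])
    return out
```

For joints the target has but the source lacks (e.g. extra spine/tail joints), you would have to synthesize values instead, perhaps by splitting one source rotation across several target joints, which is where the real hacking starts.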
> I was going to use DeepLabCut, so I'd have yet another conversion step from that data format to BVH in the first place... All really confusing since I'm new to the 3D animation world!
Everyone is a beginner at some point! (Including myself. ;) ) Best of luck!
Hi! The work here sounds specific to the humanoid model; I'm wondering how difficult it would be to add the same functionality for the dog model, a BVHtoDogDeepMimic if you will.
https://github.com/xbpeng/DeepMimic/blob/master/data/characters/dog3d.txt
Thanks for the work on this repo; it really helped me understand how the DeepMimic repository works as well.