npapargyr opened this issue 4 years ago
One more question: how can we run inference given only the source (input character) .bvh file, choosing just the destination character we want to retarget to? In the demo, the same animation .bvh file exists for both the input and the destination character; is it necessary to have the same animation's .bvh file for the destination character as well?
Ideally it's possible. But if there is a big difference in T-pose or animation style, as reported in our paper, it could fail. To train it, prepare your own .npy file following the instructions and change the character's name in datasets/__init__.py.
It's just for comparison purposes. A paired bvh is not necessary.
Thanks for your response. So which script should we use to give only a .bvh file as input and then select the character we want to retarget to?
Haha, that's a good question. We don't have one for customized data. You can try to tweak eval_single_pair.py, or I'll write a script for that, ideally tomorrow.
I am really looking forward to seeing that script :)
I also have the same question!
Sorry guys, I'm quite occupied these days. If you are really in a hurry to test, try renaming your character like the examples (including std_bvhs and mean_var), and replace one of the "paired" bvh files with an arbitrary bvh of the same length as the other. It should work.
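The renaming step above can be scripted. This is a minimal sketch, not part of the repo: the dataset root and the std_bvhs / mean_var subdirectory layout are assumptions based on the advice above, so adjust the paths to match your checkout.

```python
import shutil
from pathlib import Path

def clone_character_files(root, old, new, subdirs=('std_bvhs', 'mean_var')):
    """Copy every per-character file of `old` so it also exists under `new`.

    `root` is assumed to be the dataset directory of the retargeting repo
    (e.g. retargeting/datasets/Mixamo); the layout here is an assumption.
    """
    copied = []
    for sub in subdirs:
        for f in Path(root, sub).glob(f'{old}*'):
            target = f.with_name(f.name.replace(old, new, 1))
            shutil.copy(f, target)
            copied.append(target)
    return copied
```

For example, `clone_character_files('retargeting/datasets/Mixamo', 'MyRig', 'Kaya')` would duplicate every statistics file of a custom `MyRig` character under an example-style name.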
Thank you for the update Peizhuo! Will check this way and if that attempt fails, then whenever you have time and upload that script we will give it a look.
Hi guys, thanks for publishing this project, I'm actually really excited about it! :) I read the paper and had a look at the code but I've still got a few questions though:
Is there any followup on this particular issue "retargeting to a custom character" (a script or anything)?
Could you explain this to me please?
Following line in demo.py
example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
gives me this character list [['Aj', 'BigVegas'], ['Goblin_m', 'Goblin_m']] in eval_single_pair.py.
If I understand correctly this means the Aj-skeleton is retargeted to the BigVegas-skeleton with the motion being the 'Dancing Running Man'. But what is the Goblin_m for?
What I'm currently trying to do is retarget a random Mixamo character (skeleton and motion) from the demo dataset to a custom humanoid skeleton.
I tried the approach you mentioned above with renaming my custom character like in the examples, my character now being "Kaya":
example('Big Vegas', 'Kaya', 'Baseball Pitching.bvh', 'intra', './examples/intra_structure')
But it throws the following error in combined_motion.py line 158:
RuntimeError: Sizes of tensors must match except in dimension 0. Got 91 and 83 in dimension 1 (The offending index is 1)
I'm not sure if the problem is the length of the bvh file that you mentioned. If so, could you please explain which bvh files need to be the same length (the input and the result bvhs?) and how to actually adjust them?
Looking forward to your answer, thank you very very much!!
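The same-length requirement mentioned earlier can be checked, and a longer clip trimmed, with a small script. This is a minimal sketch and not part of the repo; it only assumes the standard BVH layout (a MOTION section containing a `Frames:` line, a `Frame Time:` line, and then one line of channel values per frame).

```python
from pathlib import Path

def bvh_frame_count(path):
    """Return the number of motion frames declared in a BVH file."""
    for line in Path(path).read_text().splitlines():
        if line.strip().startswith('Frames:'):
            return int(line.split(':')[1])
    raise ValueError(f'no Frames: line in {path}')

def truncate_bvh(src, dst, n_frames):
    """Write a copy of `src` keeping only the first `n_frames` motion frames."""
    out, kept, in_motion = [], 0, False
    for line in Path(src).read_text().splitlines():
        s = line.strip()
        if s.startswith('Frames:'):
            out.append(f'Frames: {n_frames}')  # rewrite the declared count
            continue
        if s.startswith('Frame Time:'):
            out.append(line)
            in_motion = True  # everything after this line is frame data
            continue
        if in_motion:
            if kept >= n_frames:
                break
            kept += 1
        out.append(line)
    Path(dst).write_text('\n'.join(out) + '\n')
```

With this you could trim the longer of the two "paired" bvh files down to `min(bvh_frame_count(a), bvh_frame_count(b))` frames so they match.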
@rainer7 @PeizhuoLi Did you find a solution? I'm getting a similar shape mismatch error, following the same routine as @rainer7. Error below:
File "eval_single_pair.py", line 78, in main
model.load(epoch=20000)
File "/path/deep-motion-editing/retargeting/models/architecture.py", line 288, in load
model.load(os.path.join(self.model_save_dir, 'topology{}'.format(i)), epoch)
File "/path/Retarget-Motion/deep-motion-editing/retargeting/models/integrated.py", line 82, in load
self.auto_encoder.load_state_dict(torch.load(os.path.join(path, 'auto_encoder.pt'),
File "/path/envs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AE:
size mismatch for enc.layers.0.0.mask: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([192, 96, 15]).
size mismatch for enc.layers.0.0.weight: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([192, 96, 15]).
@npapargyr What do you mean by arbitrary length? Same question as @rainer7.
When you download Mixamo data from the Mixamo website and preprocess it using the given steps, you get a bvh file with more joints than the one uploaded on Drive (you can compare them by printing the number of edges or the joint topology). This creates the problem of a different number of joints, and thus the mismatch occurs.
Is it possible to train the model combining your .npy files (Mixamo characters) with a custom rig's .npy (a new custom character)?