Closed bhomaidan1990 closed 6 months ago
Thanks for pointing this out. This is because we renamed the layer when cleaning the code for release. You could directly replace the "motion_transformer.transformer" with "motion_mlp.mlps" and it should be fine.
I didn't manage to find "motion_transformer.transformer" or "mlps" in motion_mlp. Can you please point me to that line in the code? Thanks.
You could either replace "motion_mlp" with "motion_transformer" in the code; or after loading the model, change the keys of the loaded model "motion_transformer" to "motion_mlp".
I will modify the keys of the model and upload it later today.
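The second option (renaming the keys after loading the checkpoint) can be sketched as below. This is a minimal, hypothetical helper, not code from the repo; the old/new key prefixes are taken from the comments above:

```python
# Sketch: rename checkpoint keys saved under the old layer name
# "motion_transformer.transformer" so they match the released code,
# which names the layer "motion_mlp.mlps".
def rename_state_dict_keys(state_dict,
                           old="motion_transformer.transformer",
                           new="motion_mlp.mlps"):
    """Return a copy of state_dict with `old` replaced by `new` in every key."""
    return {k.replace(old, new): v for k, v in state_dict.items()}

# Example with dummy values standing in for real tensors:
ckpt = {"motion_transformer.transformer.0.fc0.weight": 0.0,
        "temporal_fc_in.weight": 1.0}
renamed = rename_state_dict_keys(ckpt)
# Matching keys are rewritten; all other keys pass through unchanged.
```

After renaming, the resulting dict can be passed to `model.load_state_dict(...)` as usual.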
If you could also upload your pre-trained AMASS model, that would be very kind of you. Thanks in advance.
I added a line to replace the keys in the state_dict and it worked:
state_dict = {k.replace("motion_transformer.transformer", "motion_mlp.mlps"): v for k, v in state_dict.items()}
It goes after line 108 in baseline_h36m/test.py:
model = Model(config)
state_dict = torch.load(args.model_pth)
# line to add
state_dict = {k.replace("motion_transformer.transformer", "motion_mlp.mlps"): v for k, v in state_dict.items()}
model.load_state_dict(state_dict, strict=True)
model.eval()
model.cuda()
But the result is strange... [13.4, 24.0, 47.7, 58.4, 78.6, 92.1, 105.2, 112.0] is much worse than the paper's results QAQ
Following the Readme about Evaluation:
python test.py --model-pth /home/user/git/siMLPe/checkpoints/h36m_model.pth
When loading the pre-trained model I get an error: Missing key(s) in state_dict.
Can you please tell me how I can solve this? Thanks in advance.