protossw512 / AdaptiveWingLoss

[ICCV 2019] Adaptive Wing Loss for Robust Face Alignment via Heatmap Regression - Official Implementation
Apache License 2.0

The error may be about the model #8

Closed longlongvip closed 4 years ago

longlongvip commented 4 years ago

I tried to run `sh eval_wflw.sh`, but I get this error:

```
Traceback (most recent call last):
  File "D:/Face/FaceAlignment/AdaptiveWingLoss-master/eval.py", line 72, in <module>
    model_ft.load_state_dict(model_weights)
  File "D:\Anaconda3\envs\fa37\lib\site-packages\torch\nn\modules\module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for FAN:
    size mismatch for l0.weight: copying a param with shape torch.Size([99, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([69, 256, 1, 1]).
    size mismatch for l0.bias: copying a param with shape torch.Size([99]) from checkpoint, the shape in current model is torch.Size([69]).
    size mismatch for al0.weight: copying a param with shape torch.Size([256, 99, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 69, 1, 1]).
    size mismatch for l1.weight: copying a param with shape torch.Size([99, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([69, 256, 1, 1]).
    size mismatch for l1.bias: copying a param with shape torch.Size([99]) from checkpoint, the shape in current model is torch.Size([69]).
    size mismatch for al1.weight: copying a param with shape torch.Size([256, 99, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 69, 1, 1]).
    size mismatch for l2.weight: copying a param with shape torch.Size([99, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([69, 256, 1, 1]).
    size mismatch for l2.bias: copying a param with shape torch.Size([99]) from checkpoint, the shape in current model is torch.Size([69]).
    size mismatch for al2.weight: copying a param with shape torch.Size([256, 99, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 69, 1, 1]).
    size mismatch for l3.weight: copying a param with shape torch.Size([99, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([69, 256, 1, 1]).
    size mismatch for l3.bias: copying a param with shape torch.Size([99]) from checkpoint, the shape in current model is torch.Size([69]).
```

I think the model file is incomplete, so I downloaded it again, but I still get the same error. How can I fix it?
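A quick way to check which configuration a checkpoint was trained for is to inspect the shape of its output layer weights before building the model. This is a minimal sketch, not code from the repo: the checkpoint filename `ckpt/WFLW_4HG.pth` is an assumed path (use your own), the `l0.weight` key comes from the traceback above, and the "+1 boundary channel" interpretation is an assumption based on the paper's boundary prediction.

```python
# Minimal sketch: inspect a checkpoint to see how many output channels it expects.
import torch

ckpt = torch.load("ckpt/WFLW_4HG.pth", map_location="cpu")  # assumed path; adjust to yours
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"

# The traceback shows the key "l0.weight"; fall back to a DataParallel-style prefix just in case.
key = "l0.weight" if "l0.weight" in state_dict else "module.l0.weight"
out_channels = state_dict[key].shape[0]  # 99 for the WFLW checkpoint in the traceback above
print("output channels:", out_channels)

# Assumption: one extra channel is the boundary heatmap, so landmarks = channels - 1 (98 for WFLW).
print("likely NUM_LANDMARKS:", out_channels - 1)
```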

Thank you for this work, which is even better than before, and I look forward to your future releases!

Zico2017 commented 4 years ago

I met the same issue

longlongvip commented 4 years ago

Maybe you can change NUM_LANDMARKS to 98. I tried it but ran into some other new errors, so give it a try; it might be the right fix!

longlongvip commented 4 years ago

Yes, I got it working with NUM_LANDMARKS set to 98.
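For context, the shapes in the error message are consistent with an output head that predicts one heatmap per landmark plus one extra channel (presumably the boundary heatmap), so a 98-landmark WFLW model has 99 output channels while a 68-landmark 300W configuration has 69. The sketch below is not the repository's actual `models.py` code; it only illustrates why NUM_LANDMARKS=98 matches the checkpoint shapes.

```python
import torch.nn as nn

# Simplified sketch of an output head whose channel count depends on NUM_LANDMARKS.
NUM_LANDMARKS = 98
out_channels = NUM_LANDMARKS + 1  # assumption: +1 for the boundary heatmap channel

l0 = nn.Conv2d(256, out_channels, kernel_size=1)   # weight shape [99, 256, 1, 1], as in the checkpoint
al0 = nn.Conv2d(out_channels, 256, kernel_size=1)  # weight shape [256, 99, 1, 1], as in the checkpoint
print(l0.weight.shape, al0.weight.shape)
```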

longlongvip commented 4 years ago

I run it in PyCharm on Windows 10, with a Python environment built with Anaconda. I set batch_size to 2 because my NVIDIA RTX 2060 only has 6 GB of memory, which is quite small.

protossw512 commented 4 years ago

@longlongvip @Zico2017 Hi, I believe this is caused by the operating system. The script I provided is a bash script and will only work in a bash terminal. It looks to me like the arguments are not being passed to Python properly. In that case, you can always set the arguments inside the Python script to avoid this issue.
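If the shell script's flags are not reaching `eval.py` on Windows, one way to "set args inside the python script" is to override the argparse defaults in code. This is a hypothetical sketch: the flag names (`--num_landmarks`, `--batch_size`, `--pretrained_weights`) and the checkpoint path are assumptions and should be matched to whatever your copy of `eval.py` actually defines.

```python
# Hypothetical sketch of hardcoding arguments inside eval.py instead of relying on eval_wflw.sh.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--num_landmarks", type=int, default=68)        # assumed flag name
parser.add_argument("--batch_size", type=int, default=8)            # assumed flag name
parser.add_argument("--pretrained_weights", type=str, default="")   # assumed flag name

# Override the defaults in code so no shell script is needed (e.g. when running from PyCharm).
parser.set_defaults(
    num_landmarks=98,                        # WFLW uses 98 landmarks
    batch_size=2,                            # small batch for a 6 GB GPU
    pretrained_weights="ckpt/WFLW_4HG.pth",  # hypothetical path to the WFLW checkpoint
)

args = parser.parse_args([])  # empty list: ignore the command line entirely
print(args)
```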