yangbohust opened this issue 2 years ago
I also got an error when running:
python demo.py -p SSDLite.pth --landmarks
C:\anaconda\envs\gestures2\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
C:\anaconda\envs\gestures2\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
Traceback (most recent call last):
File "C:\Ascend\code\hagrid-master\demo.py", line 204, in <module>
model = _load_model(os.path.expanduser(args.path_to_model), args.device)
File "C:\Ascend\code\hagrid-master\demo.py", line 165, in _load_model
ssd_mobilenet.load_state_dict(model_path, map_location=device)
File "C:\Ascend\code\hagrid-master\detector\ssd_mobilenetv3.py", line 67, in load_state_dict
self.torchvision_model.load_state_dict(torch.load(checkpoint_path, map_location=map_location))
File "C:\anaconda\envs\gestures2\lib\site-packages\torch\nn\modules\module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SSD:
size mismatch for backbone.features.1.0.3.0.weight: copying a param with shape torch.Size([80, 672, 1, 1]) from checkpoint, the shape in current model is torch.Size([160, 672, 1, 1]).
size mismatch for backbone.features.1.0.3.1.weight: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.0.3.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.0.3.1.running_mean: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.0.3.1.running_var: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.1.block.0.0.weight: copying a param with shape torch.Size([480, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([960, 160, 1, 1]).
size mismatch for backbone.features.1.1.block.0.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.0.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.0.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.0.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.1.0.weight: copying a param with shape torch.Size([480, 1, 5, 5]) from checkpoint, the shape in current model is torch.Size([960, 1, 5, 5]).
size mismatch for backbone.features.1.1.block.1.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.1.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.1.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.1.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.2.fc1.weight: copying a param with shape torch.Size([120, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 960, 1, 1]).
size mismatch for backbone.features.1.1.block.2.fc1.bias: copying a param with shape torch.Size([120]) from checkpoint, the shape in current model is torch.Size([240]).
size mismatch for backbone.features.1.1.block.2.fc2.weight: copying a param with shape torch.Size([480, 120, 1, 1]) from checkpoint, the shape in current model is torch.Size([960, 240, 1, 1]).
size mismatch for backbone.features.1.1.block.2.fc2.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.1.block.3.0.weight: copying a param with shape torch.Size([80, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([160, 960, 1, 1]).
size mismatch for backbone.features.1.1.block.3.1.weight: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.1.block.3.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.1.block.3.1.running_mean: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.1.block.3.1.running_var: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.2.block.0.0.weight: copying a param with shape torch.Size([480, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([960, 160, 1, 1]).
size mismatch for backbone.features.1.2.block.0.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.0.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.0.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.0.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.1.0.weight: copying a param with shape torch.Size([480, 1, 5, 5]) from checkpoint, the shape in current model is torch.Size([960, 1, 5, 5]).
size mismatch for backbone.features.1.2.block.1.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.1.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.1.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.1.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.2.fc1.weight: copying a param with shape torch.Size([120, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 960, 1, 1]).
size mismatch for backbone.features.1.2.block.2.fc1.bias: copying a param with shape torch.Size([120]) from checkpoint, the shape in current model is torch.Size([240]).
size mismatch for backbone.features.1.2.block.2.fc2.weight: copying a param with shape torch.Size([480, 120, 1, 1]) from checkpoint, the shape in current model is torch.Size([960, 240, 1, 1]).
size mismatch for backbone.features.1.2.block.2.fc2.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.2.block.3.0.weight: copying a param with shape torch.Size([80, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([160, 960, 1, 1]).
size mismatch for backbone.features.1.2.block.3.1.weight: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.2.block.3.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.2.block.3.1.running_mean: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.2.block.3.1.running_var: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for backbone.features.1.3.0.weight: copying a param with shape torch.Size([480, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([960, 160, 1, 1]).
size mismatch for backbone.features.1.3.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.3.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.3.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.features.1.3.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for backbone.extra.0.0.0.weight: copying a param with shape torch.Size([256, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 960, 1, 1]).
size mismatch for head.classification_head.module_list.1.0.0.weight: copying a param with shape torch.Size([480, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([960, 1, 3, 3]).
size mismatch for head.classification_head.module_list.1.0.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.classification_head.module_list.1.0.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.classification_head.module_list.1.0.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.classification_head.module_list.1.0.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.classification_head.module_list.1.1.weight: copying a param with shape torch.Size([120, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([120, 960, 1, 1]).
size mismatch for head.regression_head.module_list.1.0.0.weight: copying a param with shape torch.Size([480, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([960, 1, 3, 3]).
size mismatch for head.regression_head.module_list.1.0.1.weight: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.regression_head.module_list.1.0.1.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.regression_head.module_list.1.0.1.running_mean: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.regression_head.module_list.1.0.1.running_var: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for head.regression_head.module_list.1.1.weight: copying a param with shape torch.Size([24, 480, 1, 1]) from checkpoint, the shape in current model is torch.Size([24, 960, 1, 1]).
Hello @yangbohust! Maybe you are using a different version of the torchvision package. We use torchvision==0.12.0, as pinned in requirements.txt.
Could you please tell us which version of torchvision you are using?
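For reference, a minimal way to check the installed versions (nothing specific to this repo):

```python
import torch
import torchvision

# Size mismatches like the ones above usually mean the SSDLite/MobileNetV3
# backbone built by the installed torchvision differs from the one the
# checkpoint was exported with, so the torchvision version matters here.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)  # requirements.txt pins 0.12.0
```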
I have had the same error with ResNet18:
Unexpected key(s) in state_dict: "state_dict", "optimizer_state_dict", "epoch", "config".
All packages are installed as specified in requirements.txt.
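If it helps with the ResNet18 case: those unexpected keys suggest the file is a full training checkpoint rather than a bare state_dict. A minimal inspection sketch (the filename ResNet18.pth is just a placeholder for whichever checkpoint you downloaded):

```python
import torch

# Look at what the downloaded checkpoint actually contains. The unexpected
# keys reported above ("state_dict", "optimizer_state_dict", "epoch",
# "config") suggest the model weights are nested under "state_dict" rather
# than sitting at the top level of the file.
checkpoint = torch.load("ResNet18.pth", map_location="cpu")
print(list(checkpoint.keys()))

# If a bare state_dict is needed, unwrap it first (assumption: the layout
# matches the keys shown in the error message).
state_dict = checkpoint.get("state_dict", checkpoint)
```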
MobileNetV3_large.pth is a classification model checkpoint, so you should use demo_ff.py to see its results.
The correct end-to-end procedure is as follows:
1. Create a new environment and install all packages from requirements.txt.
2. Download MobileNetV3_large.pth and place it in the project directory (the same directory as demo_ff.py).
3. Go to the configs folder and open MobileNetV3_large.yaml.
4. In that file, find:
model:
name: MobileNetV3_large
pretrained: False
pretrained_backbone: False
checkpoint: null
5. Change it to:
model:
name: MobileNetV3_large
pretrained: False
pretrained_backbone: False
checkpoint: MobileNetV3_large.pth
6. Run: python demo_ff.py -p configs/MobileNetV3_large.yaml
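A quick sanity check (a sketch, assuming PyYAML is available and the script is run from the project root) to confirm the edit took effect before launching the demo:

```python
import os
import yaml

# Verify that the config now points at the downloaded checkpoint and that
# the file exists where demo_ff.py will look for it.
with open("configs/MobileNetV3_large.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["model"]["checkpoint"])                  # expected: MobileNetV3_large.pth
print(os.path.isfile(cfg["model"]["checkpoint"]))  # expected: True
```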
I ran python demo.py -p MobileNetV3_large.pth and got an error. How can I fix it?