Hi hpc203, I ran into the following error with the .pth file I trained when trying to run convert_onnx.py. Do you have any suggestions?
Here is the traceback:
Traceback (most recent call last):
  File "convert_onnx.py", line 23, in <module>
    net.load_weights(trained_model)
  File "/home/fido/Desktop/yolact-opencv-dnn-cpp-python/convert-onnx/yolact.py", line 434, in load_weights
    self.load_state_dict(state_dict)
  File "/home/fido/anaconda3/envs/yolact/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Yolact:
size mismatch for prediction_layers.0.conf_layer.weight: copying a param with shape torch.Size([6, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([243, 256, 3, 3]).
size mismatch for prediction_layers.0.conf_layer.bias: copying a param with shape torch.Size([6]) from checkpoint, the shape in current model is torch.Size([243]).
size mismatch for semantic_seg_conv.weight: copying a param with shape torch.Size([1, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([80, 256, 1, 1]).
size mismatch for semantic_seg_conv.bias: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([80]).
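Looking at the shapes, I suspect the model built in convert_onnx.py is configured for COCO (80 classes + background) while my checkpoint was trained on a 1-class custom dataset. A quick sketch of the arithmetic, assuming YOLACT's usual head layout (3 priors per location, conf channels = num_priors × num_classes, semantic-seg channels = num_classes − 1; these formulas are my inference from the shapes above, and the helper names are illustrative, not the repo's actual API):

```python
def conf_channels(num_priors: int, num_classes: int) -> int:
    """Output channels of a YOLACT prediction-head conf layer
    (one score per class, including background, per prior)."""
    return num_priors * num_classes


def seg_channels(num_classes: int) -> int:
    """Output channels of semantic_seg_conv (no background class)."""
    return num_classes - 1


# The model convert_onnx.py builds looks like COCO: 80 classes + background.
print(conf_channels(3, 81))  # 243, matches "shape in current model"
print(seg_channels(81))      # 80

# My checkpoint looks like 1 custom class + background.
print(conf_channels(3, 2))   # 6, matches the checkpoint shapes
print(seg_channels(2))       # 1
```

If that reading is right, the fix would be to set the class count (and dataset) in the converter's config to match the training config before calling net.load_weights, rather than loading a 2-class checkpoint into an 81-class model.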