engineer1109 closed this issue 6 years ago
The same format as Cityscapes.
I found that I can replace test/model.py with train/model.py to work around it. But why did you do this? Also, my accuracy looks low: decoder mIOU 0.4864, encoder mIOU 0.5002. How can I reach an mIOU of 0.60?
See #14 for an explanation about different model files.
How are you evaluating the model? Are you using the CityScapes scripts or the ones provided in this repo? If you are using the ones in this repo, the mIOU will be a little lower because we do not perform the masking operation that the CityScapes evaluation does.
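To illustrate why masking matters, here is a minimal sketch of an mIOU computation with and without excluding void pixels. This is an assumption about the general idea, not the repo's or CityScapes' actual evaluation code; the `IGNORE` value and the `miou` helper are hypothetical.

```python
import numpy as np

IGNORE = 255  # assumed "void" label value, following the common CityScapes convention

def miou(pred, gt, num_classes, mask_ignore=True):
    """Mean IoU over classes present in the data; optionally mask void pixels."""
    if mask_ignore:
        valid = gt != IGNORE
        pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 1-D "image": two classes plus some void pixels that the model
# happened to predict as class 0.
gt   = np.array([0, 0, 1, 1, 255, 255])
pred = np.array([0, 0, 1, 1, 0,   0])

masked   = miou(pred, gt, num_classes=2, mask_ignore=True)   # -> 1.0
unmasked = miou(pred, gt, num_classes=2, mask_ignore=False)  # -> 0.75
```

Without masking, the void pixels inflate the union of class 0, so the unmasked score comes out lower even though the prediction is correct everywhere it can be judged.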
```
python3 VisualizeResults.py --modelType 1 --p 2 --q 8 --weightsDir ./ True .//decoder/espnet_p_2_q_8.pth
```

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 514, in load_state_dict
    own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:101

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "VisualizeResults.py", line 194, in <module>
    main(args)
  File "VisualizeResults.py", line 149, in main
    modelA.load_state_dict(torch.load(model_weight_file))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 519, in load_state_dict
    .format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named conv.conv.weight, whose dimensions in the model are torch.Size([20, 36, 3, 3]) and whose dimensions in the checkpoint are torch.Size([20, 39, 3, 3]).
```
I have trained my own models. I found that the train model has `self.conv = CBR(19 + classes, classes, 3, 1)` while the test model has `self.conv = CBR(16 + classes, classes, 3, 1)`. What is the reason for this difference?
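For what it's worth, the size mismatch in the traceback above follows directly from this arithmetic: with the 20 CityScapes classes, `19 + classes = 39` input channels in the train-time model versus `16 + classes = 36` in the test-time model, which matches the checkpoint/model shapes in the error. A minimal pure-Python sketch (the `check_load` helper is hypothetical and only mimics PyTorch's strict shape check, it is not the actual `load_state_dict` code):

```python
classes = 20  # CityScapes has 20 train classes here (19 + background)

# (out_channels, in_channels, kH, kW) of the final conv in each model file
train_conv_weight = (classes, 19 + classes, 3, 3)  # shape saved in the checkpoint -> (20, 39, 3, 3)
test_conv_weight  = (classes, 16 + classes, 3, 3)  # shape the test model builds   -> (20, 36, 3, 3)

def check_load(model_shape, ckpt_shape, name="conv.conv.weight"):
    """Hypothetical stand-in for the strict shape check load_state_dict performs."""
    if model_shape != ckpt_shape:
        raise RuntimeError(
            f"While copying the parameter named {name}, whose dimensions in "
            f"the model are {model_shape} and whose dimensions in the "
            f"checkpoint are {ckpt_shape}.")

# check_load(test_conv_weight, train_conv_weight)  # raises RuntimeError: 36 vs 39
```

So any checkpoint trained with the train-time definition cannot be loaded into the test-time definition unless the two `CBR(...)` input widths are made to agree, which is why swapping in train/model.py makes the load succeed.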