Closed hetul-patel closed 5 years ago
Thank you for your comment. The error is caused by the use of DataParallel in PyTorch; refer to this link for more detail. I fixed the problem by creating a new ordered dict without the 'module' prefix.
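For anyone hitting the same mismatch, a minimal sketch of the prefix-stripping fix (the helper name `strip_module_prefix` and the placeholder values are mine, not from the repo):

```python
from collections import OrderedDict

def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to every key."""
    new_state_dict = OrderedDict()
    for key, value in state_dict.items():
        new_key = key[len("module."):] if key.startswith("module.") else key
        new_state_dict[new_key] = value
    return new_state_dict

# Plain numbers stand in for weight tensors here, just to show the renaming
loaded = OrderedDict([("module.basenet.slice1.0.weight", 1),
                      ("module.basenet.slice1.0.bias", 2)])
print(list(strip_module_prefix(loaded).keys()))
# → ['basenet.slice1.0.weight', 'basenet.slice1.0.bias']
```

You can then call `net.load_state_dict(strip_module_prefix(torch.load(path, map_location='cpu')))` on the unwrapped model.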
@YoungminBaek When I load craft_refiner_CTW1500.pth, I get the same error. I am using the latest code. Can you tell me why? Thanks.
The issue occurred while running the test.py script with --cuda=False: key names mismatched when loading the pretrained weights with the CPU-only PyTorch build. Resolved temporarily by adding net = torch.nn.DataParallel(net) in test.py in case args.cuda=False.
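The workaround works because wrapping the model in DataParallel makes its own key names gain the same 'module.' prefix the checkpoint has. A small sketch with a stand-in model (the real repo uses CRAFT(); nn.Linear is only for illustration):

```python
import torch
import torch.nn as nn

# Tiny stand-in model; the actual script builds CRAFT() instead
net = nn.Linear(4, 2)

# Simulate a checkpoint that was saved from a DataParallel-wrapped model
parallel_ckpt = {"module." + k: v for k, v in net.state_dict().items()}

# net.load_state_dict(parallel_ckpt)  # would raise "Missing key(s) in state_dict"

# Wrapping first makes the key names line up again
wrapped = torch.nn.DataParallel(net)
wrapped.load_state_dict(parallel_ckpt)  # loads cleanly
```

Note that DataParallel can be constructed even without a GPU, so this also works for the --cuda=False path.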
Hi, I met the same problem as you, but when I used your method it still didn't work. The error is Missing key(s) in state_dict: "basenet.slice1.0.weight", "basenet.slice1.0.bias"... and Unexpected key(s) in state_dict: "last_conv.0.weight"... Thanks!
I think craft_refiner_CTW1500.pth is the weight file for --refiner_model, while the detector weights are still craft_mlt_25k.pth. In short, I use the following command and it works fine:
python test.py --trained_model=craft_mlt_25k.pth --refiner_model=craft_refiner_CTW1500.pth --test_folder=*** --refine