Closed LaureAcin closed 5 years ago
Are you using the same config file for training and evaluation? It looks like the model you are loading isn't the one you trained.
The checkpoint path for training is here. You could check that, for evaluation, cfg.training.resume in your config file points to the same checkpoint path.
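The check suggested above can be sketched as follows. This is a minimal illustration, not the project's actual config loader: the nested dict layout mirrors the cfg.training.resume key mentioned in the thread, and the file paths are hypothetical.

```python
# Hypothetical config dicts standing in for the parsed YAML/JSON configs.
# The nesting mirrors cfg.training.resume from the thread.
def resume_paths_match(train_cfg, eval_cfg):
    """True when evaluation will load the checkpoint that training wrote."""
    return train_cfg["training"]["resume"] == eval_cfg["training"]["resume"]

train_cfg = {"training": {"resume": "out/checkpoint_best.pth"}}
eval_cfg = {"training": {"resume": "out/checkpoint_best.pth"}}
print(resume_paths_match(train_cfg, eval_cfg))  # True when both configs agree
```

If the two resume paths differ, evaluation is rebuilding and loading a model other than the one you trained, which is exactly the situation that produces shape mismatches at load time.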
Closing due to inactivity.
Hi, Jimmy.
I'm using my own dataset, which contains a single object, and I trained on it. I had problems because my images weren't the same size as the images used for the pretrained models, so I changed the training code and it worked. But now I have a problem with eval.ipynb: at the line model.load_state_dict(checkpoint['state_dict']) I get this error:

```
RuntimeError: Error(s) in loading state_dict for DataParallel:
size mismatch for module.fc1.weight: copying a param of torch.Size([256, 40960]) from checkpoint, where the shape is torch.Size([256, 86528]) in current model.
size mismatch for module.fc_p1.weight: copying a param of torch.Size([15, 256]) from checkpoint, where the shape is torch.Size([3, 256]) in current model.
size mismatch for module.fc_p1.bias: copying a param of torch.Size([15]) from checkpoint, where the shape is torch.Size([3]) in current model.
size mismatch for module.fc_o1.weight: copying a param of torch.Size([20, 256]) from checkpoint, where the shape is torch.Size([4, 256]) in current model.
size mismatch for module.fc_o1.bias: copying a param of torch.Size([20]) from checkpoint, where the shape is torch.Size([4]) in current model.
```
I don't understand how I could change these sizes.
Thanks in advance.
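One way to see which layers disagree before calling load_state_dict is to compare parameter shapes between the checkpoint and the freshly built model. The sketch below uses plain dicts of shape tuples instead of real tensors so it runs standalone; the names and shapes are the ones from the error message above. With a real state_dict you would build such a dict as `{k: tuple(v.shape) for k, v in state_dict.items()}`.

```python
# Sketch: diff two state_dicts by parameter shape to pinpoint the layers
# that were built differently at training vs. evaluation time.
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Map each parameter name to (checkpoint shape, model shape) where they differ."""
    return {
        name: (shape, model_shapes[name])
        for name, shape in ckpt_shapes.items()
        if name in model_shapes and model_shapes[name] != shape
    }

# Shapes taken from the RuntimeError in this thread.
ckpt_shapes = {
    "module.fc1.weight": (256, 40960),
    "module.fc_p1.weight": (15, 256),
    "module.fc_o1.weight": (20, 256),
}
model_shapes = {
    "module.fc1.weight": (256, 86528),
    "module.fc_p1.weight": (3, 256),
    "module.fc_o1.weight": (4, 256),
}
print(sorted(find_shape_mismatches(ckpt_shapes, model_shapes)))
# → ['module.fc1.weight', 'module.fc_o1.weight', 'module.fc_p1.weight']
```

Every mismatch here points at the model definition, not the checkpoint file: the evaluation config is constructing a model with different layer sizes (and different output dimensions for fc_p1/fc_o1) than the one that was trained, so the fix is to build the evaluation model from the same config used for training rather than to edit the sizes by hand.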