Closed: lwmming closed this issue 4 years ago
@lwmming Hi, can you show me how you got the wrong result? If you add the `-e` or `--evaluate` argument in `train.sh` to test the pre-trained model, the model's classifier may be initialized randomly.
@jiangtaoxie Hi, thank you for your reply. I get the wrong result in the following way:

```python
model = mpncovresnet50(pretrained=True)
model = torch.nn.DataParallel(model).cuda()
model = model.eval()
test_acc, test_loss = accuracy(model, test_loader, criterion)
```

It gives 0.166% top-1 accuracy. But I also tested ResNet-50 in the same way:

```python
model = torchvision.models.resnet50(pretrained=True)
model = torch.nn.DataParallel(model).cuda()
test_acc, test_loss = accuracy(model, test_loader, criterion)
```

I get 76.13% top-1 accuracy, which seems correct.
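For reference, here is a minimal sketch of what an `accuracy` helper like the one in the snippet above might look like. The name and signature are taken from the snippet, but the body is my own assumption, not the repository's implementation:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def accuracy(model, loader, criterion):
    """Return (top-1 accuracy in %, mean loss) over a loader.

    A minimal sketch; the repo's own helper may differ in details.
    """
    model.eval()  # important: disables dropout / uses BN running stats
    correct, total, loss_sum = 0, 0, 0.0
    with torch.no_grad():
        for images, targets in loader:
            logits = model(images)
            loss_sum += criterion(logits, targets).item() * targets.size(0)
            correct += (logits.argmax(dim=1) == targets).sum().item()
            total += targets.size(0)
    return 100.0 * correct / total, loss_sum / total

# Tiny synthetic sanity check: a linear model on random data.
data = TensorDataset(torch.randn(32, 8), torch.randint(0, 4, (32,)))
loader = DataLoader(data, batch_size=16)
acc, loss = accuracy(nn.Linear(8, 4), loader, nn.CrossEntropyLoss())
```

Note that even with such a loop, the evaluation is only meaningful if the checkpoint's classifier weights were actually loaded, which is the point of the discussion above.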
@lwmming Hi, sorry for the late reply. Our models are correct if you use them for fine-tuning. However, when evaluating them on ImageNet, the same problem arises. The bug lies in `class Triuvec` in `MPNCOV.py`: I had optimized this function earlier to speed it up, but that changed the order of the output (the values are correct, but the ordering is inconsistent with the one used at training time). I have now re-trained these models with the latest code, so you can try to reproduce the results.
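The kind of ordering mismatch described can be illustrated with a small NumPy sketch: two scan orders over the upper triangle of a symmetric matrix yield the same set of values, but permuted, so a classifier trained on one ordering will misread features vectorized with the other. This is only an illustration of the failure mode, not the actual `Triuvec` code:

```python
import numpy as np

# A small symmetric matrix, standing in for a covariance matrix.
A = np.array([[1., 2., 3.],
              [2., 4., 5.],
              [3., 5., 6.]])

mask = np.triu(np.ones(A.shape, dtype=bool))
row_major = A[mask]      # scan the upper triangle row by row
col_major = A.T[mask.T]  # scan the same entries column by column

print(row_major)  # [1. 2. 3. 4. 5. 6.]
print(col_major)  # [1. 2. 4. 3. 5. 6.]
```

Both vectors contain the same six entries, but in different positions; feeding the second ordering to a fully connected layer trained on the first scrambles the input features, which is consistent with the near-random 0.166% top-1 accuracy reported above.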
Thank you very much!
Why can't I get the reported performance, i.e. a 21.71% error rate, when testing the released model 'mpncovresnet50-15991845.pth'?