jiequancui opened this issue 4 years ago
Same question. A huge gap between the reproduced and the reported results.
I use the default cifar10.yaml and cifar100.yaml files with imbalance ratio 100 and train from scratch with this codebase. However, I only get 76.58% on CIFAR10 and 41.56% on CIFAR100, which is far from the 79.82% on CIFAR10 and 42.56% on CIFAR100 reported in the paper. Are there any differences between the training settings in this codebase and the settings used for the paper's CIFAR results?
There is no difference between the settings in our code and the settings reported in the paper. I have re-run my experiments on CIFAR-10-IM100 and CIFAR-100-IM100 and achieved even higher results than reported in the paper. Have you kept your settings and environment the same as ours, such as the torch, torchvision, CUDA, and cuDNN versions? I have updated our README.md and added an "Environmental settings" section. Hope it will be helpful to you.
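For anyone comparing environments with the ones listed in the README, a small version-dump script saves some back-and-forth. This is a generic sketch, not part of this codebase; it reports any package that is missing instead of crashing:

```python
# Print the library versions that the authors ask reproducers to match.
# Packages that are not installed are reported as "not installed".
import importlib


def collect_versions(packages=("torch", "torchvision")):
    versions = {}
    for name in packages:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = "not installed"
    return versions


if __name__ == "__main__":
    for name, version in collect_versions().items():
        print(f"{name}: {version}")
    # CUDA / cuDNN versions (only meaningful for a CUDA build of torch)
    try:
        import torch
        print("cuda:", torch.version.cuda)
        print("cudnn:", torch.backends.cudnn.version())
    except ImportError:
        pass
```

Pasting the output of this script into the issue makes it easy to spot a mismatch with the reported torch 1.0.1 / torchvision 0.2.2.post3 / CUDA / cuDNN combination.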
Now I use torch 1.0.1 and torchvision 0.2.2.post3 with CUDA 10.0, and I achieved 78.50% on CIFAR-10-IM100 and 42.31% on CIFAR-100-IM100. I think the "Environmental settings" may be the key issue. Thank you for your reply.
Same question. The environment on my server is torch 1.0.1, torchvision 0.2.2.post3 with CUDA 10.0.130, and I achieved 82.08% on CIFAR-10-IM50 and 46.34% on CIFAR-100-IM50, which are lower than those in your paper. Thank you for your reply.
The CUDA and cuDNN versions are 9.0 and 7.1.3 respectively in our experiments. Hope that will be useful for you.
Has anyone solved the reproduction problem? We still cannot solve it.
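Besides library versions, run-to-run variance from nondeterminism can also account for part of the gap. A common checklist (not specific to this repo) is to fix every RNG seed and force deterministic cuDNN kernels before training. A sketch, assuming a standard PyTorch setup; the numpy/torch parts are skipped gracefully if those libraries are absent:

```python
import random


def set_deterministic(seed: int = 0) -> None:
    """Fix the seeds of the RNGs commonly involved in a PyTorch training run."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Force deterministic cuDNN kernels; may slow training slightly.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass
```

Calling `set_deterministic()` at the top of the training script at least rules out seed-related variance, so that any remaining gap can be attributed to the library/CUDA versions discussed above.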