bmsookim / wide-resnet.pytorch

Best CIFAR-10, CIFAR-100 results with wide-residual networks using PyTorch
MIT License
462 stars 129 forks

Unable to reproduce the accuracy of WRN-28-10 on CIFAR-100 #1

Open wishforgood opened 6 years ago

wishforgood commented 6 years ago

I git cloned the code and ran it with the command suggested in the README. However, the top-1 accuracy stopped at 76% after 160 epochs. I've seen the learning curve in the paper and found that my model failed to reach 65% accuracy before 60 epochs; instead, it landed around 6% lower. Could you please give some suggestions for debugging?

bmsookim commented 6 years ago

Hi, thanks for visiting my repository.

Can I get details about your configuration, such as the mean/std values and dropout rate you've adopted during training?

That will help me a lot in looking into the problem. Thanks :)

Sincerely, Bumsoo Kim


wishforgood commented 6 years ago

I haven't changed the mean/std values or the dropout rate. The mean/std values are (0.5071, 0.4867, 0.4408) and (0.2675, 0.2565, 0.2761), and the dropout rate is 0.3, both just as you configured in the code. The learning rate schedule is also unchanged. I used a Tesla K40c to run the code; each epoch took about 10 minutes, which is quite strange.
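
For reference, here is a minimal sketch of how these CIFAR-100 mean/std values are typically plugged into a torchvision pipeline; the crop/flip augmentations are assumptions, not necessarily the repo's exact transforms:

```python
import torchvision.transforms as transforms

# CIFAR-100 per-channel mean/std quoted above.
cifar100_mean = (0.5071, 0.4867, 0.4408)
cifar100_std = (0.2675, 0.2565, 0.2761)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),     # standard CIFAR augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(cifar100_mean, cifar100_std),
])
```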

bmsookim commented 6 years ago

I'll run the code tonight and try to figure out if anything is wrong with it.

Thanks for letting me know :) I'll reply soon!

wishforgood commented 6 years ago

That will be great, thank you very much!

wishforgood commented 6 years ago

Hi, could you please show me your training curve?

bmsookim commented 6 years ago

Hi, sorry for the late response.

I've tested the model on multiple GPUs (2 Titan X's) and on a single GPU (a single GTX 1070) over the last two days.

To cut to the result: the best accuracy after 200 epochs reached 79.73% and 80.05%, respectively.

If you need specific logs for the training process, I'll start training a new model right away and upload the training log in a separate folder.

Since you haven't changed any configurations within the repository, I'll double-check the model 5 more times in various environments. As each training run takes about 15 hours (on a single GPU), it will unfortunately take some time. Will that be OK for you?

Sincerely, Bumsoo Kim

wishforgood commented 6 years ago

Thanks very much, I really appreciate it! It's OK, I can wait. I will also git clone and run it again to make sure I'm following the configuration.

wishforgood commented 6 years ago

I have run the default configuration again and confirmed that I can't reproduce the reported accuracy.

bmsookim commented 6 years ago

Hi, I've finally confirmed the result. I attached the log as a text file. The final result is 80.46% accuracy, which I think corresponds to the reported accuracy. May I see the log of your training and validation results? wide_resnet_log.txt

wishforgood commented 6 years ago

Sorry, I forgot to save my log, but I do see that up to epoch 121 everything is almost the same as yours. After that, the accuracy just stopped at 76%. I will have to run it again to show you my log. I will check more carefully to find where the problem is, so could you please wait a few days? Thank you very much for your log!

bmsookim commented 6 years ago

Of course! Take your time :) I would appreciate it a lot if you pointed out any inconsistencies or inconveniences in my code. I have a lot to learn about PyTorch, so any kind of recommendation will help me a lot.

Thanks.

wishforgood commented 6 years ago

log.txt

wishforgood commented 6 years ago

Hi, here is my training log. Is there any problem with it?

bmsookim commented 6 years ago

Hi, I've looked into the log, and it seems like you did everything right.

I'm currently going through all the code again, since it has been a while. I also have a Torch version of this code, so I will look into everything that might have gone wrong and will hopefully give you an answer.

Thanks for your patience :)

wishforgood commented 6 years ago

Hi, have you found any possible cause of this bug?

bmsookim commented 6 years ago

Hi, I found out that the problem might be caused by the Dropout function. I'm currently looking into it; it seems to show unusual fluctuations compared to the same code in Torch.

bmsookim commented 6 years ago

Hi, sorry for the late response.

As a matter of fact, I found that 'Dropout' was unable to reproduce the performance I had obtained in Torch.

I'm figuring this out, and will let you know as soon as I update the code.

Sorry for all the trouble. Thank you very much.

Sincerely, Bumsoo Kim


wishforgood commented 6 years ago

It's OK, take your time.

wronnyhuang commented 6 years ago

Hi Bumsoo, we're trying to get a pretrained CIFAR-100 net to use for our research. Would you be willing to upload your parameters to GitHub? Thanks!

morawi commented 6 years ago

@wishforgood Maybe you need to reduce the training batch size (in the config file) from 128 to something smaller, like 32, if your GPU memory is much lower than the Titan X's.
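
A sketch of that workaround, assuming a standard torchvision DataLoader setup (the repo reads the batch size from its config file, so the exact key name there may differ):

```python
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Shrink the batch size (128 -> 32) if GPU memory is tight,
# e.g. on a K40c rather than a Titan X.
trainset = torchvision.datasets.CIFAR100(
    root='./data', train=True, download=True,
    transform=transforms.ToTensor())
train_loader = DataLoader(trainset, batch_size=32,
                          shuffle=True, num_workers=2)
```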

bmsookim commented 6 years ago

@wronnyhuang Will do shortly :( Sorry, everyone, that it's taking such a long time.

fartashf commented 6 years ago

The problem is probably that self.dropout is used the same way for both train and eval. Typically, people call F.dropout in the forward function and pass self.training as an argument.

I was able to reproduce the results using this code, which uses F.dropout: https://github.com/xternalz/WideResNet-pytorch/blob/master/wideresnet.py.
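
To illustrate the suggested pattern, here is a minimal wide-resnet basic block using F.dropout with self.training, in the style of the linked xternalz code; the class and layer names are illustrative, not this repo's exact code:

```python
import torch.nn as nn
import torch.nn.functional as F

class WideBasic(nn.Module):
    """Illustrative wide-resnet basic block with the F.dropout pattern."""

    def __init__(self, in_planes, planes, dropout_rate, stride=1):
        super().__init__()
        self.dropout_rate = dropout_rate
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               padding=1, bias=True)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, kernel_size=1,
                          stride=stride, bias=True))

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        # Pass self.training explicitly: dropout is active during training
        # and becomes a no-op under model.eval().
        out = F.dropout(out, p=self.dropout_rate, training=self.training)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + self.shortcut(x)
```

With this pattern, dropout is switched off automatically when the model is put in eval mode, which is the train/eval distinction the comment above describes.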

hgjung3 commented 6 years ago

@fartashf I agree with you. After modifying the code related to Dropout, I got 80% top-1 accuracy.