Hi Fabian, thank you for your interest. During training we used crops of (224, 224) and batch size 3, and for fine-tuning no crops and batch size 1. Make sure you use the entire memory of your GPU, and if it still doesn't work, try the biggest crop that fits. If you don't get the same results, I will try to figure out whether we forgot something from the original code.
Regards, Simon
On 12 Dec 2016 at 18:09, "FabianIsensee" notifications@github.com wrote:
Hi, first and foremost thank you very much for sharing your code! It is very insightful and I learned a lot from it. Unfortunately, when I try to reproduce the results of your paper I run into out-of-memory issues while fine-tuning on the whole images (FC-DenseNet103, batch size 3, input dimension (3, 3, 360, 480)). My theano.config.floatX is float32 (you did not mention using float16 and your code also suggests float32). I am using cuDNN 5105 along with CUDA 8.0. My GPU is a Pascal Titan X (12 GB VRAM). CNMeM is disabled. With these settings I cannot even train the network with a batch size of 2. Did you use a GPU with more VRAM or a specific Theano configuration? Thank you very much! Cheers, Fabian
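For readers less familiar with Theano, the configuration described in the question would typically be expressed in ~/.theanorc roughly as follows. This is only a sketch of the settings named above, not something taken from this repository, and the device name depends on which GPU backend is in use:

```ini
[global]
# Single precision, as the code assumes (no float16)
floatX = float32
# Old CUDA backend device name; with the newer gpuarray backend this would be cuda0
device = gpu0

[lib]
# 0 disables the CNMeM memory pool
cnmem = 0
```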
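As an illustration of the two-stage schedule Simon suggests above (random 224x224 crops with batch size 3 for training, whole 360x480 CamVid frames with batch size 1 for fine-tuning), here is a minimal numpy sketch. The array shapes and helper names are assumptions for illustration, not the repository's code:

```python
import numpy as np

# Assumed data layout: images (N, 3, 360, 480) float32, labels (N, 360, 480) int32.

def random_crop_batch(images, labels, crop=224, batch_size=3):
    """Training phase: sample a batch of random crop x crop patches."""
    n, _, h, w = images.shape
    idx = np.random.choice(n, batch_size, replace=False)
    y0 = np.random.randint(0, h - crop + 1)
    x0 = np.random.randint(0, w - crop + 1)
    return (images[idx, :, y0:y0 + crop, x0:x0 + crop],
            labels[idx, y0:y0 + crop, x0:x0 + crop])

def full_image_batches(images, labels):
    """Fine-tuning phase: yield whole frames one at a time (batch size 1)."""
    for i in range(images.shape[0]):
        yield images[i:i + 1], labels[i:i + 1]
```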
Hi Simon, thank you very much for your quick reply! I was not aware that you lowered the batch size to 1 when fine-tuning on whole images. That does the trick! One question though (if I may): what is the reasoning behind using batch_norm_update_averages=False and batch_norm_use_averages=False instead of relying on the exponential moving average of the BatchNormLayer? Cheers, Fabian
Hi Fabian, we observed that the network performed worse with moving-average statistics than with batch statistics at test time. That's why we disabled batch_norm_update_averages during training and batch_norm_use_averages at test time.
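For context, the two flags discussed here are keyword arguments that Lasagne's get_output forwards to every BatchNormLayer in the network. The following is a minimal sketch of the mechanism (using a tiny stand-in network rather than the repository's FC-DenseNet), not the authors' exact code:

```python
import theano.tensor as T
import lasagne

input_var = T.tensor4('input')

# Tiny stand-in network; in the real code this would be the full FC-DenseNet.
net = lasagne.layers.InputLayer((None, 3, 224, 224), input_var)
net = lasagne.layers.batch_norm(
    lasagne.layers.Conv2DLayer(net, num_filters=16, filter_size=3, pad='same'))

# Training expression: batch statistics are used (deterministic=False),
# but the exponential moving averages are never updated.
train_out = lasagne.layers.get_output(
    net, deterministic=False, batch_norm_update_averages=False)

# Test expression: also force batch statistics, instead of the stored moving
# averages that deterministic=True would normally select.
test_out = lasagne.layers.get_output(
    net, deterministic=True, batch_norm_use_averages=False)
```

With this setup the running averages stored in each BatchNormLayer are neither updated during training nor consulted at test time, which is exactly the behaviour Simon describes.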
Hi Simon, thank you very much. I was just wondering whether this decision was based on theory or empirical experience. Keep up the great work!