Closed: Sirius083 closed this issue 5 years ago
I don't think these two will make much difference.
Let me know if this detail makes a significant difference in performance. If it does, I will rewrite the data pre-processing after NIPS.
These are the results I got with my TensorFlow implementation of SparseNet under the DenseNet preprocessing. I did not get the same accuracy as DenseNet either (about 1% lower); maybe this is due to optimization differences between TensorFlow and PyTorch. I will look into it further. Thank you very much.
> I don't think these two will make much difference.
Thanks for your reply. I finally found that it was because my TensorFlow implementation did not include the batch normalization parameters in the L2 weight decay, as the Tensorpack implementation does. After fixing that, the model gives better results.
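For anyone hitting the same issue, here is a minimal NumPy sketch of the difference: an L2 penalty computed over all parameters, including the batch-norm scale/shift (gamma/beta), versus over the convolution weights only. The parameter names and values below are hypothetical, purely for illustration, and not taken from either implementation.

```python
import numpy as np

# Hypothetical parameter dict; names and shapes are illustrative only.
params = {
    "conv1/kernel": np.ones((3, 3)),   # conv weight
    "bn1/gamma": np.full(3, 2.0),      # batch-norm scale
    "bn1/beta": np.full(3, 0.5),       # batch-norm shift
}

def l2_penalty(params, decay, include_bn):
    """Sum of decay * ||w||^2 over the selected parameters."""
    total = 0.0
    for name, w in params.items():
        if not include_bn and ("gamma" in name or "beta" in name):
            continue  # skip batch-norm parameters
        total += decay * float(np.sum(w ** 2))
    return total

# Including the BN parameters changes the regularization term,
# which can noticeably affect the final accuracy.
without_bn = l2_penalty(params, 1e-4, include_bn=False)
with_bn = l2_penalty(params, 1e-4, include_bn=True)
```

Whether BN parameters should be decayed is a training-recipe choice; the point here is only that the two recipes optimize different objectives, so results are not directly comparable unless they match.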
Can you post your PyTorch implementation of the data augmentation/training details? (Is it the same as in TensorFlow?) The data augmentation uses a per-channel mean, while the TensorFlow implementation uses a per-pixel mean. Thanks in advance.
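To make the distinction concrete, here is a small NumPy sketch of the two normalization schemes on a synthetic batch. The data is random and the shapes are illustrative; this is not either repository's preprocessing code.

```python
import numpy as np

# Tiny synthetic "dataset": 4 images, 2x2 pixels, 3 channels (NHWC layout).
rng = np.random.default_rng(0)
images = rng.uniform(0.0, 255.0, size=(4, 2, 2, 3))

# Per-channel mean: one scalar per channel, averaged over images and pixels.
channel_mean = images.mean(axis=(0, 1, 2))   # shape (3,)

# Per-pixel mean: one value per (row, col, channel), averaged over images only.
pixel_mean = images.mean(axis=0)             # shape (2, 2, 3)

per_channel = images - channel_mean          # broadcasts over H and W
per_pixel = images - pixel_mean              # broadcasts over the batch axis
```

Per-pixel mean subtraction (used by the original DenseNet CIFAR preprocessing) removes a different statistic than per-channel subtraction, so mixing the two between implementations is one plausible source of small accuracy gaps.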