Open · TortoiseHam opened this issue 3 years ago
What would happen if we use smoothing as a data augmentation technique during training?

Hi, thanks for the interesting research / technique. I was trying to reproduce it, ran into a potential issue, and was wondering if you could help shed some light on whether this is expected behavior.

Looking at your training code, it seems that no data augmentation is applied in the input pipeline (apart from a Normalize). When I do the same, I see large accuracy gains from CBS on ciFAIR100 (blue vs. orange below). However, the cross-entropy loss curve suggests the training is severely overfitting. When I add a simple 50% horizontal-flip augmentation during training, the gains offered by CBS virtually disappear (green vs. brown), and when I add slightly more augmentation (expand & crop + cutout), CBS actually performs worse than normal training (purple vs. orange). Any thoughts on whether this is the expected behavior, or why it might be happening?

Thank you for your interest. Do you mind following up with me via email, since I don't get GitHub notifications: samarth.sinha@mail.utoronto.ca? I will run the experiments myself as well, try to replicate your findings, and update and close this issue when needed.
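For reference, the three training-time input pipelines described in the original post could look roughly like this. This is a minimal sketch assuming torchvision and 32×32 ciFAIR100 images; the normalization statistics and crop padding are placeholder values, "expand & crop" is interpreted here as pad-then-random-crop, and cutout is approximated with torchvision's RandomErasing rather than a dedicated cutout implementation.

```python
import torchvision.transforms as T

# Placeholder normalization statistics (CIFAR-100-style values).
MEAN = (0.5071, 0.4866, 0.4409)
STD = (0.2673, 0.2564, 0.2762)

# 1) No augmentation: mirrors the repo's training code (Normalize only).
no_aug = T.Compose([
    T.ToTensor(),
    T.Normalize(MEAN, STD),
])

# 2) Simple augmentation: 50% horizontal flip.
flip_aug = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
    T.Normalize(MEAN, STD),
])

# 3) Slightly stronger augmentation: "expand & crop" as pad-then-random-crop,
#    with RandomErasing standing in for cutout (applied after ToTensor).
#    Keeping the horizontal flip here is an assumption.
strong_aug = T.Compose([
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
    T.Normalize(MEAN, STD),
    T.RandomErasing(p=0.5, scale=(0.02, 0.25), value=0),
])
```

In each comparison, only the transform passed to the training dataset would change; the test-time transform would stay at ToTensor + Normalize.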