AnabetsyR opened this issue 3 years ago
Hi @AnabetsyR, thanks for the questions :smiley: There is a degree to which the training data is harder to classify (since the images are all mixed), but the accuracy metric is also different during training. That's also why it says `running_mixup_acc` even with FMix. You can see the metric (which should really be called `msda_acc` or similar) here in torchbearer: https://github.com/pytorchbearer/torchbearer/blob/master/torchbearer/callbacks/mixup.py#L23

It's the same as regular accuracy during validation / testing, but a bit different for training, since each mixed input has two partially correct classes.
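To make that concrete, here's a minimal sketch of what a mixed-sample ("MSDA") accuracy looks like. This is just an illustration of the idea, not the exact torchbearer code, and the name `msda_accuracy` is only for this example:

```python
import torch

def msda_accuracy(logits, y1, y2, lam):
    """Running accuracy under mixed-sample augmentation (illustrative sketch).

    Each training input mixes two images, so there are two partially
    correct targets: a match with y1 counts with weight lam, and a
    match with y2 counts with weight (1 - lam).
    """
    preds = logits.argmax(dim=1)
    acc1 = (preds == y1).float().mean()
    acc2 = (preds == y2).float().mean()
    return lam * acc1 + (1 - lam) * acc2
```

During validation / testing nothing is mixed (effectively `y1 == y2` and `lam = 1`), so this reduces to ordinary accuracy, which is part of why only the training number looks low.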
Hope that helps!
@ethanwharris Thank you so much for your great response.
Hi there! Thanks for sharing your work! The paper is very impressive!
I ran `cifar_experiment.sh` with the CIFAR data, a ResNet model, and FMix. However, I'm a little confused about two things:

1. Training accuracy is much lower than validation accuracy. Is this an artifact of using the masks, which makes the learning phase much harder? Am I missing something? I confess I'm not 100% sure I understand all the moving parts.
2. When it runs, it reports `running_mixup_acc`, etc. I had selected FMix, so it's confusing to see "mixup" here. Can you help me understand this?
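For context, my rough mental model of what a training step does is sketched below. The mask sampling here is just a placeholder (not the actual `fmix` API, which samples low-frequency masks in Fourier space), so please correct me if I have it wrong:

```python
import torch
import torch.nn.functional as F

def sample_mask_placeholder(h, w):
    # Stand-in for FMix's Fourier-space mask sampling; a real FMix
    # mask is a large contiguous binary region, not random noise.
    return (torch.rand(1, 1, h, w) > 0.5).float()

def mixed_training_step(model, x1, y1, x2, y2):
    mask = sample_mask_placeholder(x1.size(2), x1.size(3))
    lam = mask.mean()                  # fraction of pixels taken from x1
    x = mask * x1 + (1 - mask) * x2    # every training input is a mix
    logits = model(x)
    # Both labels are partially correct, weighted by the mask area.
    return lam * F.cross_entropy(logits, y1) + (1 - lam) * F.cross_entropy(logits, y2)
```

If that's roughly right, I can see why each training batch would be harder to fit than the clean validation images.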
Thank you in advance!