Shaoli-Huang / SnapMix

SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data (AAAI 2021)

What's the purpose of adding up the lam values when the two images have the same label during mixing? #3

Closed · mrxuehb closed this 3 years ago

mrxuehb commented 3 years ago

In the code, if image A is mixed with image B and the two have the same label, their lam values are summed, while the loss is still calculated separately for each. This is my understanding from the code, but I could not find anything about it in either the SnapMix paper or the CutMix implementation. Could you help me understand this code and why `same_label` should be treated like this?

`lam_a[same_label] += lam_b[same_label]`
`lam_b[same_label] += tmp[same_label]`

Many thanks.

Shaoli-Huang commented 3 years ago

Thanks for raising this interesting question. These few lines of code are mainly there to increase the weight of mixed images of the same category. There are two reasons for this. First, the label noise introduced by mixing same-class images is relatively low, so we can increase the weights of these samples to facilitate training, as we consider them cleaner than samples produced by mixing different-class images. Second, although the probability of mixing two same-class images may be small, intuitively such pairs can play a greater role in augmenting datasets with limited training data per class (such as the CUB dataset, where the number of training images per class is only about 32). However, in our experiments this step only provided a slight improvement, which is why we did not emphasize these points in the main paper.
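To make the effect concrete, here is a minimal sketch of the adjustment based on the two lines quoted in the question. The function name and the surrounding setup (boolean mask construction, `tmp` as a copy of the original `lam_a`) are illustrative assumptions, not the repo's exact code:

```python
import torch

def adjust_same_label_weights(lam_a, lam_b, target_a, target_b):
    """Boost the loss weights of mixed pairs whose two source images
    share the same class label (sketch; only the two += lines come
    from the question, the rest is illustrative)."""
    same_label = target_a == target_b       # boolean mask over the batch
    tmp = lam_a.clone()                     # keep the original lam_a values
    lam_a[same_label] += lam_b[same_label]  # becomes lam_a + lam_b
    lam_b[same_label] += tmp[same_label]    # becomes lam_b + original lam_a
    return lam_a, lam_b
```

Since `lam_a + lam_b == 1` for each pair before the adjustment, both weights of a same-class pair become 1 afterwards, so that pair's total contribution to a weighted loss such as `(ce(out, ta) * lam_a + ce(out, tb) * lam_b).mean()` is doubled, while different-class pairs are left untouched.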

mrxuehb commented 3 years ago

Thanks for your reply. It is quite reasonable and clear.