khawar-islam closed this issue 7 months ago
Thank you for reaching out and sorry for the delayed reply.
IPMix employs JS-divergence to enhance model performance. For additional insights, I suggest consulting AugMix[1].
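For context, the clamp the question refers to appears in the Jensen–Shannon consistency loss popularized by AugMix [1]: the mixture distribution is clamped before taking its log so that near-zero probabilities cannot blow up the KL terms. A minimal sketch of that loss (function name and argument names are illustrative, not the repository's exact API):

```python
import torch
import torch.nn.functional as F

def jsd_consistency_loss(logits_clean, logits_aug1, logits_aug2):
    """Jensen-Shannon consistency loss over a clean view and two augmented views."""
    p_clean = F.softmax(logits_clean, dim=1)
    p_aug1 = F.softmax(logits_aug1, dim=1)
    p_aug2 = F.softmax(logits_aug2, dim=1)

    # Clamp mixture distribution to avoid exploding KL divergence:
    # without the clamp, log() of a near-zero mixture probability
    # produces huge (or infinite) KL values and unstable gradients.
    p_mixture = torch.clamp((p_clean + p_aug1 + p_aug2) / 3.0, 1e-7, 1.0).log()

    # F.kl_div expects log-probabilities as input and probabilities as target.
    return (F.kl_div(p_mixture, p_clean, reduction='batchmean') +
            F.kl_div(p_mixture, p_aug1, reduction='batchmean') +
            F.kl_div(p_mixture, p_aug2, reduction='batchmean')) / 3.0
```

When all three views agree, the loss is (numerically) zero; it grows as the three predicted distributions diverge, which is what pushes the model toward augmentation-invariant predictions.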
Indeed, by augmenting images with IPMix for model training, we are able to develop more robust models.
Best,
[1] Hendrycks, Dan, et al. "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty." arXiv:1912.02781 (2019).
Dear @hzlsaber
In the training loop, you have implemented KL divergence:
"# Clamp mixture distribution to avoid exploding KL divergence"
What is the reason behind this, and what is its benefit? In the PixMix paper, they didn't implement it; that's why I am asking, since there are few methods that utilize fractal images. Are you using both clean images and augmented images for training?
images_all = torch.cat(images, 0).cuda()
Usually, when we apply augmentation, we use the augmented data instead of the original training data.

Regards,
Khawar