BloodAxe / pytorch-toolbelt

PyTorch extensions for fast R&D prototyping and Kaggle farming
MIT License

Dice loss is smaller when computed on entire batch #99

Open — AlessandroGrassi1998 opened 9 months ago

🐛 Bug

I noticed that when I compute the Dice loss over an entire batch, the result is smaller than when I compute it separately for each sample and then average the per-sample losses. Is this behavior intended?
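The two values are indeed generally different: batch-wise Dice sums intersections and cardinalities across all samples before taking the ratio, while per-sample Dice takes each ratio first and then averages, and a ratio of sums is not the same as a mean of ratios. A minimal NumPy sketch of the discrepancy (the `dice_loss` helper here is a hypothetical soft-Dice implementation for illustration, not the `segmentation_models_pytorch` code itself):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss aggregated over all elements of the given arrays.
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((4, 1, 8, 8))                        # batch of 4 predicted masks
target = (rng.random((4, 1, 8, 8)) > 0.5).astype(float)  # binary ground-truth masks

# Dice over the whole batch at once: sums are pooled before the ratio.
batchwise = dice_loss(pred, target)

# Dice per sample, then averaged: each ratio is taken first.
samplewise = float(np.mean([dice_loss(p, t) for p, t in zip(pred, target)]))

print(batchwise, samplewise)  # the two values generally differ
```

Intuitively, batch-wise aggregation weights samples by their mask size, so a batch containing one large, well-predicted mask can drag the pooled loss below the plain average of per-sample losses.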

Expected behavior

Dice loss computed over the whole batch should be equivalent to the average of the per-sample Dice losses.

Environment

Using the Dice loss from segmentation_models_pytorch.