LIVIAETS / boundary-loss

Official code for "Boundary loss for highly unbalanced segmentation", runner-up for best paper award at MIDL 2019. Extended version in MedIA, volume 67, January 2021.
https://doi.org/10.1016/j.media.2020.101851
MIT License

Is it possible to train the Boundary Loss code on a GPU? #59

Closed: gita0326 closed this issue 1 year ago

gita0326 commented 1 year ago

Hi there, I'm currently working with code that uses scipy.ndimage.distance_transform_edt, which needs to run on the CPU. However, I've encountered an issue where transferring data from the GPU to the CPU and back to the GPU seems to prevent the parameters from being updated. I'm curious how you handle this situation in your codebase. Thank you!

HKervadec commented 1 year ago

(Sorry for the delayed reply. Somehow the GitHub notifications for this repo are still broken on my side.)

Yes, that is correct: scipy.ndimage.distance_transform_edt runs on the CPU. However, note that it is used at the dataloader level, which avoids this issue altogether: https://github.com/LIVIAETS/boundary-loss/blob/master/dataloader.py#L105
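
Roughly, that precomputation looks like this (a simplified sketch of the idea, not the exact helper from the repo; the signed convention, negative inside the object and positive outside, follows the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def one_hot2dist(one_hot: np.ndarray) -> np.ndarray:
    # one_hot: (K, H, W) binary mask per class, produced by the dataloader.
    # Returns one signed distance map per class: negative inside the object,
    # positive outside, and all zeros if the class is absent from the image.
    dist = np.zeros_like(one_hot, dtype=np.float32)
    for k in range(one_hot.shape[0]):
        posmask = one_hot[k].astype(bool)
        if posmask.any():
            negmask = ~posmask
            dist[k] = (distance_transform_edt(negmask) * negmask
                       - (distance_transform_edt(posmask) - 1) * posmask)
    return dist
```

Since this runs inside the dataloader workers (plain NumPy on CPU), nothing has to leave the GPU during training.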

Since the implementation is quite efficient on CPU, this does not slow down training at all, and the distance maps arrive ready to be multiplied in the boundary loss: https://github.com/LIVIAETS/boundary-loss/blob/master/losses.py#L76
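
The GPU side then reduces to a handful of lines (again a minimal sketch, assuming the distance maps are shipped with the batch as a regular tensor; the actual loss in losses.py also selects which classes to include):

```python
import torch
from torch import Tensor

class BoundarySketchLoss(torch.nn.Module):
    def forward(self, probs: Tensor, dist_maps: Tensor) -> Tensor:
        # probs:     (B, K, H, W) softmax probabilities, living on the GPU.
        # dist_maps: (B, K, H, W) signed distance maps precomputed on CPU and
        #            moved to the GPU with the rest of the batch; they carry no
        #            gradient, so autograd only flows through `probs`.
        return (probs * dist_maps).mean()
```

Because the distance maps are plain data, moving them to the GPU with `.to(device)` does not break the computational graph of the network outputs, which is why the parameters keep updating.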

Notice that in practice, the only "difficult" part is the pre-processing and the distance transform (for Keras it had to be designed slightly differently: https://github.com/LIVIAETS/boundary-loss/blob/master/keras_loss.py), while the element-wise multiplication of the softmax probabilities and the distance map is trivial in itself.