(Sorry for the delayed reply. Somehow the GitHub notifications for this repo are still broken on my side.)
Yes, this is correct: scipy.ndimage.distance_transform_edt. However, note that it is used at the dataloader level, to avoid this issue altogether: https://github.com/LIVIAETS/boundary-loss/blob/master/dataloader.py#L105
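To make this concrete, here is a minimal sketch of the pre-processing step: a signed distance map is computed per class from the one-hot ground-truth labels, once, on the CPU, before the batch ever reaches the GPU. This follows the spirit of `one_hot2dist` in the repo's `utils.py`; treat it as illustrative rather than the repo's exact code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def one_hot2dist(seg: np.ndarray) -> np.ndarray:
    # seg: one-hot label array of shape (K, H, W), one channel per class.
    K = seg.shape[0]
    res = np.zeros(seg.shape, dtype=np.float64)
    for k in range(K):
        posmask = seg[k].astype(bool)
        if posmask.any():
            negmask = ~posmask
            # Signed distance to the class boundary: positive outside the
            # object, negative inside, zero-crossing on the boundary itself.
            res[k] = (distance_transform_edt(negmask) * negmask
                      - (distance_transform_edt(posmask) - 1) * posmask)
    return res
```

Since this runs on plain NumPy labels inside the dataset's `__getitem__`, no gradient ever needs to flow through the distance transform.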
The implementation is quite efficient on CPU, so this does not slow down the training at all, and the distance maps arrive ready to be multiplied in the boundary loss: https://github.com/LIVIAETS/boundary-loss/blob/master/losses.py#L76
Notice that in practice, the only "difficult" part is the pre-processing and distance transform (for Keras it had to be designed slightly differently: https://github.com/LIVIAETS/boundary-loss/blob/master/keras_loss.py), and the element-wise multiplication of the softmax probabilities and the distance map is trivial in itself, as the sketch below shows.
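For reference, the loss itself reduces to an element-wise product followed by a mean. A minimal PyTorch sketch (the repo's `SurfaceLoss` in `losses.py` is the authoritative version; it additionally restricts the computation to a subset of classes via an `idc` parameter):

```python
import torch

def boundary_loss(probs: torch.Tensor, dist_maps: torch.Tensor) -> torch.Tensor:
    # probs:     softmax output of the network, shape (B, K, H, W)
    # dist_maps: pre-computed signed distance maps from the dataloader,
    #            same shape, a plain float tensor (no gradient required)
    return torch.einsum("bkhw,bkhw->bkhw", probs, dist_maps).mean()

# Typical use inside the training loop:
# logits = net(images)                                   # (B, K, H, W)
# loss = boundary_loss(torch.softmax(logits, dim=1),
#                      dist_maps.to(logits.device))
# loss.backward()  # gradients flow only through the softmax probabilities
```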
Hi there, I'm currently working with code that uses scipy.ndimage.distance_transform_edt, which needs to run on the CPU. However, I've encountered an issue where transferring data from the GPU to the CPU and then back to the GPU seems to prevent the parameters from being updated. I'm curious how you handle this situation in your codebase. Thank you!