ntelo007 closed this issue 3 years ago
@ntelo007 try this repository where I converted this code to work with Keras/Tensorflow
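For reference, here is a minimal sketch of what such a Keras/TensorFlow conversion typically looks like, following the soft-skeletonization recipe from the clDice paper (iterated min/max pooling as soft erosion/dilation). Function names and the iteration count are my own choices, not necessarily those used in the repository:

```python
import tensorflow as tf

def soft_erode(img):
    # soft morphological erosion: min-pooling implemented as a negated max-pool
    return -tf.nn.max_pool2d(-img, ksize=3, strides=1, padding="SAME")

def soft_dilate(img):
    # soft morphological dilation via max-pooling
    return tf.nn.max_pool2d(img, ksize=3, strides=1, padding="SAME")

def soft_open(img):
    # morphological opening = erosion followed by dilation
    return soft_dilate(soft_erode(img))

def soft_skeletonize(img, iters=10):
    # iteratively peel the mask and accumulate what opening removes;
    # iters should exceed the maximum structure half-width in pixels
    skel = tf.nn.relu(img - soft_open(img))
    for _ in range(iters):
        img = soft_erode(img)
        skel = skel + tf.nn.relu(img - soft_open(img)) * tf.nn.relu(1.0 - skel)
    return skel

def soft_cldice_loss(y_true, y_pred, iters=10, smooth=1.0):
    # topology-aware loss: precision/sensitivity measured on soft skeletons
    skel_pred = soft_skeletonize(y_pred, iters)
    skel_true = soft_skeletonize(y_true, iters)
    tprec = (tf.reduce_sum(skel_pred * y_true) + smooth) / (tf.reduce_sum(skel_pred) + smooth)
    tsens = (tf.reduce_sum(skel_true * y_pred) + smooth) / (tf.reduce_sum(skel_true) + smooth)
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)
```

Inputs are assumed to be NHWC tensors with values in [0, 1]; in practice clDice is usually blended with a plain dice term rather than used alone.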
Thank you. Have you tested this loss function? Did you get results comparable to the paper's? Does it work?
In an experiment with noisy labels (the labels came from something like a Frangi filter plus manual filtering), I saw a sensible gain. On a clean dataset, using dice+clDice gives a minor performance gain, mostly expressed as variance reduction. Please share your experience.
Unfortunately, it is probably not working. I tried training a road detection algorithm and these are the training results, both for clDice alone and for its combination with dice:
Only with the clDice loss:
clDice + dice:
Any ideas why this is happening?
It looks like the whole training procedure is wrong. First try to overfit your model on a small dataset (it can be a few images, but not just one if you use batch norm). If the model is able to memorize the labels of the small dataset, then move on to checking the validation procedure.
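This overfitting sanity check can be sketched as follows; the model and data here are placeholders (a hypothetical tiny conv net on synthetic masks), the point being only that the loss must approach zero on a handful of memorizable samples:

```python
import numpy as np
from tensorflow import keras

# Hypothetical tiny segmentation model; no batch norm, so even a
# very small batch of samples is a valid overfitting target.
def tiny_segmenter(input_shape=(32, 32, 1)):
    inp = keras.Input(shape=input_shape)
    x = keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inp)
    x = keras.layers.Conv2D(8, 3, padding="same", activation="relu")(x)
    out = keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    return keras.Model(inp, out)

# A few synthetic image/mask pairs standing in for real training data.
x = np.random.rand(4, 32, 32, 1).astype("float32")
y = (x > 0.5).astype("float32")

model = tiny_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy")
hist = model.fit(x, y, epochs=100, verbose=0)
# If the loss does not drop substantially here, the training loop itself
# (not the loss function) is the thing to debug.
```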
The training procedure works well for many other loss functions. This means the problem occurs only when I use the soft_clDice_loss function. This is the prediction result:
It seems the soft_skeletonization procedure is not correct, because I don't think the math behind the comparison is the problem.
@ntelo007 have you visually inspected the training images after soft_skeletonization? What do the loss curves look like when you train with dice only?
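A quick, framework-free way to do that inspection is to run the same soft-skeletonization recipe in NumPy/SciPy on a ground-truth mask and look at (or plot) the result. This is my own standalone re-implementation for debugging, not code from the repository:

```python
import numpy as np
from scipy import ndimage

def soft_skeleton_np(img, iters=10):
    # NumPy analogue of soft skeletonization (iterated erosion + opening),
    # useful for checking what the loss actually "sees" on your masks
    def erode(x):
        return ndimage.grey_erosion(x, size=(3, 3))

    def opening(x):
        return ndimage.grey_dilation(erode(x), size=(3, 3))

    skel = np.maximum(img - opening(img), 0.0)
    for _ in range(iters):
        img = erode(img)
        skel = skel + np.maximum(img - opening(img), 0.0) * np.maximum(1.0 - skel, 0.0)
    return skel

# Example: a thick bar should reduce to a thin centerline.
mask = np.zeros((32, 32), dtype=np.float32)
mask[10:16, :] = 1.0  # 6-pixel-thick horizontal bar
skel = soft_skeleton_np(mask, iters=5)
# If skel is all zeros or nearly equal to mask, the skeletonization
# (or its iteration count) is wrong for your structure widths.
```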
in my experiments, clDice biases predictions toward thickness rather than thinness
Hi,
could you help me create a Keras or TensorFlow version of this loss function?