This repo contains the code for our paper "A novel focal Tversky loss function and improved Attention U-Net for lesion segmentation" accepted at IEEE ISBI 2019.

The original focal loss paper initializes the bias of the final sigmoid conv layer like so:
import numpy as np
import tensorflow as tf

# b = -log((1 - pi) / pi) with prior pi = 0.01, so the layer starts out
# predicting a foreground probability of ~0.01
init = tf.constant_initializer([-np.log(0.99 / 0.01)])
so that negative examples start training with almost no loss, while positive examples have very high loss. Did you experiment with this bias setting? I could not get good results with it when using the FTL.

Hi John, I initialize all weights to follow the glorot_normal distribution, but did not experiment with the original paper's init method. Is there a large difference in DSC when using the init versus not using it?
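
For reference, here is a minimal Keras sketch combining the two initializations discussed above: glorot_normal on the kernel (as in the reply) and the focal-loss bias prior on the output (as in the question). The input shape, layer placement, and variable names are illustrative assumptions, not taken from this repo.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

pi = 0.01  # foreground prior from the focal loss paper
# b = -log((1 - pi) / pi): at initialization the layer predicts p ~= pi
# everywhere, so background pixels contribute almost no loss and rare
# foreground pixels dominate the early gradient
bias_prior = tf.keras.initializers.Constant(-np.log((1 - pi) / pi))

inputs = layers.Input((128, 128, 32))  # hypothetical feature-map shape
outputs = layers.Conv2D(
    1, (1, 1),
    activation='sigmoid',
    kernel_initializer='glorot_normal',  # default scheme mentioned in the reply
    bias_initializer=bias_prior,         # bias setting from the question
)(inputs)
model = tf.keras.Model(inputs, outputs)

Whether this prior helps will depend on the loss: it was designed around the per-pixel cross-entropy term in focal loss, whereas the FTL is computed from overlap statistics over the whole prediction, which may explain the poor results reported above.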