chl8856 / DeepHit

DeepHit: A Deep Learning Approach to Survival Analysis with Competing Risks

LOSS-FUNCTION 1 -- Log-likelihood loss #9

Open pfarisel opened 9 months ago

pfarisel commented 9 months ago

Hi, it appears to me that in the loss you compute `a*x + (1-a)*x = x`, so there is no difference between censored and non-censored instances. In any case, this does not implement the loss indicated in the paper.

Anivader commented 8 months ago

Hi @pfarisel, the DeepHit loss is defined as `L = a*L1 + (1-a)*L2`. I'm not sure how you are replacing the terms with "x".

pfarisel commented 7 months ago

Hi, what is not clear to me in `def loss_Log_Likelihood(self)` is that `tmp1` is equal to `tmp2`: you zero out some elements in `tmp1` and the complementary elements in `tmp2`, then add them, obtaining back the original `tmp1` (or `tmp2`). In my opinion, since `tmp1 == tmp2` before masking:

`tmp1 + 1.0*tmp2 == I_1 * log(tmp1) + (1. - I_1) * log(tmp1) = log(tmp1)`

What am I missing?
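A quick numeric check of the cancellation I mean (numpy stand-ins for the TF tensors; `k`, `masked_sum`, and the shapes are made up purely for illustration):

```python
import numpy as np

np.random.seed(0)

# hypothetical stand-ins: k is the event label (0 = censored); masked_sum
# plays the role of reduce_sum(fc_mask1 * out), the SAME tensor feeding both terms
k = np.array([[0.], [1.], [2.], [0.]])   # 4 subjects, two of them censored
masked_sum = np.random.rand(4, 1) + 0.1  # strictly positive so log() is defined

I_1 = np.sign(k)                         # 1 for events, 0 for censored
tmp1 = I_1 * np.log(masked_sum)          # "uncensored" term
tmp2 = (1. - I_1) * np.log(masked_sum)   # "censored" term: same tensor, complementary mask

# the indicators are complementary, so the censoring information drops out entirely
assert np.allclose(tmp1 + tmp2, np.log(masked_sum))
print("tmp1 + tmp2 == log(masked_sum) for every subject")
```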

The code in question:

```python
I_1 = tf.sign(self.k)

tmp1 = tf.reduce_sum(tf.reduce_sum(self.fc_mask1 * self.out, reduction_indices=2),
                     reduction_indices=1, keep_dims=True)
tmp1 = I_1 * log(tmp1)

# for censored: log \sum P(T>t|x)
tmp2 = tf.reduce_sum(tf.reduce_sum(self.fc_mask1 * self.out, reduction_indices=2),
                     reduction_indices=1, keep_dims=True)  # this is equal to tmp1 before I_1
tmp2 = (1. - I_1) * log(tmp2)

self.LOSS_1 = - tf.reduce_mean(tmp1 + 1.0*tmp2)  # = I_1 * log(tmp1) + (1. - I_1) * log(tmp1)
```
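For comparison, here is a minimal numpy sketch of the log-likelihood I would expect from the paper, where the censored term uses a different mask (the probability mass strictly after the censoring time) instead of the same masked sum as `tmp1`. All names, shapes, and the mask construction below are my own illustration of the idea, not the repository's actual code:

```python
import numpy as np

np.random.seed(0)
n, num_causes, num_times = 4, 2, 5

# hypothetical stand-in: out is a joint PMF over (cause, time) per subject
out = np.random.dirichlet(np.ones(num_causes * num_times), size=n)
out = out.reshape(n, num_causes, num_times)
k = np.array([0, 1, 2, 0])  # cause label; 0 = censored
t = np.array([2, 1, 3, 1])  # event / censoring time index
I_1 = np.sign(k).astype(float).reshape(n, 1)

# mask1 picks P(T = t_i, K = k_i | x) for uncensored subjects
mask1 = np.zeros_like(out)
for i in range(n):
    if k[i] > 0:
        mask1[i, k[i] - 1, t[i]] = 1.0

# mask2 picks sum_{s > t_i} P(s | x): all causes, times strictly after t_i
mask2 = np.zeros_like(out)
for i in range(n):
    mask2[i, :, t[i] + 1:] = 1.0

pmf_at_event = (mask1 * out).sum(axis=(1, 2)).reshape(n, 1)
surv_after_t = (mask2 * out).sum(axis=(1, 2)).reshape(n, 1)

eps = 1e-8  # guard against log(0); the indicators zero out the irrelevant rows
tmp1 = I_1 * np.log(pmf_at_event + eps)         # uncensored: log P(T = t, K = k | x)
tmp2 = (1. - I_1) * np.log(surv_after_t + eps)  # censored: log \sum_{s > t} P(s | x)
LOSS_1 = -np.mean(tmp1 + tmp2)
print(LOSS_1)
```

With two different masks the two terms no longer cancel, which is what I believe the paper's log-likelihood intends.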