flyingdoog / PGExplainer

Parameterized Explainer for Graph Neural Network

The loss in the code is not the same as in the paper #8

Closed hcn323 closed 2 years ago

hcn323 commented 2 years ago

In the paper, the prediction loss is written as a cross-entropy with a sum over all classes, $\sum_{c=1}^{C}$. In the code, however, you just take the probability of the entry corresponding to the label and apply `-tf.math.log()` to get the prediction loss. I don't see the $\sum_{c=1}^{C}$ anywhere in the code.

flyingdoog commented 2 years ago

For the other values of c, the target is 0, so those terms drop out of the sum and the cross-entropy reduces to the negative log-probability of the label class. Please refer to the implementation of cross entropy.
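
A minimal sketch (hypothetical values, not the repository's code) of why the two forms agree when the target is a hard one-hot label: the full sum over classes and the `-log` of the label's probability give the same number.

```python
import tensorflow as tf

# Hypothetical predicted class probabilities for one node (4 classes).
probs = tf.constant([0.1, 0.7, 0.15, 0.05])
label = 1  # index of the label class

# Cross-entropy written as a sum over all classes c = 1..C,
# with a one-hot target: only the label's term is nonzero.
one_hot = tf.one_hot(label, depth=4)
full_sum = -tf.reduce_sum(one_hot * tf.math.log(probs))

# Shortcut used in the code: -log of the label class's probability.
shortcut = -tf.math.log(probs[label])

print(full_sum.numpy(), shortcut.numpy())  # both print the same value
```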