Closed JadeXIN closed 4 years ago
Hi, sorry for the late reply! We also found that feeding normalized feature vectors to GNNs would lead to a minor loss in performance.
Where does the inf value happen?
Hi, no worries~ I just implemented the code of the paper "Neighborhood-Aware Attentional Representation for Multilingual Knowledge Graphs", which is based on the RDGCN code, on the OpenEA dataset. Since the word embeddings used are different, inf values occurred in the tf.exp operation. Changing the tensor dtype to float64 solved the issue. Thank you~
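For anyone hitting the same overflow: a minimal NumPy sketch (not the repository's code, values are made up) of why `tf.exp` blows up in float32 with large attention logits, and two ways around it. The float64 cast matches what's described above; the max-subtraction (log-sum-exp) trick is a standard alternative when the exp feeds a softmax-style normalization.

```python
import numpy as np

# Illustrative logits: unnormalized embeddings can easily produce
# dot products this large. float32 overflows past exp(~88.7).
logits = np.array([20.0, 50.0, 100.0], dtype=np.float32)
assert np.isinf(np.exp(logits)).any()          # exp(100) -> inf in float32

# Option 1: widen the dtype, as described above.
assert np.isfinite(np.exp(logits.astype(np.float64))).all()

# Option 2: subtract the max before exponentiating, so the largest
# exponent is 0 and exp() never overflows, regardless of dtype.
def stable_softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

probs = stable_softmax(logits)
assert np.isfinite(probs).all()
```

Option 2 keeps everything in float32 (roughly half the memory of the float64 fix) but only applies where the exp is followed by normalization.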
Hi,
I found that in the method get_pretrained_input(), the initial embedding of each entity is not normalized: the l2_normalize call from the original RDGCN code is commented out. Could you explain why you removed l2_normalize here? I tried adding l2_normalize back for the EN_DE dataset and found that performance decreased. But leaving it out also seems strange and easily causes inf values. Could you suggest a possible solution for this issue?
Best Regards
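For context on the trade-off being asked about, a small NumPy sketch (hypothetical, not the repository's code): l2-normalizing the pretrained embeddings bounds every pairwise dot product in [-1, 1], so a downstream exp() can never overflow, but it also discards the vector magnitudes, which may explain the performance drop.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(scale=10.0, size=(5, 300))    # unnormalized embeddings

def l2_normalize(x, axis=-1, eps=1e-12):
    # Divide each row by its L2 norm (guarding against zero vectors),
    # analogous to tf.nn.l2_normalize in the original RDGCN code.
    norm = np.linalg.norm(x, axis=axis, keepdims=True)
    return x / np.maximum(norm, eps)

emb_n = l2_normalize(emb)
# After normalization, similarity scores are cosine-bounded,
# so exp(scores) stays finite even in float32.
scores = emb_n @ emb_n.T
assert np.all(np.abs(scores) <= 1.0 + 1e-6)
# Without normalization, scores scale with the (large) norms instead.
assert np.abs(emb @ emb.T).max() > 1.0
```

So the choice is between numerical safety (normalize, or cast to float64 as discussed above) and preserving the magnitude information the model may be exploiting.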