thunlp / JointNRE

Joint Neural Relation Extraction with Text and KGs
MIT License
187 stars 36 forks

About loss_kg #19

Closed zhengzhuoxun closed 2 years ago

zhengzhuoxun commented 2 years ago

Hi, thank you for your work.

I have a question about the loss value in KG representation learning, aka loss_kg. In the code, loss_kg is set to the difference between the score (`tf.reduce_sum(abs(pos_h_e + pos_r_e - pos_t_e), 1, keep_dims = True)`) of the positive and negative training data. Why is loss_kg set this way, instead of using only the score of the positive data?

In my understanding, some heads or tails in the triples of the negative training data have been randomly replaced with false entities, so this loss seems to make the KG representation less affected by noise. But won't it also make it less likely that head + relation ends up near the tail in the KG representation?
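For context, the negative triples described above are typically built by corrupting either the head or the tail of a positive triple with a random entity. A minimal sketch (the function name `corrupt_triple` and the toy entity IDs are illustrative, not the repo's actual code):

```python
import random

def corrupt_triple(h, r, t, entities, rng=random):
    # Build a negative triple by replacing either the head or the tail
    # with a randomly chosen entity different from the original one.
    if rng.random() < 0.5:
        return rng.choice([e for e in entities if e != h]), r, t
    return h, r, rng.choice([e for e in entities if e != t])

entities = ["Q1", "Q2", "Q3", "Q4"]
neg = corrupt_triple("Q1", "born_in", "Q2", entities)
```

Because the replaced entity is drawn from the remaining entities, the corrupted triple always differs from the original.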

Thank you very much

zhengzhuoxun commented 2 years ago

I think I have got it: loss_kg is used for KG completion (KGC), i.e. to make the score of positive triples smaller than that of negative ones by a certain margin. It is loss_kg_att that learns the KG representation.
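For reference, this is the standard TransE-style margin ranking objective: the L1 score ||h + r - t|| of a positive triple is pushed below that of its corrupted counterpart by a fixed margin. A minimal numpy sketch (the margin value and variable names are illustrative, not taken from the repo):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE plausibility score: L1 distance ||h + r - t||_1 per triple.
    # Lower score means a more plausible triple.
    return np.sum(np.abs(h + r - t), axis=1)

def margin_loss(pos_score, neg_score, margin=1.0):
    # Hinge loss: zero once the positive score is at least `margin`
    # below the negative score, so positives are only pushed until
    # the ranking constraint is satisfied.
    return np.maximum(pos_score - neg_score + margin, 0.0).mean()

# Toy embeddings, dimension 4, batch of 2 triples.
rng = np.random.default_rng(0)
pos_h, pos_r, pos_t = (rng.normal(size=(2, 4)) for _ in range(3))
# Negative triples: tails replaced by random entity embeddings.
neg_t = rng.normal(size=(2, 4))

loss = margin_loss(transe_score(pos_h, pos_r, pos_t),
                   transe_score(pos_h, pos_r, neg_t))
```

This also answers the earlier worry: minimizing the margin loss still drives head + relation toward the tail for positive triples, it just stops once they are sufficiently closer than for corrupted triples.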

THUCSTHanxu13 commented 2 years ago

loss_kg is used to train the entity and relation embeddings. These knowledge embeddings share part of their parameters with the word embeddings, so that KG and text information can be shared.
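The parameter-sharing idea can be illustrated with a single lookup table whose rows are indexed both as word embeddings and as entity embeddings; updating a row through either view changes the other. This is only a toy sketch of the concept, with hypothetical IDs and layout, not the actual JointNRE architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, dim = 10, 4
# One shared table: some rows act as word embeddings and also as
# entity embeddings (hypothetical layout for illustration).
shared_table = rng.normal(size=(vocab_size, dim))

word_ids = np.array([2, 5])
entity_ids = np.array([2, 7])  # entity 0 reuses row 2, same as a word

word_emb = shared_table[word_ids]
entity_emb = shared_table[entity_ids]

# A gradient step through either lookup modifies the same row, so
# KG training and text training exchange information through it.
```

In practice this means gradients from loss_kg flow into rows that the text encoder also reads, which is how the two sources are joined.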

zhengzhuoxun commented 2 years ago

Thank you for answering. :)