xiangwang1223 / knowledge_graph_attention_network

KGAT: Knowledge Graph Attention Network for Recommendation, KDD2019

The use of func (model.update_attentive_A) may be wrong in your code. #30

Open chengaojie0011 opened 4 years ago

chengaojie0011 commented 4 years ago

Thank you for offering the code of the KGAT paper; it has helped me a lot. While running the code, I found that the function (model.update_attentive_A) uses self.A_in to update the matrix A. But as far as I know, TensorFlow cannot change the value of a variable in the static graph this way. I found experimentally that no matter what value self.A_in is assigned, the result of the model does not change. If so, KGAT does not use dynamic weights to implement GAT. Looking forward to your reply.
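For concreteness, a minimal toy sketch of the pitfall being described (a made-up example, not KGAT's code; assumes TF 1.x): a scipy matrix converted into the graph at build time is frozen there, so reassigning the Python attribute afterwards has no effect.

```python
import numpy as np
import scipy.sparse as sp
import tensorflow as tf  # TF 1.x

class Toy(object):
    def __init__(self, A_in):
        self.A_in = A_in
        # The scipy matrix is baked into the graph as a SparseTensor here,
        # so the graph keeps this snapshot of A_in forever.
        coo = self.A_in.tocoo().astype(np.float32)
        indices = np.stack([coo.row, coo.col], axis=1).astype(np.int64)
        A_tensor = tf.SparseTensor(indices, coo.data, coo.shape)
        self.total = tf.sparse_reduce_sum(A_tensor)

model = Toy(sp.eye(4))
with tf.Session() as sess:
    print(sess.run(model.total))        # 4.0
    model.A_in = sp.coo_matrix((4, 4))  # reassign the Python attribute
    print(sess.run(model.total))        # still 4.0: the graph never sees it
```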

johnnyjana730 commented 4 years ago

I am also interested in this question. Have you tried randomly assigning self.A_in and then calling _create_xxx_embed() again to regenerate ua_embeddings and ea_embeddings?

xiangwang1223 commented 4 years ago

Sorry for the late reply after the busy weeks.

  • Please distinguish self.A_values, which is set as a placeholder, from self.A_in, which receives the values computed from self.A_values (see the toy sketch below).
  • I have tested the code and show the values of self.A_in (only the first 20 values, for limited space) in the following picture. As the figure shows, self.A_in is updated as the epochs increase. [screenshot: the first 20 values of self.A_in across training epochs]

Hope this is helpful for you @chengaojie0011 @johnnyjana730.
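For concreteness, a toy sketch of the placeholder mechanism described in the first bullet (a made-up example, not the repo's code; assumes TF 1.x): raw scores are fed through the A_values placeholder, normalized with a sparse softmax, and the result is stored back as A_in.

```python
import numpy as np
import scipy.sparse as sp
import tensorflow as tf  # TF 1.x

n = 4
# (head, tail) pairs of a toy graph; indices are in row-major order.
indices = np.array([[0, 1], [0, 2], [1, 2], [2, 3]], dtype=np.int64)

# A_values plays the role of the placeholder for raw attention scores.
A_values = tf.placeholder(tf.float32, shape=[len(indices)], name='A_values')
A = tf.SparseTensor(indices, A_values, dense_shape=[n, n])
A_out = tf.sparse_softmax(A)  # row-wise softmax over each node's neighbors

with tf.Session() as sess:
    kg_score = np.random.rand(len(indices)).astype(np.float32)
    new_A = sess.run(A_out, feed_dict={A_values: kg_score})
    rows, cols = new_A.indices[:, 0], new_A.indices[:, 1]
    A_in = sp.coo_matrix((new_A.values, (rows, cols)), shape=(n, n))
    print(A_in.toarray())  # the normalized attention matrix for this epoch
```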

chengaojie0011 commented 4 years ago


Many thanks for your kind reply. The value of self.A_in is indeed updated inside class KGAT(object), but the update never reaches the model, because TensorFlow builds a static graph before the model runs. I'm not sure if I've explained this problem clearly. If you replace self.A_in with any value, you may find that the model still works and that its results do not change.

xiangwang1223 commented 4 years ago

I have tested three variants, setting self.A_in's values to all zeros, all ones, and random values (a hypothetical sketch of this setup is at the end of this comment). The training performance is shown as follows:

[screenshots: training logs of the three variants]

Some observations:

  1. The results of the three variants w.r.t. the four evaluation metrics are different, which verifies that model.update_attentive_A works.
  2. However, the differences are not that significant. We attribute this to two possible reasons: (1) the attention values are updated iteratively, rather than jointly with the other parameters; (2) simply applying attention networks is insufficient to model the relational information. We have ongoing work addressing these issues and will release the code when it is finished.

Hope this is helpful. Thanks for your insightful comments.
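For reference, a hypothetical sketch of the variant setup mentioned above (make_variant is a made-up helper, not from the repo): the initial adjacency values are replaced before the model is built.

```python
import numpy as np
import scipy.sparse as sp

def make_variant(A_in, mode):
    """Return a copy of A_in with its nonzero values replaced."""
    A = A_in.tocoo(copy=True)
    if mode == 'zeros':
        A.data = np.zeros_like(A.data)
    elif mode == 'ones':
        A.data = np.ones_like(A.data)
    elif mode == 'random':
        A.data = np.random.rand(len(A.data)).astype(A.data.dtype)
    return A.tocsr()

# e.g. config['A_in'] = make_variant(config['A_in'], 'random')
```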

ljch2018 commented 3 years ago

I think @chengaojie0011 is right. Because self.A_in is not a placeholder in the graph, the assignment does not take effect and never affects the network. https://github.com/xiangwang1223/knowledge_graph_attention_network/blob/c03737be46ac26a0b5431efe149828e982ffbbfb/Model/KGAT.py#L465-L466

If you set this self.A_in to all zeros or all ones, the performance of the model stays unchanged.
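For reference, a sketch of one way to make the update actually reach the graph, assuming TF 1.x (A_ph and sparse_feed are hypothetical names, not the repo's code): build the adjacency on a tf.sparse_placeholder instead of baking the scipy matrix in, and feed the current self.A_in at every sess.run.

```python
import numpy as np
import tensorflow as tf  # TF 1.x

# At graph-construction time: a placeholder, not a frozen constant.
A_ph = tf.sparse_placeholder(tf.float32, name='A_in')
# ... build the embedding propagation layers on A_ph instead of self.A_in ...

def sparse_feed(A_in):
    """Convert a scipy sparse matrix into a feed value for A_ph."""
    coo = A_in.tocoo()
    indices = np.stack([coo.row, coo.col], axis=1).astype(np.int64)
    return tf.SparseTensorValue(indices, coo.data.astype(np.float32), coo.shape)

# At training time, every step then sees the freshly updated matrix:
# sess.run(model.opt, feed_dict={A_ph: sparse_feed(model.A_in), ...})
```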