xiangwang1223 / knowledge_graph_attention_network

KGAT: Knowledge Graph Attention Network for Recommendation, KDD2019

Attention score calculation in Knowledge-aware Attention #43

Open kinglai opened 3 years ago

kinglai commented 3 years ago

In Knowledge-aware Attention, is the attention score fixed during one epoch of training?

I ask because the attentive Laplacian matrix is used for attention and is only updated in the KG training phase.
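For context, here is a toy NumPy sketch of Equations 4 and 5 as I read them (all variable names are mine, not from the repo):

```python
import numpy as np

def pi(e_h, e_r, e_t, W_r):
    """Eq. 4: pi(h, r, t) = (W_r e_t)^T tanh(W_r e_h + e_r)."""
    return (W_r @ e_t) @ np.tanh(W_r @ e_h + e_r)

rng = np.random.default_rng(0)
d = 4                                    # toy embedding size
W_r = rng.normal(size=(d, d))            # relation-specific projection
e_h, e_r = rng.normal(size=d), rng.normal(size=d)
tails = [rng.normal(size=d) for _ in range(3)]  # neighbors of h under r

# Eq. 5: softmax-normalize the scores over the neighborhood of h; these
# normalized values become the entries of the attentive Laplacian matrix.
raw = np.array([pi(e_h, e_r, e_t, W_r) for e_t in tails])
att = np.exp(raw - raw.max())
att /= att.sum()
print(att)
```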

Thanks.

kinglai commented 3 years ago

Considering only the CF part, is the released code equivalent to the NGCF model (Neural Graph Collaborative Filtering)?

RileyLee95 commented 2 years ago

It seems the attention matrix A is kept the same across all propagation layers (within each epoch). Shouldn't we compute the attention matrix for each layer separately, based on the node embeddings at that layer, according to Equations 4 and 5? @xiangwang1223
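To make the question concrete, a toy contrast of the two schemes (the aggregation rule and the score function are deliberately simplified stand-ins, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L = 4, 3, 2                     # toy graph: n nodes, d dims, L layers
E0 = rng.normal(size=(n, d))

def attn_from(emb):
    """Toy stand-in for Eqs. 4-5: pairwise scores, softmax-normalized per row."""
    raw = emb @ emb.T
    e = np.exp(raw - raw.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Scheme 1 (what the code appears to do): one A, computed once, reused at every layer.
A = attn_from(E0)
E = E0.copy()
for _ in range(L):
    E = np.tanh(A @ E)

# Scheme 2 (reading Eqs. 4-5 literally): recompute A from each layer's embeddings.
E = E0.copy()
for _ in range(L):
    E = np.tanh(attn_from(E) @ E)
```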

Cinderella1001 commented 1 year ago

> It seems the attention matrix A is kept the same across all propagation layers (within each epoch). Shouldn't we compute the attention matrix for each layer separately, based on the node embeddings at that layer, according to Equations 4 and 5? @xiangwang1223

I also have the same question. Moreover, the attention scores on the knowledge graph are very expensive to compute, because the process consumes too much memory. Can it be optimized with multiprocessing?
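One workaround that may help even without multiprocessing: compute the Eq. 4 scores for the triplets in fixed-size chunks, so the per-triplet projections never all live in memory at once. A sketch, where the array layout is my assumption rather than the repo's:

```python
import numpy as np

def scores_in_chunks(heads, rels, tails, ent_emb, rel_emb, W, chunk=100_000):
    """Eq. 4 scores for all KG triplets, computed chunk by chunk to bound
    peak memory. heads/rels/tails are index arrays of equal length."""
    out = np.empty(len(heads), dtype=np.float32)
    for s in range(0, len(heads), chunk):
        h, r, t = heads[s:s+chunk], rels[s:s+chunk], tails[s:s+chunk]
        Wr = W[r]                                        # (c, d, d) per triplet
        Wh = np.einsum('cij,cj->ci', Wr, ent_emb[h])     # W_r e_h
        Wt = np.einsum('cij,cj->ci', Wr, ent_emb[t])     # W_r e_t
        out[s:s+chunk] = np.einsum('ci,ci->c', Wt, np.tanh(Wh + rel_emb[r]))
    return out

# Tiny smoke test with random indices and embeddings.
rng = np.random.default_rng(0)
n_e, n_r, d, n_trip = 100, 5, 8, 1000
print(scores_in_chunks(rng.integers(0, n_e, n_trip), rng.integers(0, n_r, n_trip),
                       rng.integers(0, n_e, n_trip), rng.normal(size=(n_e, d)),
                       rng.normal(size=(n_r, d)), rng.normal(size=(n_r, d, d)),
                       chunk=128).shape)
```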

Cinderella1001 commented 1 year ago

The attention score is indeed fixed during one epoch of training, but the (also fixed) attention score in the next epoch differs from the one in the current epoch. Updating the attention score across the different layers within one epoch might achieve slightly better performance; however, in my opinion, updating it once per epoch like this helps avoid out-of-memory errors.
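A schematic of that schedule as I understand it (the score function and the training updates are toy stand-ins, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 3))            # toy entity embeddings
nbrs = [(0, 1), (0, 2), (0, 3)]          # toy neighborhood of head entity 0

def recompute_A(emb):
    """Stand-in for Eqs. 4-5: rebuild attention from the current embeddings."""
    raw = np.array([emb[h] @ np.tanh(emb[t]) for h, t in nbrs])
    e = np.exp(raw - raw.max())
    return e / e.sum()

for epoch in range(3):
    A = recompute_A(emb)                 # recomputed once, then frozen for the epoch
    print(f"epoch {epoch}: A = {A.round(3)}")
    # ... all CF and KG mini-batches for this epoch run against this fixed A ...
    emb += 0.1 * rng.normal(size=emb.shape)  # stand-in for the training updates
```

The printout shows exactly the behavior described above: within an epoch A never changes, but each new epoch gets a different A because the embeddings have moved.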