Open kinglai opened 3 years ago
Considering only the CF part, is the code you released equivalent to the NGCF model (Neural Graph Collaborative Filtering)?
It seems the attention matrix A is kept the same across all propagation layers (within each epoch). I am wondering: shouldn't we compute the attention matrix for each layer separately, based on the node embeddings of that layer, according to Equations 4 and 5? @xiangwang1223
I have the same question. Moreover, the attention scores on the knowledge graph are very expensive to compute, because the process consumes too much memory. Can it be optimised with multiprocessing?
The attention scores are actually fixed during one training epoch, but the scores in the next epoch (also fixed within that epoch) differ from those in the current one. Recomputing the attention scores per layer within an epoch might achieve slightly better performance. However, in my opinion, updating the attention scores this way helps avoid out-of-memory errors.
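To make the difference between the two schedules concrete, here is a minimal numpy sketch (not the released code). It uses a simplified dot-product attention score in place of KGAT's Eq. 4 (which also involves the relation embedding and a projection W_r), and compares "attention computed once from the initial embeddings" against "attention recomputed from each layer's embeddings". All function names here are hypothetical.

```python
import numpy as np

def attention_scores(emb, edges):
    # Simplified stand-in for Eq. 4: score of edge (h, t) is <e_h, e_t>.
    # (The real KGAT score is (W_r e_t)^T tanh(W_r e_h + e_r).)
    h, t = edges[:, 0], edges[:, 1]
    return np.einsum('ij,ij->i', emb[h], emb[t])

def softmax_per_head(scores, edges, n_nodes):
    # Eq. 5: normalise edge scores over each head node's neighbours.
    out = np.zeros_like(scores)
    for i in range(n_nodes):
        mask = edges[:, 0] == i
        if mask.any():
            e = np.exp(scores[mask] - scores[mask].max())
            out[mask] = e / e.sum()
    return out

def propagate(emb, edges, att):
    # One propagation layer: attention-weighted sum of neighbour embeddings.
    agg = np.zeros_like(emb)
    for (h, t), a in zip(edges, att):
        agg[h] += a * emb[t]
    return np.tanh(emb + agg)

rng = np.random.default_rng(0)
n_nodes, n_layers = 4, 3
emb0 = rng.normal(size=(n_nodes, 8))
edges = np.array([[0, 1], [0, 2], [1, 3], [2, 3], [3, 0]])

# Schedule A (as in the released code): attention fixed for the whole epoch,
# computed once from the layer-0 embeddings.
att_fixed = softmax_per_head(attention_scores(emb0, edges), edges, n_nodes)
e_fixed = emb0
for _ in range(n_layers):
    e_fixed = propagate(e_fixed, edges, att_fixed)

# Schedule B (what the question suggests): recompute attention at every
# layer from that layer's current embeddings.
e_layer = emb0
for _ in range(n_layers):
    att = softmax_per_head(attention_scores(e_layer, edges), edges, n_nodes)
    e_layer = propagate(e_layer, edges, att)

# The first layer is identical (both use emb0), but from layer 2 on the
# two schedules diverge, so the final embeddings differ.
```

Note that Schedule A only needs one sparse attentive matrix per epoch, while Schedule B needs a fresh score pass per layer, which is where the extra memory cost comes from on a large KG.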
In the knowledge-aware attention, is the attention score fixed during one training epoch?
Because the attentive Laplacian matrix is used for the attention and is only updated during the KG training phase.
thx