Open wangxuekui opened 5 years ago
Hi, in your work, Attentive Embedding Propagation is only used for recommendation training; why is it not used in the training process of KGE? Have you conducted any experiments for comparison?

Thanks for your interest. Separating the attentive embedding propagation part from the KGE part is meant to reduce out-of-memory errors :( We are working on how to simplify KGAT. Thanks.

Thanks for your reply. If attentive embedding propagation is only used in recommendation training, then items with more user interactions will receive more effective information propagation from other entities. This is not beneficial for cold-start items, which have very few user interactions, so using Attentive Embedding Propagation in KGE as well might give better performance, especially for cold-start items.
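For context, the attentive embedding propagation discussed in this thread scores each knowledge-graph neighbor (r, t) of a head entity h and aggregates the tail embeddings with softmax-normalized attention. Below is a minimal NumPy sketch, not the repo's implementation: the helper name, shapes, and toy data are illustrative assumptions, and the scoring function follows the form pi(h, r, t) = (W_r e_t)^T tanh(W_r e_h + e_r) described in the KGAT paper.

```python
import numpy as np

def attentive_propagation(e_h, neighbors):
    """Attentive embedding propagation for one head entity h (sketch).

    e_h: head-entity embedding, shape (d,)
    neighbors: list of (W_r, e_r, e_t) per triple (h, r, t) —
      relation projection matrix, relation embedding, tail embedding.
    Returns the attention-weighted neighborhood embedding e_N(h).
    """
    scores = []
    for W_r, e_r, e_t in neighbors:
        # pi(h, r, t) = (W_r e_t)^T tanh(W_r e_h + e_r)
        scores.append((W_r @ e_t) @ np.tanh(W_r @ e_h + e_r))
    scores = np.array(scores)
    # Softmax over h's neighborhood (stabilized by subtracting the max).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # e_N(h) = sum_t pi(h, r, t) * e_t
    return sum(w * e_t for w, (_, _, e_t) in zip(weights, neighbors))
```

Under this view the follow-up's point is visible directly: entities with richer, better-trained neighborhoods get sharper attention weights, while cold-start items fall back on near-uniform aggregation.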