Open sachinsharma9780 opened 2 years ago
Hi Sachin,
Thanks for your interest in our work! Our method is item-inductive but not user-inductive: the item KG lets us compute a representation for a new item, but we have no such KG on the user side.
Thank you for the response @hwwang55 .
So if I add a new user (u1) to the interaction matrix (Y), say with some engagements, and we now want to find out whether u1 will engage with a movie, say "Titanic", that is already in the KG, can't we generate user-specific (u1) embeddings for that movie in this case?
I am asking from the perspective of building a movie recommendation application on top of your proposed algorithm, where we somehow recommend movies to new users without retraining the KGCN algorithm.
It depends on how you design user embeddings. If user embeddings are randomly initialized vectors, you cannot handle the cold-start problem. If user embeddings are computed from user features, e.g., output by an MLP that takes a user's initial features as input, then you can run inference without retraining the model. Thanks!
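To make the second option concrete, here is a minimal numpy sketch of a feature-based user embedding. All dimensions, weights, and feature choices are hypothetical (in the real model the MLP weights would be learned jointly with everything else); the point is only that the embedding is a *function* of features, so a brand-new user can be embedded without retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 raw user features, embedding size 8.
feat_dim, hidden_dim, emb_dim = 4, 16, 8

# MLP weights (learned jointly with the rest of the model in practice).
W1 = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(scale=0.1, size=(hidden_dim, emb_dim))
b2 = np.zeros(emb_dim)

def user_embedding(features: np.ndarray) -> np.ndarray:
    """Map raw user features (age, sex, occupation, ...) to an embedding.

    Because the embedding is computed from features rather than looked up
    in a per-user trainable table, a brand-new user can be embedded at
    inference time without retraining.
    """
    h = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

# A new user u1, described only by features, gets an embedding directly.
u1_features = np.array([0.25, 1.0, 0.0, 0.5])
u1 = user_embedding(u1_features)
print(u1.shape)  # (8,)
```

With randomly initialized per-user vectors, by contrast, a new user has no trained vector at all, which is exactly the cold-start failure described above.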
I am going through your other paper, "Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems", which claims that Label Smoothness adds an inductive bias to the algorithm.
Can this algorithm generalise to new users without retraining?
You still need an MLP to calculate user embeddings in KGNN-LS.
Thanks for clarifying. So in the paper, user embeddings are randomly initialized vectors, right?
Correct. You can of course calculate user embeddings using their initial features if available.
Just a question out of curiosity:
So if user features (e.g. demographics, sex, etc.) are available, we can create user embeddings via an MLP. Afterwards, how can we use these embeddings to generate recommendations for a new user?
Another main difficulty is finding a standard dataset that provides user features such as demographics. As far as I know, no standard dataset provides this user information.
Once you have the user embedding, you can use it to calculate the user-specific adjacency matrix, then run a GCN on this adjacency matrix. The item embeddings are contained in the GCN's output. Finally, you can predict user engagement labels from the user and item embeddings.
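The pipeline described above (user embedding → user-specific adjacency matrix → GCN → engagement scores) could be sketched as follows. This is a toy numpy illustration, not the repo's implementation: the KG, relation embeddings, GCN weights, and the exponential user-relation edge weighting are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
emb_dim, n_items, n_relations = 8, 5, 3

# Toy item KG: edges (item_i, item_j, relation_id), plus hypothetical
# relation / item embeddings and GCN weights (learned in the real model).
edges = [(0, 1, 0), (1, 2, 1), (2, 3, 2), (3, 4, 0), (0, 4, 1)]
relation_emb = rng.normal(size=(n_relations, emb_dim))
item_emb = rng.normal(size=(n_items, emb_dim))
W_gcn = rng.normal(scale=0.1, size=(emb_dim, emb_dim))

def user_specific_adjacency(user_emb: np.ndarray) -> np.ndarray:
    """Weight each KG edge by the user-relation affinity (inner product
    of user and relation embeddings), yielding a per-user graph."""
    A = np.zeros((n_items, n_items))
    for i, j, r in edges:
        w = np.exp(user_emb @ relation_emb[r])  # scalar edge weight
        A[i, j] = A[j, i] = w
    A += np.eye(n_items)                        # self-loops
    return A / A.sum(axis=1, keepdims=True)     # row-normalise

def recommend(user_emb: np.ndarray) -> np.ndarray:
    A_u = user_specific_adjacency(user_emb)
    # One GCN layer on the user-specific graph -> user-aware item embeddings.
    H = np.tanh(A_u @ item_emb @ W_gcn)
    # Engagement score = inner product of user and item embeddings.
    scores = H @ user_emb
    return np.argsort(-scores)                  # items ranked for this user

user_emb = rng.normal(size=emb_dim)
print(recommend(user_emb))
```

Note that nothing in `recommend` is user-specific except the embedding vector passed in, which is why a feature-based user embedding makes the whole pipeline usable for unseen users.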
In this case, does the model need to be trained on user-specific side information?
@sachinsharma9780 are you able to create the KG for a new dataset? Can you list the steps, if you don't mind?
Hi,
I am going through the paper, and one thing I find missing is information about the inductiveness of the proposed algorithm.
So my question is: is the proposed architecture inductive in nature, i.e., can it generalise to new users without retraining?
Thanks Sachin