@soroush-ziaeinejad 1) Where do they address the cold-start item/user? Do they have specific experiments for these scenarios? 2) I am thinking of using inconsistent neighbors as negative samples (in addition to positive/consistent neighbors). 3) One item your awesome reviews miss is the gaps of the current paper. Please add this item to this and the previous summaries. Thank you.
@hosseinfani
Thanks for the feedback.
Main problem:
The main problem is finding a way to alleviate the cold-start issue. This paper proposes a method that empowers the GNN (used in prior works) to resolve the social inconsistency problem.
Social inconsistency problem:
Two connected users in the social graph are not necessarily interested in the same topics. Also, a user being connected to an item does not always mean the user likes the item, because the rating value matters: two users who both rated the same item can have completely different opinions about it (user A: 5 stars, user B: 1 star).
Existing shortcomings:
ConsisRec:
Embedding Layer: concatenates the user and item embeddings into a single entity called the 'query'.
Query Layer: extracts the consistent neighbors of the user and the consistent neighbors of the item.
Neighbor Sampling: dynamically samples the neighbors that are consistent with the query (both social neighbors and interacted items). This boosts the GNN's power and improves recommendation performance.
Relation Attention: aggregates the embeddings of the sampled neighbors, weighting them by attention over their relation types.
Finally, the inner product of (the aggregated user embedding + the user's own embedding) and (the aggregated item embedding + the item's own embedding) gives the rating prediction, from which the final loss is computed; a sketch of this pipeline follows below.
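A minimal sketch of such a forward pass is given below, assuming a PyTorch setup. It is not the authors' implementation: the consistency score (negative L2 distance to the query), the embedding sizes, and all function and tensor names are assumptions made only to illustrate the query -> consistent-neighbor-sampling -> relation-attention -> inner-product flow.

```python
# Minimal sketch of a ConsisRec-style forward pass (not the authors' code).
# Dimensions, the consistency score, and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, d = 100, 200, 16

user_emb = torch.nn.Embedding(n_users, d)
item_emb = torch.nn.Embedding(n_items, d)

def query_embedding(u, i):
    # Query: concatenate the user and item embeddings into one vector.
    return torch.cat([user_emb(u), item_emb(i)], dim=-1)           # (2d,)

def sample_consistent_neighbors(query, neighbor_embs, k=5):
    # Neighbor sampling: keep neighbors whose embeddings are consistent with
    # the query; here "consistency" is the (assumed) negative L2 distance
    # between the query and a neighbor embedding lifted to the query size.
    lifted = torch.cat([neighbor_embs, neighbor_embs], dim=-1)     # (n, 2d)
    score = -torch.norm(lifted - query, dim=-1)                    # higher = more consistent
    probs = F.softmax(score, dim=0)
    idx = torch.multinomial(probs, num_samples=min(k, len(probs)), replacement=False)
    return neighbor_embs[idx], probs[idx]

def relation_attention(sampled_embs, sampled_probs):
    # Attention-style aggregation of the sampled neighbors; the real model
    # also conditions the weights on relation types.
    weights = sampled_probs / sampled_probs.sum()
    return (weights.unsqueeze(-1) * sampled_embs).sum(dim=0)       # (d,)

def predict_rating(u, i, user_neighbors, item_neighbors):
    q = query_embedding(u, i)
    u_agg, u_p = sample_consistent_neighbors(q, user_emb(user_neighbors))
    i_agg, i_p = sample_consistent_neighbors(q, item_emb(item_neighbors))
    # Final prediction: inner product of (aggregated + own) embeddings.
    u_vec = relation_attention(u_agg, u_p) + user_emb(u)
    i_vec = relation_attention(i_agg, i_p) + item_emb(i)
    return (u_vec * i_vec).sum()

# Toy usage: predict a rating for user 3 and item 7 with random neighbor sets.
u, i = torch.tensor(3), torch.tensor(7)
user_neighbors = torch.randint(0, n_users, (10,))
item_neighbors = torch.randint(0, n_items, (10,))
print(predict_rating(u, i, user_neighbors, item_neighbors).item())
```

In a full model this forward pass would sit inside a training loop that minimizes the rating-prediction error (the paper reports RMSE and MAE).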
Inputs:
The user-item rating graph and the user-user social graph.
Outputs:
The predicted rating for a given user-item pair.
Experiments:
Baselines:
ConsisRec outperforms all of these baselines on both datasets, improving both RMSE and MAE by at least 1.7 percent.
Code:
The code of this paper is available here
Presentation:
There is no available presentation for this paper.