rootAvish opened this issue 6 years ago (status: Open)
The model contains the m_U and m_V variables. After training, the embedding of each user and item has already been found. To predict, simply take the dot product of the corresponding m_U and m_V. If an item is not in the training set, then only its content can be used: pass it through the encoder to get E[z], and use that as its m_V.
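A minimal NumPy sketch of the prediction described above. The shapes of m_U and m_V and the `encoder_mean` function are assumptions for illustration, not the actual API of this repo; in the real model, E[z] would come from the trained VAE encoder.

```python
import numpy as np

# Assumed shapes: m_U is (n_users, k), m_V is (n_items, k),
# both learned during training (names from the thread, shapes assumed).
rng = np.random.default_rng(0)
n_users, n_items, k = 5, 10, 4
m_U = rng.normal(size=(n_users, k))
m_V = rng.normal(size=(n_items, k))

# In-matrix prediction: the score of user u for item v is a dot product.
u, v = 2, 7
score = m_U[u] @ m_V[v]

# Cold-start item: push its content vector through the trained encoder
# and use E[z] (the mean of the latent code) in place of m_V.
def encoder_mean(content_vec):
    """Hypothetical stand-in for the trained encoder's mean output."""
    W = rng.normal(size=(content_vec.shape[0], k))  # pretend weights
    return content_vec @ W

new_item_content = rng.normal(size=8)
m_v_new = encoder_mean(new_item_content)
cold_start_score = m_U[u] @ m_v_new
print(float(score), m_v_new.shape)
```

The key point is that in-matrix and cold-start items are scored the same way; only the source of the item embedding differs.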
@eelxpeng Thank you for your reply, the one thing about it that I still do not understand is how this would produce different recommendations for users U1 and U2 if user U1 has read articles [A1, A2, A5] while U2 has read articles [A1, A3, A6]. The method you are describing does not consider this "evidence", does it? Sorry if it's a stupid question, I'm still new to this 😅
Of course it does. That information is taken into account during training. Two sources of information are combined: collaborative information and content information. That is how the model arrives at m_U and m_V, and it is the central idea of the method.
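A toy illustration of the point above: two users whose training histories differ end up with different m_U rows, so the same item embeddings m_V produce different rankings for each of them. The numbers here are made up for illustration and do not come from the actual model.

```python
import numpy as np

# Three toy item embeddings (rows of m_V).
m_V = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.7, 0.7]])

# Two users whose histories pulled their embeddings in different directions.
m_U1 = np.array([0.9, 0.1])  # e.g. a user whose history resembles item 0
m_U2 = np.array([0.1, 0.9])  # e.g. a user whose history resembles item 1

# Same items, different rankings, purely because m_U differs.
rank_u1 = np.argsort(-(m_V @ m_U1))
rank_u2 = np.argsort(-(m_V @ m_U2))
print(rank_u1, rank_u2)  # different orderings for the two users
```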
Hi, I was trying to reproduce your results but I noticed that the `test_users` and `test_items` parameters passed to the function here: https://github.com/eelxpeng/CollaborativeVAE/blob/master/lib/cvae.py#L152 are never actually used in its body to make a prediction. Can you please tell me how to use these to actually generate a recommendation using your code?
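Putting the earlier replies together, here is a hedged sketch of how `test_users` could be used to generate and evaluate recommendations from the trained m_U and m_V. The interpretation of `test_users[u]` as the list of held-out item ids for user u is an assumption about the repo's data format, and `recommend_top_m` / `recall_at_m` are hypothetical helper names, not functions from this codebase.

```python
import numpy as np

# Assumed: m_U (n_users, k) and m_V (n_items, k) come from the trained model.
rng = np.random.default_rng(1)
n_users, n_items, k = 3, 8, 4
m_U = rng.normal(size=(n_users, k))
m_V = rng.normal(size=(n_items, k))

# Assumption: test_users[u] holds the held-out item ids for user u.
test_users = [[1, 4], [0, 2, 7], [5]]

def recommend_top_m(u, m=5, exclude=()):
    """Rank all items for user u by dot-product score, skipping seen items."""
    scores = m_V @ m_U[u]        # (n_items,) predicted ratings
    order = np.argsort(-scores)  # best-scoring items first
    return [i for i in order if i not in exclude][:m]

def recall_at_m(m=5):
    """Fraction of held-out items recovered in each user's top-m list."""
    hits, total = 0, 0
    for u, held_out in enumerate(test_users):
        recs = recommend_top_m(u, m)
        hits += len(set(recs) & set(held_out))
        total += len(held_out)
    return hits / total

print(recommend_top_m(0), recall_at_m())
```

This matches the recall-style evaluation commonly used for this family of models: the parameters mark which interactions were held out, while the scores themselves come only from m_U and m_V.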