AbdElrahmanMostafaRifaat1432 opened this issue 2 years ago
You can see details here: https://github.com/NicolasHug/Surprise/blob/00904a11c39f4871102fa6daf0899cf9993a790d/surprise/prediction_algorithms/matrix_factorization.pyx#L260-L269
1. global mean + item bias
2. global mean + user bias
3. global mean
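In plain Python, the logic at those lines behaves roughly like the following (a simplified sketch of the biased case, not the exact Cython source):

```python
import numpy as np

def estimate_sketch(algo, trainset, u, i):
    """Simplified sketch of SVD's biased estimate for inner ids u, i:
    unknown users/items simply fall back to the baseline terms."""
    known_user = trainset.knows_user(u)
    known_item = trainset.knows_item(i)

    est = trainset.global_mean
    if known_user:
        est += algo.bu[u]                       # user bias
    if known_item:
        est += algo.bi[i]                       # item bias
    if known_user and known_item:
        est += np.dot(algo.qi[i], algo.pu[u])   # latent-factor term
    return est
```

Here `u` and `i` are inner ids; `algo.predict()` converts raw ids for you and applies these fallbacks when an id wasn't seen during training.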
First, thank you for the answer.
Second, I am now working on a graduation project similar to the Amazon website, so I expect a lot of incoming users and items.
Is the implementation you showed me effective for this case, or is there a way to generate the latent factors for a new user or a new item?
I hope you can guide me in general terms, even if Surprise does not support the solution you suggest.
I would also really appreciate it if there is a concept I can search for to learn more about this problem.
> Is the implementation you showed me effective for this case?
Not really: the mean of all ratings is pretty uninformative when it comes to recommending personalized items.
The problem you're facing is commonly called the "cold start problem". It's a hard problem in general. There are different ways to address it, which are mostly out of scope for Surprise. Perhaps there are ways specific to SVD as well, although I'm not aware of any personally.
Hope this can help your research!
Also check this thread: https://github.com/NicolasHug/Surprise/issues/208. There are forks out there that have tried to tackle this problem in Surprise. I haven't checked them in detail, though.
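For what it's worth, one idea that often comes up for brand-new users is to "fold in" their latent factors against the item factors an already-trained model has learned, using whatever few ratings the new user has. A rough sketch (the MovieLens data and item ids are placeholders, and `fold_in_user` is not part of Surprise's API):

```python
import numpy as np
from surprise import Dataset, SVD

# Illustrative setup only: MovieLens 100k and a plain biased SVD.
data = Dataset.load_builtin('ml-100k')
trainset = data.build_full_trainset()
algo = SVD()
algo.fit(trainset)

def fold_in_user(algo, trainset, ratings, reg=0.1):
    """Estimate latent factors for a user who was NOT in the training set,
    from a handful of (raw_item_id, rating) pairs, via ridge regression
    against the learned item factors. Not part of Surprise's API."""
    Q, y = [], []
    for raw_iid, r in ratings:
        iid = trainset.to_inner_iid(raw_iid)  # ValueError if the item is also unknown
        Q.append(algo.qi[iid])
        y.append(r - trainset.global_mean - algo.bi[iid])
    Q, y = np.asarray(Q), np.asarray(y)
    # Solve (Q^T Q + reg * I) p_u = Q^T y
    return np.linalg.solve(Q.T @ Q + reg * np.eye(Q.shape[1]), Q.T @ y)

# A new user rated items '50' and '181'; score item '172' for them.
pu = fold_in_user(algo, trainset, [('50', 5.0), ('181', 4.0)])
iid = trainset.to_inner_iid('172')
score = trainset.global_mean + algo.bi[iid] + algo.qi[iid] @ pu
```

The same trick works symmetrically for a new item, using the factors of the users who rated it.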
Suppose we have user A and item B, and we call `svd.predict(A, B)`.
I hope you can clarify how SVD can predict ratings for the following 3 cases:
1. user A is not in the training set but item B is in the training set
2. user A is in the training set but item B is not in the training set
3. user A and item B are both not in the training set
Note: by "training set" I mean the data that I have trained the SVD model on.
I wonder how SVD can predict for a new user or item that it does not even know.
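For completeness, here is one way to check which case a given pair of raw ids falls into, and what `predict` returns for it (a sketch; the data and the ids 'A'/'B' are placeholders):

```python
from surprise import Dataset, SVD

data = Dataset.load_builtin('ml-100k')
trainset = data.build_full_trainset()
svd = SVD()
svd.fit(trainset)

def knows_raw_user(trainset, raw_uid):
    # to_inner_uid raises ValueError for users not seen during training
    try:
        trainset.to_inner_uid(raw_uid)
        return True
    except ValueError:
        return False

# predict() accepts raw ids it has never seen; it just falls back to the
# baseline terms described above instead of using latent factors.
print(knows_raw_user(trainset, 'A'))   # False for an id not in the data
print(svd.predict('A', 'B'))           # both unknown -> global mean
```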