AlexVaith opened this issue 3 years ago
Which algorithm are you using? If you are using a pure algorithm such as SVD or itemCF, you can pass users and items as lists to the predict function. To deal with the new-user problem, set the cold_start parameter.
>>> model.predict(user=[1,2,3], item=[4,5,6], cold_start="average")
If you are using a feat algorithm such as wide_deep or deepFM, you can use the predict_data_with_feats function, which can predict on new data and new users. You can find some brief explanations in the User Guide. The example script changing_feature_example.py also shows some usages of it in lines 65 - 68.
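Roughly like this (the import path and the feature columns below are just illustrative assumptions; the User Guide and changing_feature_example.py have the exact signature):
>>> import pandas as pd
>>> from libreco.prediction import predict_data_with_feats  # import path is an assumption
>>> new_data = pd.DataFrame({"user": [1, 2], "item": [4, 5], "sex": ["M", "F"]})  # feature columns must match training
>>> predict_data_with_feats(model, new_data, cold_start="average")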
I didn't put much focus on this problem, so I guess that's why these features are hard to find :) Actually, I don't consider the prediction problem important, because it is a recommender system. In real-world scenarios, its main job should be recommending instead of predicting. I've seen a lot of open-source recommender systems on GitHub, but few of them actually provide a recommend function. That really confused me, so I built this library in order to create a "real" recommender system.
I get your idea, but for evaluation purposes it is nice to get an overview of how the prediction error is distributed. Therefore I would like to use the evaluation set, as I would with other machine learning tasks.
I will have a look at the predict_data_with_feats function.
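Something like this is what I have in mind (eval_data is a held-out DataFrame with a label column; the names are just illustrative):
>>> import numpy as np
>>> preds = model.predict(user=eval_data["user"].tolist(),
...                       item=eval_data["item"].tolist(),
...                       cold_start="average")
>>> errors = eval_data["label"].to_numpy() - np.asarray(preds)
>>> np.percentile(errors, [5, 25, 50, 75, 95])  # spread of the prediction error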
In general, do you plan to port the current TF 1.x code to TF 2.x?
Yes, you can use the evaluate function to evaluate on the new data directly; see line 76 in changing_feature_example.py.
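For example (the import path and metric names here are assumptions; the example script is authoritative):
>>> from libreco.evaluation import evaluate  # import path is an assumption
>>> evaluate(model=model, data=new_data, metrics=["rmse", "mae"])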
I do have a TF 2.x plan, but it's certainly not my top priority. Migrating all the code is really painful. Besides, I implemented some of the algorithms in TF 2.x, but found that the training speed was much slower than in TF 1.x, so.... That said, this is not a formal benchmark, since I implemented them very casually.
Hey, the package is very nice in general. However, I have not found a way to use a model on new data or even on the test set. From my personal perspective, it is quite unnecessary to test the predictions on the training data. Maybe I have misunderstood the model's predict function, but from what I can tell, the user ID is mapped to the training set, right?
Could you provide a solution to my problem?