Closed BhaveshBhansali closed 4 weeks ago
The model makes a prediction for every item in `recommend_user`, which means it computes scores for all 900k items for each recommendation. A batch of 20 users scores 18 million (user, item) pairs at once, so it is no wonder your Jupyter kernel gets killed.
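Since only the top-n items per user are needed, one way to keep memory bounded is to score items in chunks and maintain a running top-n instead of materializing the full (users x items) score matrix. A minimal NumPy sketch, where `score_chunk` is just a stand-in dot product for the model's real forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_rec, chunk = 4, 1000, 5, 200

# Toy embeddings standing in for the trained model's parameters.
user_emb = rng.normal(size=(n_users, 8))
item_emb = rng.normal(size=(n_items, 8))

def score_chunk(users, items):
    # Placeholder for the model's scoring of a chunk of items.
    return users @ items.T  # shape: (n_users, chunk_size)

best_scores = np.full((n_users, n_rec), -np.inf)
best_items = np.zeros((n_users, n_rec), dtype=int)

for start in range(0, n_items, chunk):
    s = score_chunk(user_emb, item_emb[start:start + chunk])
    # Merge this chunk's scores with the running top-n per user.
    cand_scores = np.concatenate([best_scores, s], axis=1)
    cand_items = np.concatenate(
        [best_items,
         np.tile(np.arange(start, start + s.shape[1]), (n_users, 1))],
        axis=1,
    )
    top = np.argpartition(-cand_scores, n_rec - 1, axis=1)[:, :n_rec]
    rows = np.arange(n_users)[:, None]
    best_scores = cand_scores[rows, top]
    best_items = cand_items[rows, top]
```

Peak memory then depends on the chunk size rather than on the full 900k-item catalog.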
Faster? Maybe with a better GPU?
Thank you for your answer.
Is there an explicit parameter to utilize the GPU during training and inference?
TensorFlow will use the GPU automatically if you have one. Make sure you have enough GPU memory to score 900k items.
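A quick way to check whether TensorFlow actually sees a GPU (a generic check, not specific to this library; the import is guarded in case TensorFlow is not installed):

```python
def visible_gpus():
    """Return TensorFlow's visible GPU devices, or None if TF is absent."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow not installed in this environment
    return tf.config.list_physical_devices("GPU")

gpus = visible_gpus()
if gpus is None:
    print("TensorFlow is not installed")
elif not gpus:
    print("TensorFlow found no GPU; inference will run on CPU")
else:
    print(f"TensorFlow sees {len(gpus)} GPU(s):", gpus)
```

If this prints an empty list on a GPU machine, the usual culprit is a CPU-only TensorFlow build or a CUDA driver mismatch.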
I trained a DeepFM model on approx. 4.5 million rows, consisting of approx. 1.5 million unique users and 900k unique items.
When batch predicting for approx. 20 users at a time, my Jupyter kernel gets killed. I am using a 64 GB machine.
```python
res = model.recommend_user(user=list_of_users, n_rec=20)
```
Could you please give a tip to make batch inference faster?
Many thanks in advance.
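One low-effort workaround is to split `list_of_users` into small chunks and call `recommend_user` once per chunk, so each call scores far fewer (user, item) pairs. A minimal sketch, with a hypothetical `DummyModel` standing in for the trained DeepFM model from this thread:

```python
class DummyModel:
    """Stand-in for the trained model; returns fake top-n item ids."""
    def recommend_user(self, user, n_rec):
        return {u: list(range(n_rec)) for u in user}

def batched(seq, size):
    """Yield consecutive chunks of `seq` of length at most `size`."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

model = DummyModel()
list_of_users = list(range(20))

results = {}
for chunk in batched(list_of_users, 5):
    # Each call now scores only 5 users' worth of items at a time.
    results.update(model.recommend_user(user=chunk, n_rec=20))
```

This trades a little wall-clock time for a much smaller peak memory footprint per call.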