Closed jmi5 closed 7 years ago
The problem with a predict_next function for the GRU is that evaluation becomes really slow. By exploiting parallel computation, the batched function keeps it manageable. I switched to this solution shortly after our first paper.
If you set the batch_size to 1, it will essentially give you back the same results as a basic predict_next would. Keep in mind, though, that the inputs to the function still have to be 1D numpy arrays (of one session ID and one item ID, respectively); otherwise it won't work properly.
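To make this concrete, here is a minimal sketch of wrapping the batched predictor in a single-session predict_next. The _DummyModel and the exact predict_next_batch signature are assumptions for illustration only (the real model's method may take additional arguments and return a pandas DataFrame); the point is that the single IDs are passed as one-element 1D numpy arrays, as described above.

```python
import numpy as np

class _DummyModel:
    """Stand-in for the trained GRU; its predict_next_batch is a hypothetical
    signature mirroring the batched API discussed in this thread."""
    def __init__(self, n_items):
        self.n_items = n_items

    def predict_next_batch(self, session_ids, input_item_ids, batch=100):
        # Returns a (n_items, batch) array: one column of scores per session.
        rng = np.random.default_rng(int(input_item_ids[0]))
        return rng.random((self.n_items, len(session_ids)))

def predict_next(model, session_id, item_id):
    """Single-session wrapper: call the batched predictor with batch size 1.
    Note the inputs must still be 1D numpy arrays, not scalars."""
    scores = model.predict_next_batch(
        np.array([session_id]),   # 1D array holding one session ID
        np.array([item_id]),      # 1D array holding one item ID
        batch=1,
    )
    return scores[:, 0]           # scores over all items for this one session

model = _DummyModel(n_items=5)
scores = predict_next(model, session_id=0, item_id=42)
print(scores.shape)  # (5,)
```

Passing np.array([session_id]) rather than the bare scalar is the detail that trips people up: the batched code indexes into these arrays, so scalars break it even when the batch size is 1.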
If the evaluation is still confusing, let me know what part you are talking about, and I'll try to explain it.
Hi,
First of all, thank you very much for posting this implementation - it's been very helpful to work through.
I see that there are predict_next methods defined for all of the baseline models (Pop, Item KNN, etc.), but not one for the actual GRU itself. It has a predict_next_batch method, but I'm getting a bit confused trying to understand the batch evaluation, and thought I would fall back to the simpler case. I searched the repo for a predict_next function attached to the GRU, but could not find one. Would you mind posting one, or talking me through how I might implement it?
Thanks very much, Josh