Closed: Alexdruso closed this issue 4 years ago.
Thanks, I forgot to update the model. It should work now, but it tends to be quite slow compared to the others. Not sure if speed can be improved somehow.
I'm trying it right now with the CUDA version of PyTorch; it's super fast on my RTX 2060 😄
OK, good. You can also tweak num_workers in the DataLoader; setting it to 5 or so should improve speed. How many samples per second do you get?
Around 23700 samples per second with the default num_workers = 0 and batch_size = 128. Increasing num_workers breaks the code, but increasing batch_size to 500 yields 50000 samples per second!
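A minimal sketch of the DataLoader tuning discussed above (the interaction data and all sizes here are placeholders, not the repo's actual dataset; the `__main__` guard is one common way to keep num_workers > 0 from breaking on platforms that spawn worker processes):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    # Placeholder dataset of (user, item, rating) triples; it stands in for
    # whatever interaction data the recommender is actually trained on.
    n_interactions = 100_000
    users = torch.randint(0, 1000, (n_interactions,))
    items = torch.randint(0, 2000, (n_interactions,))
    ratings = torch.rand(n_interactions)
    dataset = TensorDataset(users, items, ratings)

    # A larger batch_size and a few worker processes usually raise throughput.
    # num_workers > 0 spawns subprocesses, which is why the DataLoader should
    # be built under this __main__ guard on spawn-based platforms (Windows/macOS).
    loader = DataLoader(
        dataset,
        batch_size=500,
        shuffle=True,
        num_workers=4,                          # drop back to 0 if workers misbehave
        pin_memory=torch.cuda.is_available(),   # faster host-to-GPU copies
    )

    for user_batch, item_batch, rating_batch in loader:
        pass  # the training step (forward/backward) would go here
```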
Hi, using the code in this repo I noticed that at line 25 of the MF_MSE_PyTorch recommender,
super(MF_MSE_PyTorch, self).__init__()
is invoked without the URM as an argument, which makes it impossible to initialize the model.
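For context, a minimal sketch of the kind of fix being described, assuming the parent recommender's __init__ expects the URM; the BaseRecommender name and signature below are illustrative, not necessarily the repo's actual ones:

```python
import scipy.sparse as sps


class BaseRecommender:
    """Hypothetical stand-in for the repo's base recommender class;
    its __init__ expects the user-rating matrix (URM)."""

    def __init__(self, URM_train):
        self.URM_train = URM_train
        self.n_users, self.n_items = URM_train.shape


class MF_MSE_PyTorch(BaseRecommender):
    """Sketch of the fix: forward the URM to the parent constructor
    instead of calling super().__init__() with no arguments."""

    def __init__(self, URM_train):
        # Passing URM_train here is what the call at line 25 was missing
        super(MF_MSE_PyTorch, self).__init__(URM_train)


# Usage example with a small random sparse URM
URM = sps.random(100, 50, density=0.05, format="csr")
recommender = MF_MSE_PyTorch(URM)
print(recommender.n_users, recommender.n_items)  # 100 50
```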