guoyang9 / NFM-pyorch

A PyTorch implementation of He et al., "Neural Factorization Machines for Sparse Predictive Analytics," SIGIR 2017.

bug report #3

Closed AmazingDD closed 5 years ago

AmazingDD commented 5 years ago

Traceback (most recent call last): File "main.py", line 126, in prediction = model(features, feature_values) File "C:\Users\qiwang\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in call result = self.forward(*input, *kwargs) File "C:\Users\qiwang\Desktop\NFM-pyorch-master\model.py", line 83, in forward nonzero_embed = self.embeddings(features) File "C:\Users\qiwang\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in call result = self.forward(input, **kwargs) File "C:\Users\qiwang\anaconda3\lib\site-packages\torch\nn\modules\sparse.py", line 117, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "C:\Users\qiwang\anaconda3\lib\site-packages\torch\nn\functional.py", line 1506, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAType instead (while checking arguments for embedding)

When I run your original code, the error above is raised. Can you fix it?

guoyang9 commented 5 years ago

The default device for this code is GPU. If you want to try it on CPU, you can adapt it this way: set device = torch.device('cpu') and change lines 96 and 121-123 in main.py from .cuda() to .to(device). This should work.
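A minimal, self-contained sketch of the device-agnostic pattern described above; the model and tensors here are placeholders, not the actual objects from main.py:

```python
import torch
import torch.nn as nn

# Pick CPU or GPU once, then move everything with .to(device)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(8, 1).to(device)        # instead of model.cuda()
features = torch.randn(4, 8).to(device)   # instead of features.cuda()

prediction = model(features)
print(prediction.shape)  # torch.Size([4, 1])
```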

AmazingDD commented 5 years ago

The default device for this code is GPU. If you want to try it on CPU, you can adapt it this way: set device = torch.device('cpu') and change lines 96 and 121-123 in main.py from .cuda() to .to(device). This should work.

Well, I don't think it's a device issue. But first I modified the code as you said, and it reported the error below:

Traceback (most recent call last):
  File "NFMRecommender.py", line 305, in <module>
    prediction = model(features, feature_values)
  File "C:\Users\dyu19\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "NFMRecommender.py", line 102, in forward
    nonzero_embed = self.embeddings(features)
  File "C:\Users\dyu19\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\dyu19\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\dyu19\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py", line 1467, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)

So maybe it's an issue with the forward function in model.py when doing prediction. As the traceback says, it expected scalar type Long but got torch.IntTensor (or CUDAType) instead. So is it a dataset problem?

guoyang9 commented 5 years ago

This is definitely not a dataset problem. I'm not sure which PyTorch version you are using, but this bug does not appear under my environment settings. You can try updating this line by adding dtype=np.int64; that should solve your problem, because the input indices to nn.Embedding() must be of long type.
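A minimal sketch of why the fix works, using placeholder feature indices rather than the repo's actual dataset: nn.Embedding expects int64 (Long) indices, and building the numpy array with dtype=np.int64 (or calling .long() on the tensor) produces them.

```python
import numpy as np
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)

# An int32 index tensor reproduces the reported RuntimeError:
# bad = torch.from_numpy(np.array([[1, 2, 3]], dtype=np.int32))
# embedding(bad)  # RuntimeError: Expected ... scalar type Long ...

# Building the array with dtype=np.int64 (or calling .long() on the tensor)
# yields the LongTensor that torch.embedding expects:
features = torch.from_numpy(np.array([[1, 2, 3]], dtype=np.int64))
nonzero_embed = embedding(features)
print(nonzero_embed.shape)  # torch.Size([1, 3, 4])
```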

AmazingDD commented 5 years ago

This is definitely not a dataset problem. I'm not sure which PyTorch version you are using, but this bug does not appear under my environment settings. You can try updating this line by adding dtype=np.int64; that should solve your problem, because the input indices to nn.Embedding() must be of long type.

That works, thx! :)