Closed srtianxia closed 5 years ago
Hi! I think that your vocabulary size is 37458 (less than 50002) -- check the preprocessing logs to find out
Thanks! I got the Amazon Instant Video dataset from http://jmcauley.ucsd.edu/data/amazon/
How can I get the right data set? Thanks!
You can set the vocabulary size to the correct one (e.g., 35000). See the script's parameters: parser.add_argument("-v", dest = "vocab_size", type = int, default = 50000, help = "Vocabulary Size (Default: 50000)").
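To make the mismatch concrete, here is a minimal sketch of the shape check behind the RuntimeError. The 300-dim vector width and the assumption that the +2 covers special tokens (e.g., padding/unknown) are guesses for illustration, not confirmed by this thread or the ANR code:

```python
import numpy as np

# Stand-in for the matrix saved in amazon_instant_video_wid_wordEmbed.npy:
# 37458 rows, one vector per vocabulary word (300-dim assumed here).
np_wid_wEmbed = np.zeros((37458, 300), dtype=np.float32)

# The embedding layer was built for vocab_size + 2 = 50002 rows
# (the +2 presumably accounts for special tokens), so the in-place
# copy into the layer's weight tensor fails with a size mismatch:
model_rows = 50000 + 2
assert np_wid_wEmbed.shape[0] != model_rows  # 37458 != 50002 -> RuntimeError

# Reading the true size from the saved matrix avoids hard-coding it:
actual_vocab_size = np_wid_wEmbed.shape[0] - 2  # 37456, under the +2 assumption
assert actual_vocab_size + 2 == np_wid_wEmbed.shape[0]
```

In other words, whatever value is passed via -v must produce an embedding layer whose row count matches the saved matrix exactly, which is why rerunning preprocessing (or reading the size from its logs) is the reliable fix.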
Thanks!
Hi! Very brilliant job, but I can't run the code.
When I use pytorch==0.4.1:
Loading pretrained word embeddings from "./datasets/amazon_instant_video/amazon_instant_video_wid_wordEmbed.npy"..
Traceback (most recent call last):
File "PyTorchTEST.py", line 101, in <module>
mdl = mdlZoo.createAndInitModel()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 49, in createAndInitModel
self.initModel()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 81, in initModel
self.initANR()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 109, in initANR
self.loadWordEmbeddings()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 187, in loadWordEmbeddings
self.mdl.widwEmbed.weight.data.copy_(torch.from_numpy(np_wid_wEmbed))
RuntimeError: The expanded size of the tensor (50002) must match the existing size (37458) at non-singleton dimension 0
When I use pytorch==0.3.1:
Loading pretrained word embeddings from "./datasets/amazon_instant_video/amazon_instant_video_wid_wordEmbed.npy"..
Traceback (most recent call last):
File "PyTorchTEST.py", line 101, in <module>
mdl = mdlZoo.createAndInitModel()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 49, in createAndInitModel
self.initModel()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 81, in initModel
self.initANR()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 109, in initANR
self.loadWordEmbeddings()
File "/data1/sunrui/py_project/ANR/model/ModelZoo.py", line 187, in loadWordEmbeddings
self.mdl.widwEmbed.weight.data.copy_(torch.from_numpy(np_wid_wEmbed))
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:52
Thanks!