There was some change in cutorch, please try updating cutorch (It fixed the problem for me)
I did the following steps, but got the same error:
luarocks install torch
luarocks install cutorch
try: luarocks install cutorch 1.0-0
Thanks. It works now :+1:
Hi, I am hitting this problem:

/nn/LookupTable.lua:75: bad argument #3 to 'index' (Tensor | LongTensor expected, got torch.CudaLongTensor)
stack traceback:
        [C]: in function 'index'
        ...uzhou/torch_new/install/share/lua/5.1/nn/LookupTable.lua:75: in function 'func'
        ...zhou/torch_new/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
        ...zhou/torch_new/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
        ./misc/word_level.lua:91: in function 'forward'
        eval.lua:164: in function 'eval_split'
        eval.lua:189: in main chunk
        [C]: in function 'dofile'
        .../torch_new/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00405e90

I ran the code successfully before; however, after I updated Torch it fails. I have already tried: luarocks install cutorch 1.0-0. Can you give me some advice? @idansc @jiasenlu @badripatro
Hi
I didn't check the code with the updated Torch. I'll try that and give you feedback later.
Thanks
Jiasen
This is caused by a recent update to LookupTable. I ended up with the following solution: wrap any call to maskedFill.

if mask:type() == 'torch.CudaTensor' then
    mask = mask:cudaByte()
end
data:maskedFill(mask, -9999999)
if mask:type() == 'torch.CudaByteTensor' then
    mask = mask:cuda()
end
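For anyone who wants to reuse this, the guard can be kept as a small helper. This is only a minimal sketch (the name safeMaskedFill and the value argument are mine, not from the repo), assuming the installed cutorch is the version that requires a CudaByteTensor mask:

require 'cutorch'

-- Hypothetical helper: newer cutorch expects a CudaByteTensor mask for
-- maskedFill, so fill through a byte copy of a CudaTensor mask. The caller's
-- mask is left untouched because cudaByte() returns a new tensor.
local function safeMaskedFill(data, mask, value)
  if mask:type() == 'torch.CudaTensor' then
    mask = mask:cudaByte()          -- local byte copy of the 0/1 mask
  end
  data:maskedFill(mask, value)
  return data
end

-- example: mask out padded positions with a large negative value
local data = torch.CudaTensor(2, 3):zero()
local mask = torch.Tensor{{0, 1, 0}, {1, 0, 0}}:cuda()
safeMaskedFill(data, mask, -9999999)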
My Torch is torch7, CUDA is 8.0, and cuDNN is cudnn-8.0; before, CUDA was 7.5 and cuDNN was cudnn-7.5 (that version worked). @jiasenlu I have tried the method from @idansc but it still fails. The error is still:

/nn/LookupTable.lua:75: bad argument #3 to 'index' (Tensor | LongTensor expected, got torch.CudaLongTensor)
Try installing cutorch again (not version 1.0 this time). If that doesn't work, go to LookupTable and revert the recent changes.
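If you want to check whether the cutorch you have installed already accepts CudaLongTensor indices (which is what LookupTable.lua:75 is passing, per the traceback above), a quick test in the th REPL looks something like this; it's my own snippet, not from the repo:

require 'cutorch'

-- Tiny reproduction of the failing call pattern: index a CUDA weight matrix
-- with a torch.CudaLongTensor, as the traceback shows LookupTable doing.
-- If this errors with "Tensor | LongTensor expected", the installed cutorch
-- is still too old and needs to be updated.
local weight = torch.CudaTensor(10, 5):uniform()
local idx = torch.LongTensor{1, 3, 5}:cudaLong()
local ok, err = pcall(function()
  return torch.CudaTensor():index(weight, 1, idx)
end)
print(ok and 'cutorch accepts CudaLongTensor indices'
          or ('not supported yet: ' .. tostring(err)))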
Thank you, @idansc! But now the error is:

./misc/ques_level.lua:122: invalid arguments: CudaTensor CudaTensor number
expected arguments: CudaTensor CudaByteTensor float
stack traceback:
        [C]: in function 'maskedFill'
        ./misc/ques_level.lua:122: in function 'forward'
        eval.lua:172: in function 'eval_split'
        eval.lua:193: in main chunk
        [C]: in function 'dofile'
        .../torch_new/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00405e90
Thank you very much! I figured it out with:

if self.mask:type() == 'torch.CudaTensor' then
    self.mask = self.mask:cudaByte()
end
out[i]:maskedFill(self.mask:narrow(2,t,1):contiguous():view(batch_size,1):expandAs(out[i]), 0)
if self.mask:type() == 'torch.CudaByteTensor' then
    self.mask = self.mask:cuda()
end
@idansc thanks!
/home1/raunak/torch/install/bin/luajit: ...e1/raunak/torch/install/share/lua/5.1/nn/LookupTable.lua:75: bad argument #3 to 'index' (Tensor | LongTensor expected, got torch.CudaLongTensor)
stack traceback:
        [C]: in function 'index'
        ...e1/raunak/torch/install/share/lua/5.1/nn/LookupTable.lua:75: in function 'func'
        ...1/raunak/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
        ...1/raunak/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
        ./misc/word_level.lua:91: in function 'forward'
        train.lua:254: in function 'lossFun'
        train.lua:312: in main chunk
        [C]: in function 'dofile'
        ...unak/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670
Need some help @jiasenlu @idansc
Thanks, fixed now.
Problem statement: I am getting the following error:
qlua: ./misc/maskSoftmax.lua:31: invalid arguments: CudaTensor CudaTensor number
expected arguments: CudaTensor CudaByteTensor float
stack traceback:
        [C]: at 0x7fd8cfc39b60
        [C]: in function 'maskedFill'
        ./misc/maskSoftmax.lua:31: in function 'func'
        /home/cse/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
        /home/cse/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
        ./misc/word_level.lua:92: in function 'forward'
        predict.lua:142: in main chunk
I didn't change anything; I am using your code as-is. Please let me know how to figure this out.
Setup:
Execution report:

VQA/HieCoAttenVQA-master$ qlua predict.lua image_model/VGG_ILSVRC_19_layers_deploy.prototxt image_model/VGG_ILSVRC_19_layers.caffemodel cudnn
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded image_model/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
Load the weight...
total number of parameters in cnn_model: 20024384
total number of parameters in word_level: 8031747
total number of parameters in phrase_level: 2889219
total number of parameters in ques_level: 5517315
constructing clones inside the ques_level
total number of parameters in recursive_attention: 2862056
qlua: ./misc/maskSoftmax.lua:31: invalid arguments: CudaTensor CudaTensor number
expected arguments: CudaTensor CudaByteTensor float
stack traceback:
        [C]: at 0x7fd8cfc39b60
        [C]: in function 'maskedFill'
        ./misc/maskSoftmax.lua:31: in function 'func'
        /home/cse/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
        /home/cse/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
        ./misc/word_level.lua:92: in function 'forward'
        predict.lua:142: in main chunk
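This maskSoftmax.lua:31 failure is the same maskedFill mask-type mismatch discussed above, so either updating cutorch or applying the cudaByte() guard from @idansc at that line should resolve it. Below is a standalone sketch that reproduces the error message and shows the guard; the tensor names (scores, mask) are mine, not the repo's, and the fill value at line 31 should stay whatever the original code uses:

require 'cutorch'

-- Reproduce the reported error: on the cutorch version in this thread,
-- maskedFill with a CudaTensor mask fails with
-- "expected arguments: CudaTensor CudaByteTensor float".
local scores = torch.CudaTensor(2, 4):uniform()
local mask = torch.Tensor{{0, 0, 1, 1}, {0, 1, 1, 1}}:cuda()   -- 1 = padded slot

local ok, err = pcall(function() scores:maskedFill(mask, -9999999) end)
print('CudaTensor mask accepted:', ok, err)

-- The guard: fill through a byte copy of the mask instead.
scores:maskedFill(mask:cudaByte(), -9999999)
print(scores)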