torralba-lab / im2recipe

Code supporting the CVPR 2017 paper "Learning Cross-modal Embeddings for Cooking Recipes and Food Images"
MIT License

main.lua: inconsistent tensor size #12

Closed: t-f-o-t-s closed this issue 6 years ago

t-f-o-t-s commented 6 years ago

I followed the README and ran

th main.lua -dataset data/data.h5 -ingrW2V data/text/vocab.bin -net resnet -resnet_model data/vision/resnet-50.t7 -snapfile snaps/snap -dispfreq 1000 -valfreq 10000 -batchSize 64

and got the following traceback:

/home/hoge/torch/install/bin/luajit: /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] inconsistent tensor size, expected tensor [7 x 1024] and src [7 x 8] to have the same number of elements, but got 7168 and 56 elements respectively at /tmp/luarocks_torch-scm-1-2888/torch7/lib/TH/generic/THTensorCopy.c:120
stack traceback:
    [C]: at 0x7fad383a0910
    [C]: in function '__newindex'
    ./loader/DataLoader.lua:112: in function <./loader/DataLoader.lua:38>
    [C]: in function 'xpcall'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
    /home/foge/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/hoge/torch/install/share/lua/5.1/threads/queue.lua:41>
    [C]: in function 'pcall'
    /home/hoge/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
    [string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
    [C]: in function 'error'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:223: in function 'addjob'
    /mnt/hdd/hoge/codes/im2recipe/drivers/train.lua:191: in function </mnt/hdd/hoge/codes/im2recipe/drivers/train.lua:189>
    /mnt/hdd/hoge/codes/im2recipe/drivers/init.lua:43: in function 'train'
    main.lua:70: in main chunk
    [C]: in function 'dofile'
    ...ujii/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk

When testing with your pretrained model, I got a similar message:

/home/hoge/torch/install/bin/luajit: /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] inconsistent tensor size, expected tensor [15 x 1024] and src [15 x 8] to have the same number of elements, but got 15360 and 120 elements respectively at /tmp/luarocks_torch-scm-1-2888/torch7/lib/TH/generic/THTensorCopy.c:120
stack traceback:
    [C]: at 0x7fefecfcb910
    [C]: in function '__newindex'
    ./loader/TestDataLoader.lua:93: in function <./loader/TestDataLoader.lua:28>
    [C]: in function 'xpcall'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
    /home/hoge/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/hoge/torch/install/share/lua/5.1/threads/queue.lua:41>
    [C]: in function 'pcall'
    /home/hoge/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
    [string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
    [C]: in function 'error'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
    /home/hoge/torch/install/share/lua/5.1/threads/threads.lua:223: in function 'addjob'
    /mnt/hdd/hoge/codes/im2recipe/drivers/test.lua:38: in function </mnt/hdd/hoge/codes/im2recipe/drivers/test.lua:24>
    /mnt/hdd/hoge/codes/im2recipe/drivers/init.lua:47: in function </mnt/hdd/hoge/codes/im2recipe/drivers/init.lua:45>
    /mnt/hdd/hoge/codes/im2recipe/drivers/init.lua:43: in function 'test'
    main.lua:50: in main chunk
    [C]: in function 'dofile'
    ...ujii/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00405d00

I'm using the Recipe1M dataset. I suspect this is caused by differences in my environment (Torch version, OS, etc.). Is it possible to change the tensor and src sizes? Thank you in advance!
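For reference, the copy failure itself can be reproduced in isolation. This is only a rough sketch with hypothetical shapes and variable names (not the project's actual DataLoader code), assuming the loader preallocates a batch buffer with stDim columns and copies per-recipe skip-thought vectors into row slices:

-- Hypothetical minimal reproduction of the error above: a [7 x 1024]
-- buffer is preallocated for stDim = 1024, but the skip-thought vectors
-- being copied in are only 8-dimensional.
require 'torch'

local stDim  = 1024                      -- dimension the loader expects
local batch  = torch.Tensor(7, stDim)    -- preallocated [7 x 1024] buffer
local stVecs = torch.Tensor(7, 8)        -- vectors encoded with dim = 8

local ok, err = pcall(function()
  batch[{{1, 7}}] = stVecs               -- copy fails: 7168 vs 56 elements
end)
print(err)  -- "inconsistent tensor size, expected tensor [7 x 1024] and src [7 x 8] ..."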

t-f-o-t-s commented 6 years ago

After decreasing stDim to 8, it seems to be working fine for now. Does the tensor size depend on the environment?
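One way to decide what stDim should be is to check the width of the skip-thought vectors actually stored in the HDF5 file. A rough sketch, assuming torch-hdf5 is installed; the dataset key '/stvecs_train' is a placeholder, so substitute whichever key your DataLoader reads the skip-thought vectors from:

-- Inspect the stored skip-thought vector width (key name is a placeholder).
require 'hdf5'

local f = hdf5.open('data/data.h5', 'r')
-- ':all()' loads the whole dataset; fine for a one-off check, though a
-- partial read would be lighter on memory for large splits.
local stvecs = f:read('/stvecs_train'):all()
print(stvecs:size())   -- the last dimension is the width the vectors were encoded with
f:close()

If that width is 8, the vectors really were encoded with dim = 8, so either stDim has to be lowered to match or the skip-thought encoder retrained at the expected 1024.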

nhynes commented 6 years ago

Thanks for the report! May I ask whether you trained your own version of skip-thoughts using stDim = 8? If not, that's an interesting bug that I'll look into further.

t-f-o-t-s commented 6 years ago

OMG. According to my command history, I somehow used dim=8 when training skip-thoughts. Sorry for the silly report.