torch / demos

Demos and tutorials around Torch7.

CUDA version of the train-on-cifar.lua #33

Closed · srp1970 closed this issue 8 years ago

srp1970 commented 8 years ago

Has anyone done a CUDA version of train-on-cifar? I tried converting the provided CPU version to CUDA by doing this:

```lua
if opt.network == '' then
   -- define model to train
   model = nn.Sequential()
   model:add(nn.Copy('torch.FloatTensor', 'torch.CudaTensor'):cuda())
```

and then:

```lua
-- load dataset
trainData = {
   data = torch.CudaTensor(50000, 3072),
   labels = torch.CudaTensor(50000),
   size = function() return trsize end
}
```

But it gives the following error:

```
preprocessing data (color space + normalization)
/usr6/prakash/DNN/Torch/luajit-rocks/bin/luajit: ...sh/DNN/Torch/luajit-rocks/share/lua/5.1/nn/CSubTable.lua:10: bad argument #1 to 'resizeAs' (torch.DoubleTensor expected, got torch.CudaTensor)
stack traceback:
        [C]: in function 'resizeAs'
        ...sh/DNN/Torch/luajit-rocks/share/lua/5.1/nn/CSubTable.lua:10: in function 'updateOutput'
        ...cks/share/lua/5.1/nn/SpatialSubtractiveNormalization.lua:89: in function 'updateOutput'
        ...h/DNN/Torch/luajit-rocks/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
        ...cks/share/lua/5.1/nn/SpatialContrastiveNormalization.lua:29: in function 'forward'
        ...akash/DNN/Torch/luajit-rocks/share/lua/5.1/nn/Module.lua:276: in function <...akash/DNN/Torch/luajit-rocks/share/lua/5.1/nn/Module.lua:275>
        [C]: in function 'normalization'
        train-on-cifar-cuda.lua:193: in main chunk
        [C]: in function 'dofile'
        ...Torch/luajit-rocks/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x004051e0
```

I also tried setting the default tensor type to float with torch.setdefaulttensortype (as suggested in another thread), but that does not work either.

Thanks for any suggestions.

Prakash
soumith commented 8 years ago

I've already answered your exact question here: https://github.com/torch/demos/issues/30
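For readers who land on this thread with the same traceback: the error comes from running the SpatialContrastiveNormalization preprocessing on CudaTensors, while its internal nn modules (CSubTable and company) are built with the default CPU DoubleTensor type. Below is a minimal sketch of the usual pattern, assuming cutorch/cunn are installed and that the installed cunn supports ClassNLLCriterion on the GPU: do the preprocessing on CPU Float tensors, then move the model, criterion, and mini-batches to the GPU. The tiny dataset and linear model are hypothetical stand-ins for the demo's real CIFAR-10 data and ConvNet, and this is not necessarily the exact fix described in issue #30.

```lua
require 'torch'
require 'nn'
require 'image'
require 'cunn'  -- loads cutorch plus the CUDA backends for nn

-- Keep the dataset in CPU (Float) tensors while preprocessing; the
-- contrastive-normalization modules below are CPU modules.
local trsize = 8  -- tiny stand-in for the 50000 CIFAR-10 training images
local trainData = {
   data   = torch.FloatTensor(trsize, 3, 32, 32):uniform(),
   labels = torch.Tensor(trsize):random(10),
   size   = function() return trsize end
}

-- CPU-side preprocessing in the style of the demo: normalize the first channel
local neighborhood  = image.gaussian1D(7)
local normalization = nn.SpatialContrastiveNormalization(1, neighborhood, 1):float()
for i = 1, trainData:size() do
   trainData.data[{ i, {1} }] = normalization:forward(trainData.data[{ i, {1} }])
end

-- Only after preprocessing, move the model and criterion to the GPU
local model = nn.Sequential()
model:add(nn.Reshape(3 * 32 * 32))
model:add(nn.Linear(3 * 32 * 32, 10))
model:add(nn.LogSoftMax())
model = model:cuda()
local criterion = nn.ClassNLLCriterion():cuda()

-- In the training loop, copy each CPU mini-batch into CUDA tensors
local inputs  = trainData.data[{ {1, 4} }]:cuda()    -- toy batch of 4 images
local targets = trainData.labels[{ {1, 4} }]:cuda()
local output  = model:forward(inputs)
local loss    = criterion:forward(output, targets)
print('loss on the toy batch: ' .. loss)
```

The key point is that :cuda() is only called after the CPU-side preprocessing has finished; keeping trainData in Float tensors is what avoids the resizeAs type mismatch inside CSubTable.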