twitter-archive / torch-autograd

Autograd automatically differentiates native Torch code
Apache License 2.0

I can't pass tensors to AutoModule #150

Open dlmacedo opened 8 years ago

dlmacedo commented 8 years ago

Dear friends,

I am trying to use AutoModule and getting the following error:

```
...torch/install/share/lua/5.1/autograd/auto/AutoModule.lua:62: invalid arguments: FloatTensor number nil
```

In short, my code is the following:

```lua
local initialWeight = torch.Tensor(2,2):fill(1.234)
local initialBias = torch.Tensor(1,2):fill(5.678)
local autogradFunc = autograd.nn.AutoModule('TestReLU')(units.agSReLU, initialWeight:clone(), initialBias:clone())
model:add(autogradFunc)
```

My complete model code is below:

```lua
local function createModel(opt)
   local model = nn.Sequential()

   -- building block
   local function Block(nInputPlane, nOutputPlane)
      model:add(nn.SpatialConvolution(nInputPlane, nOutputPlane, 3,3, 1,1, 1,1):noBias())
      model:add(nn.SpatialBatchNormalization(nOutputPlane,1e-3))
      -- Adding a condition to use ReLU...
      if opt.unit == "ReLU" then
         model:add(nn.ReLU(true))
      -- Adding other options of units...
      elseif opt.unit == "agReLU" then
         local autogradFunc = autograd.nn.AutoModule('AutoGradReLU')(units.agReLU)
         model:add(autogradFunc)
      elseif opt.unit == "agSReLU" then
         local initialWeight = torch.Tensor(2,2):fill(1.234)
         local initialBias = torch.Tensor(1,2):fill(5.678)
         local autogradFunc = autograd.nn.AutoModule('TestReLU')(units.agSReLU, initialWeight:clone(), initialBias:clone())
         model:add(autogradFunc)
      end
      -- End of unit options
      return model
   end

   local function MP()
      model:add(nn.SpatialMaxPooling(2,2,2,2):ceil())
      return model
   end

   local function Group(ni, no, N, f)
      for i=1,N do
         Block(i == 1 and ni or no, no)
      end
      if f then f() end
   end

   Group(3,64,2,MP)
   Group(64,128,2,MP)
   Group(128,256,4,MP)
   Group(256,512,4,MP)
   Group(512,512,4)
   model:add(nn.SpatialAveragePooling(2,2,2,2):ceil())
   model:add(nn.View(-1):setNumInputDims(3))
   model:add(nn.Linear(512,opt and opt.num_classes or 10))

   utils.FCinit(model)
   utils.testModel(model)
   utils.MSRinit(model)

   return model
end
```

Here is the complete error output:

```
/home/dlm/torch/install/bin/luajit: /home/dlm/torch/install/share/lua/5.1/nn/Container.lua:67:
In 52 module of nn.Sequential:
...torch/install/share/lua/5.1/autograd/auto/AutoModule.lua:62: invalid arguments: FloatTensor number nil
expected arguments: FloatTensor [FloatTensor] float | FloatTensor [FloatTensor] [float] FloatTensor
stack traceback:
	[C]: in function 'add'
	...torch/install/share/lua/5.1/autograd/auto/AutoModule.lua:62: in function 'accGradParameters'
	/home/dlm/torch/install/share/lua/5.1/nn/Module.lua:32: in function </home/dlm/torch/install/share/lua/5.1/nn/Module.lua:29>
	[C]: in function 'xpcall'
	/home/dlm/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
	/home/dlm/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
	/home/dlm/projects/wide-residual-networks/models/utils.lua:29: in function 'testModel'
	models/vgg.lua:72: in function <models/vgg.lua:26>
	train.lua:67: in main chunk
	[C]: in function 'dofile'
	.../dlm/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00406670
```

```
WARNING: If you see a stack trace below, it doesn't point to the place where this error occured. Please use only the one above.
stack traceback:
	[C]: in function 'error'
	/home/dlm/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
	/home/dlm/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
	/home/dlm/projects/wide-residual-networks/models/utils.lua:29: in function 'testModel'
	models/vgg.lua:72: in function <models/vgg.lua:26>
	train.lua:67: in main chunk
	[C]: in function 'dofile'
	.../dlm/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00406670
```
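For comparison, this is the kind of parameterized AutoModule I understood should work, adapted from the linear-layer sketch in the torch-autograd README (the `AutoLinear` name and the `linear` function here are just an illustration, not my actual `units.agSReLU`):

```lua
local autograd = require 'autograd'
local nn = require 'nn'

-- A differentiable function of (input, weight, bias); all three are
-- tensors, so AutoModule can accumulate gradients for weight and bias.
local linear = function(input, weight, bias)
   local y = input * weight + torch.expand(bias, torch.size(input, 1), torch.size(bias, 2))
   return y
end

local initialWeight = torch.Tensor(4, 2):uniform(-0.1, 0.1)
local initialBias = torch.Tensor(1, 2):zero()

-- Wrap the function as an nn module with learnable weight and bias
local linearModule = autograd.nn.AutoModule('AutoLinear')(linear, initialWeight:clone(), initialBias:clone())

local model = nn.Sequential()
model:add(linearModule)
```

My `units.agSReLU` follows the same `(input, weight, bias)` pattern, yet `backward` fails inside `accGradParameters`.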

Can anybody help me?

Thanks