torch / rnn

Torch recurrent neural networks
BSD 3-Clause "New" or "Revised" License

AbstractRecurrent assert checks incorrect nn.Module class #42

Closed tastyminerals closed 6 years ago

tastyminerals commented 6 years ago

I have recreated exactly the same recurrent language model code, and it still crashes on the last line with AbstractRecurrent.lua:9: nn.Recursor expecting nn.Module instance at arg 1

  local net = nn.Sequential() -- main network container

  -------------------- input layer --------------------
  local lookup = nn.LookupTable(#trainset.ivocab, opt.inputsize)
  net:add(lookup)

  if opt.dropout > 0 then
        net:add(nn.Dropout(opt.dropout))
  end

  -------------------- Recurrent layer --------------------
  local stepmodule = nn.Sequential()

  local rnn = nn.RecGRU(opt.inputsize, opt.hiddensize[1])
  stepmodule:add(rnn)

  -------------------- Output layer --------------------

  if opt.dropout > 0 then
    stepmodule:add(nn.Dropout(opt.dropout))
  end

  stepmodule:add(nn.Linear(opt.hiddensize[1],1))
  stepmodule:add(nn.Sigmoid())

  -- adding recurrency
  net:add(nn.Sequencer(stepmodule))  -- <-- crash!
tastyminerals commented 6 years ago

Commenting out line 9 in AbstractRecurrent.lua kind of fixes this.

-- assert(torch.isTypeOf(stepmodule, 'nn.Module'), torch.type(self).." expecting nn.Module instance at arg 1")

The stepmodule that gets checked in AbstractRecurrent.lua is not of nn.Module type as the assert requires; it is of nn.StepGRU type. Is this a bug?
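For what it's worth, torch.isTypeOf is supposed to walk the whole inheritance chain rather than compare concrete class names, so an nn.StepGRU (which derives from nn.Module) should pass the check. A minimal sketch of what I would expect on a clean install:

```lua
require 'rnn'

-- torch.isTypeOf checks the inheritance chain, not just the concrete
-- class name, so anything derived from nn.Module should pass.
local rnn = nn.RecGRU(10, 10)
print(torch.type(rnn))                   -- nn.RecGRU
print(torch.isTypeOf(rnn, 'nn.Module'))  -- expected: true on a clean install
```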

nicholas-leonard commented 6 years ago
th> require 'rnn'
th> stepmodule = nn.SeqGRU(100,100)
th> print(torch.isTypeOf(stepmodule, 'nn.Module'))
true    
th> asserted, message = assert(torch.isTypeOf(stepmodule, 'nn.Module'), torch.type(stepmodule).." expecting nn.Module instance at arg 1")
th> asserted, message = assert(torch.isTypeOf(nil, 'nn.Module'), torch.type(nil).." expecting nn.Module instance at arg 1")
[string "asserted, message = assert(torch.isTypeOf(nil..."]:1: nil expecting nn.Module instance at arg 1
stack traceback:
    ...a/build-Darwin-x86_64-Lua53/share/lua/5.3/trepl/init.lua:32: in function <...a/build-Darwin-x86_64-Lua53/share/lua/5.3/trepl/init.lua:25>
    [C]: in function 'assert'
    [string "asserted, message = assert(torch.isTypeOf(nil..."]:1: in main chunk
    [C]: in function 'xpcall'
    ...a/build-Darwin-x86_64-Lua53/share/lua/5.3/trepl/init.lua:186: in function 'trepl'
    ...arwin-x86_64-Lua53/lib/luarocks/rocks/trepl/scm-1/bin/th:248: in main chunk
    [C]: in ?   
tastyminerals commented 6 years ago

Ok, I will test recurrent-language-model.lua now.

tastyminerals commented 6 years ago

recurrent-language-model.lua from rnn/examples also crashes with the same error.

if opt.gru then -- Gated Recurrent Units
  rnn = nn.RecGRU(inputsize, hiddensize)
  print(torch.isTypeOf(rnn, 'nn.Module'))
  print(torch.type(rnn))
true    
nn.RecGRU   
/home/pavel/torch/install/bin/luajit: ...el/torch/install/share/lua/5.1/rnn/AbstractRecurrent.lua:9: nn.Recursor expecting nn.Module instance at arg 1
stack traceback:
    [C]: in function 'assert'
    ...el/torch/install/share/lua/5.1/rnn/AbstractRecurrent.lua:9: in function '__init'
    /home/pavel/torch/install/share/lua/5.1/rnn/Recursor.lua:10: in function '__init'
    /home/pavel/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/pavel/torch/install/share/lua/5.1/torch/init.lua:87>
    [C]: in function 'Recursor'
    /home/pavel/torch/install/share/lua/5.1/rnn/Sequencer.lua:22: in function '__init'
    /home/pavel/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/pavel/torch/install/share/lua/5.1/torch/init.lua:87>
    [C]: in function 'Sequencer'
    recurrent-language-model.lua:126: in main chunk
    [C]: in function 'dofile'
    ...avel/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00405c80

The error occurs on the lm:add(nn.Sequencer(stepmodule)) line. So stepmodule is an nn.Sequential instance containing nn.RecGRU and other network containers. I wonder if it is just a messed-up package install issue due to my constantly switching between the old "Element-Research/rnn" and "torch/rnn"...
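If two copies of the library are installed, the nn.Module class can end up registered twice, and the metatable comparison inside torch.isTypeOf can fail even for legitimate modules. A rough diagnostic sketch (my assumption about the failure mode, using the stock torch class system) that reproduces the exact check AbstractRecurrent.lua performs:

```lua
require 'rnn'

-- If the class registry is healthy, a container of recurrent modules
-- satisfies the same check that AbstractRecurrent.lua line 9 performs.
local stepmodule = nn.Sequential():add(nn.RecGRU(10, 10))
print(torch.type(stepmodule))                   -- nn.Sequential
print(torch.isTypeOf(stepmodule, 'nn.Module'))  -- false here would point to a broken install
```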

tastyminerals commented 6 years ago

The issue was fixed after I removed and reinstalled Torch. It looks like rolling back to Element-Research/rnn messes up the original installation.