Closed lolz0r closed 8 years ago
Hi @lolz0r ,
Just to be sure that the problem comes from using mini-batches, did you first check that your model works fine without mini-batches? E.g. the 'add' problem can appear if you use an even filter size in ConvLSTM, since the padding will not be done properly; hence only odd filter sizes should be used. Otherwise, your input looks fine, so it should work with mini-batches. If you post a sample model and the eval function used for training, I will have a look and let you know if I spot anything.
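The size mismatch can be illustrated with the standard convolution output-size arithmetic. This is a simplified sketch in plain Python, not the ConvLSTM code itself; `conv_out_size` is a hypothetical helper:

```python
# Spatial output size of a convolution with symmetric padding:
#   out = floor((in + 2*pad - k) / stride) + 1
# "Same" padding uses pad = floor((k - 1) / 2), which preserves the
# input size only when the kernel size k is odd.

def conv_out_size(in_size, k, stride=1):
    pad = (k - 1) // 2                       # symmetric "same" padding
    return (in_size + 2 * pad - k) // stride + 1

print(conv_out_size(50, 7))   # odd kernel:  prints 50, size preserved
print(conv_out_size(50, 8))   # even kernel: prints 49, shrinks by one
```

With an even kernel the recurrent tensors end up one pixel smaller than the input path, so the 'add' inside the gates sees tensors whose sizes do not match.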
cheers.
Hi @viorik, thank you for the support. It turns out that I was using an even value for km. I made a small pull request that should warn others about this pitfall in the future.
https://github.com/viorik/ConvLSTM/pull/9
Best wishes, Michael
Hello, I am attempting to use batches with Element Research's rnn lib in conjunction with yours. The rnn lib requires that, when the Sequencer is used, tables of tensors be passed to the Sequencer's forward() method. When wrapping a ConvLSTM in a Sequencer, the following error occurs:
CAddTable.lua:16: bad argument #2 to 'add' (sizes do not match ...)
The input to the Sequencer is a Lua table of 4D tensors, { tensor(batch, color, height, width), tensor(batch, color, height, width), ... }, which is the standard format for the rnn lib.
I ensured that I specified a batchSize when creating the ConvLSTM.
An interesting note: the ConvLSTM runs correctly in batch mode when the sequence length is 1 (only one entry in the input Lua table instead of many)!
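That seq-length-1 behavior fits a spatial-size mismatch in the recurrent path (e.g. from an even kernel size): if the zero initial state is lazily sized so that the first step's gate sum lines up, the mismatch can only surface once the shrunken output is fed back at step 2. A hedged sketch of the size bookkeeping in plain Python, with that lazy-sizing assumption made explicit (the helper names are hypothetical):

```python
def conv_out(in_size, k, stride=1):
    # Output size of a convolution with symmetric pad = floor((k-1)/2).
    pad = (k - 1) // 2
    return (in_size + 2 * pad - k) // stride + 1

def gate_sum_sizes(seq_len, in_size, k):
    """Track the spatial sizes added inside the gates at each time step.

    Assumes the zero initial state is sized so that step 1 always lines
    up, which would explain why a sequence of length 1 runs fine.
    """
    prev = in_size                  # assumed size of the zero initial state
    for t in range(1, seq_len + 1):
        from_input = conv_out(in_size, k)    # input-to-hidden path
        from_hidden = conv_out(prev, k)      # hidden-to-hidden path
        if from_input != from_hidden:
            return "mismatch at step %d: %d vs %d" % (t, from_input, from_hidden)
        prev = from_input           # output is fed back at the next step
    return "ok"

print(gate_sum_sizes(1, 50, 8))   # even kernel, one step: ok
print(gate_sum_sizes(3, 50, 8))   # even kernel, longer sequence: fails at step 2
print(gate_sum_sizes(3, 50, 7))   # odd kernel: ok at every step
```

Under these assumptions an even kernel passes a single step but breaks as soon as the sequence is longer, while an odd kernel preserves the size at every step.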
Please let me know if you have any ideas! Best wishes, Michael