gxyEPFL opened this issue 8 years ago

Hi, I am new to Torch and I want to do unsupervised learning on images. The input images are 38×78 grayscale, and the training set is a CUDA tensor of size (1800, 38, 78). For the convolution layer, the kernel size is (7, 7), padding is 0, and the stride is 1 (the default). Code as follows:

`
`

At the end of the code, in updateGradInput, I get a size mismatch problem:

```
/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: bad argument #1 to 'copy' (sizes do not match at torch/extra/cutorch/lib/THC/generic/THCTensorCopy.cu:10)
stack traceback:
  [C]: in function 'copy'
  ...ch/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: in function 'updateOutput'
  ...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:51: in function 'f'
  .../deguoxi/torch/install/share/lua/5.1/optim/fista.lua:83: in function 'FistaLS'
  ...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
  .../torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
  main.lua:182: in main chunk
```

I have checked that the inputs and targets have equal sizes. Any clue how to solve this problem?
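Since the mismatch is raised inside WeightedMSECriterion, the tensors worth inspecting are the reconstruction and the target it is compared against. A minimal debugging sketch; the variable names `input` and `target` are assumptions about the demo's training loop:

```lua
-- Hypothetical sanity check: print what the loss will actually compare.
print('input:  ', input:size())   -- e.g. 1 x 38 x 78
print('target: ', target:size())  -- must match the decoder's reconstruction size
```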
@gxyEPFL why did you set pad=0? Does it work with the original configuration? Just trying to localise the issue.
@viorik yes, if I turn the pad back to kernelsize/2, it works. Why did I set pad = 0? I want to implement the paper http://soumith.ch/pedestrian-cvpr-13.pdf. The input image is 38×78; with kernel size (7, 7), padding 0, and stride 1, each output feature map is (32, 72), which looks like a good size for the later max pooling.
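Those numbers check out: with no padding and stride 1, the output size is input - kernel + 1 per dimension. A minimal sketch to verify them with standard nn (the 16 output planes are an arbitrary choice for illustration):

```lua
require 'nn'

-- 7x7 kernel, stride 1, pad 0: (nInputPlane, nOutputPlane, kW, kH, dW, dH, padW, padH)
local conv = nn.SpatialConvolution(1, 16, 7, 7, 1, 1, 0, 0)
local img = torch.randn(1, 38, 78)  -- (plane, height, width)
print(conv:forward(img):size())     -- 16 x 32 x 72: 38-7+1 = 32, 78-7+1 = 72
```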
I see. When using pad=0, I'd say the problem comes from the construction of SpatialConvFistaL1 at line 118 in my demo code, specifically the 4th and 5th parameters. Right now they are given the size of the input (W, H), which is used to compute the weights for the loss layer. Instead, you should replace those values with the size you want for the output, so that the loss weights have the right size.
Let me know if you get it to work.
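In other words, the change would look roughly like this. A sketch only: the unsup-style constructor signature SpatialConvFistaL1(decoder, kw, kh, iw, ih, lambda) and the `decoder`/`lambda` variables are assumptions, not taken from the demo code itself:

```lua
-- Before (hypothetical): 4th/5th args = input size, so the loss weights
-- come out 78x38 while the feature maps are 72x32.
-- fista = unsupgpu.SpatialConvFistaL1(decoder, 7, 7, 78, 38, lambda)

-- After (hypothetical): pass the output size of the pad=0, stride=1,
-- 7x7 convolution instead: 78-7+1 = 72 and 38-7+1 = 32.
fista = unsupgpu.SpatialConvFistaL1(decoder, 7, 7, 72, 32, lambda)
```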
@viorik hi again, I tried changing the 4th and 5th parameters of SpatialConvFistaL1, i.e. the input of the decoder, to (32, 72) in my case, but it's still a size mismatch problem: /unsupgpu/FistaL1.lua:116: bad argument #1. Sorry...
Yes, several things probably need to be changed there. Did you un-pad the targets to have the reduced size? The targets should not have the same size as the input. I see two options: either you look carefully into the code to understand better how it works and how Lua/Torch works in general (this was actually my first project in Lua), or the hacky solution: add a Narrow layer on the feature maps to keep only the central part, i.e. you would have to use Narrow twice, once to crop out the undesired columns and once for the undesired rows, or the other way around; see the sketch below.
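The cropping part could look like this; a minimal sketch, where the (plane, height, width) layout and the 38×78 to 32×72 sizes come from this thread and everything else is an assumption:

```lua
require 'nn'

-- nn.Narrow(dim, offset, length) keeps `length` entries of dimension `dim`
-- starting at `offset`. Dropping 6 rows and 6 columns symmetrically keeps
-- the central 32x72 region of a 38x78 map.
local crop = nn.Sequential()
crop:add(nn.Narrow(2, 4, 32))  -- rows: keep 32 of 38 (drop 3 on each side)
crop:add(nn.Narrow(3, 4, 72))  -- cols: keep 72 of 78 (drop 3 on each side)

local target = torch.randn(1, 38, 78)  -- (plane, height, width)
print(crop:forward(target):size())     -- 1 x 32 x 72
```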
@viorik, I am terribly sorry, I may have misunderstood from the beginning. The targets should not have the same size as the input? Why? Could you explain a little, please? For an autoencoder, isn't it like (encoder input (38, 78), encoder output (32, 72)) => (decoder input (32, 72), decoder output (38, 78))? See https://github.com/koraykv/unsup/blob/master/AutoEncoder.lua, code lines 36-44. I once thought the input and target in unsupgpu are just the same; that is what is called an autoencoder... And when I run the demo code, when training the model and loading the dataset, the input and target are the same...
Sorry, maybe what I said was not correct. The output of the encoder is smaller than the input because you perform the convolution without padding. In my mind, since the decoder is only a conv layer, I didn't see how the size of the feature maps could increase to get back to the original size, hence I thought you would have to crop the output. But probably there is padding done somehow there. Yes, an autoencoder reconstructs its input, but there can be small dimension variations due to padding and such...
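One standard way a conv-only decoder can grow the maps back is a full (transposed) convolution, which adds kernel - 1 per dimension at stride 1. A minimal sketch with standard nn (whether unsupgpu's decoder actually works this way is an assumption):

```lua
require 'nn'

-- Full convolution: output = (input - 1)*stride - 2*pad + kernel.
local deconv = nn.SpatialFullConvolution(16, 1, 7, 7, 1, 1, 0, 0)
local code = torch.randn(16, 32, 72)  -- encoder feature maps
print(deconv:forward(code):size())    -- 1 x 38 x 78: (32-1)+7 = 38, (72-1)+7 = 78
```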
@viorik, good afternoon~~ Could you kindly tell me, for the variable self.code at line 118 of FistaL1.lua, where its size is set?
@gxyEPFL self.code is resized in SpatialConvFistaL1, lines 43-47. In that file you first create the FistaL1 object through parent.__init, which declares self.code at line 118, as you observed.
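A quick way to confirm the size self.code ends up with is to print it once the module has processed a sample; a minimal sketch, assuming the SpatialConvFistaL1 instance is called `fista`:

```lua
-- Hypothetical check, e.g. inside the training loop after a forward pass:
print(fista.code:size())  -- expect nFeatureMaps x 32 x 72 for this setup
```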