viorik / unsupgpu

cuda implementation of predictive sparse decomposition autoencoder
5 stars · 7 forks

/nn/WeightedMSECriterion.lua:10: bad argument #1 to 'copy' #8

Open gxyEPFL opened 8 years ago

gxyEPFL commented 8 years ago

Hi, I am new to Torch and I want to do unsupervised learning on images. The input images are 38×78 grayscale, and the training set is a CUDA tensor of size 1800×38×78. For the convolution layer, the kernel size is 7×7, padding is 0, and the stride is 1 (the default). Code as follows:

-- for all models:
cmd:option('-model', 'conv-psd', 'auto-encoder class: linear | linear-psd | conv | conv-psd')
cmd:option('-inputsizeX', 38, 'sizeX of each input patch')
cmd:option('-inputsizeY', 78, 'sizeY of each input patch')
cmd:option('-nfiltersin', 1, 'number of input convolutional filters')
cmd:option('-nfiltersout', 16, 'number of output convolutional filters')
cmd:option('-lambda', 1, 'sparsity coefficient')
cmd:option('-beta', 1, 'prediction error coefficient')
cmd:option('-eta', 2e-3, 'learning rate')
cmd:option('-batchsize', 1, 'batch size')
cmd:option('-etadecay', 1e-5, 'learning rate decay')
cmd:option('-momentum', 0.9, 'gradient momentum')
cmd:option('-maxiter', 10000, 'max number of updates')

-- use hessian information for training:
cmd:option('-hessian', true, 'compute diagonal hessian coefficients to condition learning rates')
cmd:option('-hessiansamples', 500, 'number of samples to use to estimate hessian')
cmd:option('-hessianinterval', 10000, 'compute diagonal hessian coefs at every this many samples')
cmd:option('-minhessian', 0.02, 'min hessian to avoid extreme speed up')
cmd:option('-maxhessian', 500, 'max hessian to avoid extreme slow down')

-- for conv models:
cmd:option('-kernelsize', 7, 'size of convolutional kernels')

-- logging:
cmd:option('-statinterval', 5000, 'interval for saving stats and models')
cmd:option('-v', false, 'be verbose')
cmd:option('-display', false, 'display stuff')
cmd:option('-wcar', '', 'additional flag to differentiate this run')
cmd:text()

params = cmd:parse(arg)

rundir = cmd:string('psd', params, {dir=true})
params.rundir = params.dir .. '/' .. rundir

if paths.dirp(params.rundir) then
   os.execute('rm -r ' .. params.rundir)
end
os.execute('mkdir -p ' .. params.rundir)
cmd:addTime('psd')
cmd:log(params.rundir .. '/log.txt', params)

torch.setdefaulttensortype('torch.FloatTensor')

cutorch.setDevice(1) -- by default, use GPU 1
torch.manualSeed(params.seed)
local statinterval = torch.floor(params.statinterval / params.batchsize)*params.batchsize
local hessianinterval = torch.floor(params.hessianinterval / params.batchsize)*params.batchsize
print (statinterval)
print (hessianinterval)

--torch.manualSeed(params.seed)

torch.setnumthreads(params.threads)

----------------------------------------------------------------------
-- load data
dataset = torch.load('/torch/extra/unsupgpu/src/train.t7')

----------------------------------------------------------------------
-- create model
   local conntable = nn.tables.full(params.nfiltersin, params.nfiltersout)
   local kw, kh = params.kernelsize, params.kernelsize
   local W,H = params.inputsizeX, params.inputsizeY
   --local padw, padh = torch.floor(params.kernelsize/2.0), torch.floor(params.kernelsize/2.0)
   local padw, padh = 0, 0
   local batchSize = params.batchsize or 1
   -- connection table:
   local decodertable = conntable:clone()
   decodertable[{ {},1 }] = conntable[{ {},2 }]
   decodertable[{ {},2 }] = conntable[{ {},1 }] 
   local outputFeatures = conntable[{ {},2 }]:max()
   local inputFeatures = conntable[{ {},1 }]:max()

   -- encoder:
   encoder = nn.Sequential()
   encoder:add(nn.SpatialConvolution(inputFeatures,outputFeatures, kw, kh, 1, 1, padw, padh))
   encoder:add(nn.Tanh())

   encoder:add(nn.Diag(outputFeatures))

   -- decoder is L1 solution:
   print(kw, kh, W, H, padw, padh, params.lambda, batchSize) 
   print(decodertable)
   decoder = unsupgpu.SpatialConvFistaL1(decodertable, kw, kh, W, H, padw, padh, params.lambda, batchSize)

   -- PSD autoencoder
   module = unsupgpu.PSD(encoder, decoder, params.beta)

   module:cuda()
   -- convert dataset to convolutional (returns 1xKxK tensors (3D), instead of K*K (1D))
   --dataset:conv()

   -- verbose
   print('==> constructed convolutional predictive sparse decomposition (PSD) auto-encoder')

----------------------------------------------------------------------
-- trainable parameters
--

-- are we using the hessian?
if params.hessian then
   nn.hessian.enable()
   module:initDiagHessianParameters()
end

-- get all parameters
x,dl_dx,ddl_ddx = module:getParameters()

----------------------------------------------------------------------
-- train model
--

print('==> training model')

local avTrainingError = torch.FloatTensor(math.ceil(params.maxiter/params.statinterval)):zero()
local err = 0
local iter = 0

for t = 1,params.maxiter,params.batchsize do

   --------------------------------------------------------------------
   -- update diagonal hessian parameters
   --
   if params.hessian and math.fmod(t , hessianinterval) == 1 then
      -- some extra vars:
      local batchsize = params.batchsize
      local hessiansamples = params.hessiansamples
      local minhessian = params.minhessian
      local maxhessian = params.maxhessian
      local ddl_ddx_avg = ddl_ddx:clone():zero()
      etas = etas or ddl_ddx:clone()

      print('==> estimating diagonal hessian elements')

      for ih = 1,hessiansamples,batchsize do
        print ('==>')
        print (ih)
        local inputs  = torch.Tensor(params.batchsize,params.nfiltersin,params.inputsizeX,params.inputsizeY)
        local targets = torch.Tensor(params.batchsize,params.nfiltersin,params.inputsizeX,params.inputsizeY)
        for i = ih,ih+batchsize-1 do
          -- next
          local input  = dataset.data[i]
          --input:resize(torch.CudaTensor(1,3*96,160))
          local target = dataset.data[i]
          --target:resize(torch.CudaTensor(1,3*96,160))
          inputs[{i-ih+1,{},{},{}}] = input
          targets[{i-ih+1,{},{},{}}] = target
        end

        local inputs_ = inputs:cuda()
        local targets_ = targets:cuda()
        print (#inputs_)
        print (#targets_)
        print ("==> equal")
        print (torch.all(torch.eq(inputs_, targets_)))
        --print (torch.eq(inputs_, targets_))
        --module:updateGradInput(inputs_, targets_)

At the end of this code, when updateGradInput is called, I get a size mismatch error:

/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: bad argument #1 to 'copy' (sizes do not match at torch/extra/cutorch/lib/THC/generic/THCTensorCopy.cu:10)
stack traceback:
   [C]: in function 'copy'
   ...ch/install/share/lua/5.1/nn/WeightedMSECriterion.lua:10: in function 'updateOutput'
   ...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:51: in function 'f'
   .../deguoxi/torch/install/share/lua/5.1/optim/fista.lua:83: in function 'FistaLS'
   ...oxi/torch/install/share/lua/5.1/unsupgpu/FistaL1.lua:119: in function 'updateOutput'
   .../torch/install/share/lua/5.1/unsupgpu/psd.lua:52: in function 'updateOutput'
   main.lua:182: in main chunk

I checked that the sizes of the inputs and targets are equal. Any clue how to solve this problem?

viorik commented 8 years ago

@gxyEPFL why did you set pad=0? Does it work with the original configuration? Just trying to localise the issue.

gxyEPFL commented 8 years ago

@viorik yes, if I turn the pad back to kernelsize/2, it works. Why did I set pad = 0? I want to implement the paper http://soumith.ch/pedestrian-cvpr-13.pdf: the input image is 38×78, and with kernel size (7, 7), padding 0, and stride 1, each output feature map is (32, 72), which looks like a good size for later max pooling.
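For reference, the sizes quoted here follow the standard convolution arithmetic, out = in - kernel + 2*pad + 1 for stride 1. A quick sketch (the helper name is made up for illustration):

```lua
-- Hypothetical helper illustrating stride-1 convolution output size:
local function convOutSize(inSize, kernel, pad)
   return inSize - kernel + 2 * pad + 1
end

print(convOutSize(38, 7, 0), convOutSize(78, 7, 0)) -- 32  72 (pad = 0)
print(convOutSize(38, 7, 3), convOutSize(78, 7, 3)) -- 38  78 (pad = floor(7/2) preserves size)
```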

viorik commented 8 years ago

I see. When using pad=0, I'd say the problem comes from the definition of SpatialConvFistaL1, line 118 in my demo code, 4th and 5th parameters. Currently they are given the size of the input (W, H), which is used to compute the weight for the loss layer. Instead, you should pass the size you want for the output, so that the loss weights have the right size.
Let me know if you get it to work.
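A minimal sketch of that change against the demo code above (assuming the option names from the pasted snippet; with pad = 0 the conv output is 32×72 rather than the 38×78 input):

```lua
-- Pass the *output* size of the un-padded convolution as the 4th and 5th
-- arguments, so the loss weights are allocated at that size:
local outW = params.inputsizeX - params.kernelsize + 1 -- 38 - 7 + 1 = 32
local outH = params.inputsizeY - params.kernelsize + 1 -- 78 - 7 + 1 = 72
decoder = unsupgpu.SpatialConvFistaL1(decodertable, kw, kh, outW, outH,
                                      padw, padh, params.lambda, batchSize)
```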

gxyEPFL commented 8 years ago

@viorik hi again, I tried changing the 4th and 5th parameters of SpatialConvFistaL1 to the decoder input size, (32, 72) in my case, but I still get a size mismatch: /unsupgpu/FistaL1.lua:116: bad argument #1. Sorry...

viorik commented 8 years ago

Yes, several things probably need to change there. Did you un-pad the targets to the reduced size? Targets should not have the same size as the input. I see two options: either you look carefully into the code to understand how it works and how Lua/Torch works in general (this was actually my first project in Lua), or the hacky solution: add a Narrow layer on the feature maps to keep only the central part. You would have to use Narrow twice: once to crop out the undesired columns, then the undesired rows, or the other way around.
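The hacky option could look roughly like this (a sketch, not a tested fix, assuming 1×38×78 targets and a 7×7 kernel, so 3 border pixels are dropped on each side; nn.Narrow(dim, index, length) keeps `length` elements along dimension `dim` starting at `index`):

```lua
-- Crop a 1 x 38 x 78 tensor down to its central 1 x 32 x 72 region,
-- matching the un-padded 7x7 convolution output: Narrow once per spatial dim.
local crop = nn.Sequential()
crop:add(nn.Narrow(2, 4, 32)) -- rows: keep indices 4..35 of 38
crop:add(nn.Narrow(3, 4, 72)) -- cols: keep indices 4..75 of 78
```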

gxyEPFL commented 8 years ago

@viorik, I am terribly sorry. I may have misunderstood from the beginning.

Targets should not have the same size as input?

Why? Could you explain a little, thanks. For an autoencoder, isn't it like (encoder input (38, 78), encoder output (32, 72)) => (decoder input (32, 72), decoder output (38, 78))? See [https://github.com/koraykv/unsup/blob/master/AutoEncoder.lua], lines 36-44.

I once thought the input and target in unsupgpu were just the same; that's what makes it an autoencoder... And when I run the demo code and load the dataset for training, the input and target are the same...

viorik commented 8 years ago

Sorry, maybe what I said was not correct. The output of the encoder is smaller than the input because you perform convolution without padding. In my mind, since the decoder is only a conv layer, I didn't see how the size of the feature maps could grow back to the original size, hence I thought you would have to crop the output. But probably padding is done somewhere there. An autoencoder does reconstruct its input, yes, but there can be small dimension variations due to padding and such...

gxyEPFL commented 8 years ago

@viorik, good afternoon~~ Could you kindly tell me where the size of the variable self.code (FistaL1.lua, line 118) is set?

viorik commented 8 years ago

@gxyEPFL self.code is resized in SpatialConvFistaL1, lines 43-47. In that file, the FistaL1 object is first created through parent.__init, which declares self.code at line 118, as you observed.