torch / demos

Demos and tutorials around Torch7.

person-detector: PyramidUnPacker failed for 100*100 object detection #15

Closed by laotao 9 years ago

laotao commented 9 years ago

Hi,

I'm reusing person-detector for a new object detection problem. The object size is 100 by 100 instead of 46 by 46. I've replaced the 46's in model.lua and PyramidUnpacker.lua with 100's. The training process seems OK and I've got the model.net.

However, when I run rundemo.lua, the following error occurs:

```
/usr/local/bin/luajit: ./PyramidUnPacker.lua:97: bad argument #4 to 'narrow' (out of range)
stack traceback:
	[C]: in function 'narrow'
	./PyramidUnPacker.lua:97: in function 'forward'
	rundemo.lua:108: in function 'process'
	rundemo.lua:164: in main chunk
	[C]: in function 'dofile'
	/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:129: in main chunk
	[C]: at 0x00404b50
```

The hyper-parameters I used in model.lua:

```lua
-- input dimensions:
local nfeats = 3
local width = 100
local height = 100

-- hidden units, filter sizes (for ConvNet only):
local nstates = {32,64,128,128}
local filtsize = {11,11,10}
local poolsize1 = 3
local poolsize = 2
```

Would anyone please have a look at this?

Thanks Tao

Aysegul commented 9 years ago

I guess you are using two different pooling layers, with sizes 3 and 2. Then your self.step_width and self.step_height should both be changed to 6 (3 x 2), in both PyramidPacker and PyramidUnPacker.
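In other words, the packer/unpacker step must equal the network's total spatial downsampling factor, which is the product of all pooling strides. A minimal sketch of the arithmetic (the variable names mirror the model.lua snippet above; the self.step_* assignments are shown as comments since the surrounding class code lives in PyramidPacker.lua / PyramidUnPacker.lua):

```lua
-- pooling stages from model.lua
local poolsize1 = 3   -- first pooling stage
local poolsize  = 2   -- second pooling stage

-- total downsampling of the ConvNet = product of pooling strides
local step = poolsize1 * poolsize   -- 3 * 2 = 6

-- both fields, in both PyramidPacker and PyramidUnPacker,
-- must agree with this value:
-- self.step_width  = step
-- self.step_height = step
```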

laotao commented 9 years ago

@Aysegul Thanks for your prompt help. It works now!

laotao commented 9 years ago

Sorry @Aysegul, there seems to be another problem. The detected object seems to be squeezed into the upper left corner (see the attached image below). Is this an error caused by my model or by the packing-unpacking process?

(attached image: slide)

Thanks Tao

laotao commented 9 years ago

@Aysegul I figured this out now. 'network_sub' in rundemo.lua also needs to be changed to 6.
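For anyone hitting the same problem, here is a consolidated sketch of the changes this thread converged on when moving from 46x46 to 100x100 objects (the exact field names in the demo files may differ slightly; treat this as a checklist, not verified code):

```lua
-- Changes needed when retraining person-detector for 100x100 objects
-- with pooling stages of 3 and 2 (total downsampling = 3 * 2 = 6):

-- model.lua and PyramidUnPacker.lua: replace every 46 with 100
local width, height = 100, 100

-- PyramidPacker.lua and PyramidUnPacker.lua:
-- self.step_width  = 6
-- self.step_height = 6

-- rundemo.lua:
-- network_sub = 6
```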

Thanks!