e-lab / Torch7-profiling

State Of The Art deep neural network models
37 stars 11 forks

Spatial breaks on TK1 #2

Closed Atcold closed 9 years ago

Atcold commented 9 years ago
ubuntu@tegra-ubuntu ~/Downloads/Torch7-profiling [master*]$ th profile-model.lua -m models/td-net-large.lua -h 1280 -w 720
Building Net Large model from model...

Convert network to cpu spatial  
/usr/local/bin/luajit: ./src/spatial.lua:69: attempt to call method 'cuda' (a nil value)
stack traceback:
    ./src/spatial.lua:69: in function 'net_spatial'
    profile-model.lua:67: in main chunk
    [C]: in function 'dofile'
    /usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x0000d055
bmartini commented 9 years ago

The problem was noticed a few weeks ago; there was a fix, but we never finished testing it or pushing it.

The problem also exists in apps/generic. Basically, the spatial module should not have to deal with CUDA at all: it should only convert the CPU network, and the CUDA conversion should happen later.
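A minimal sketch of the separation described above (function and option names here are hypothetical, not the actual `src/spatial.lua` code): the spatial conversion operates purely on CPU modules and never calls `:cuda()` itself; the caller moves the network to the GPU afterwards, only where `cutorch`/`cunn` are actually available.

```lua
-- Hypothetical sketch: keep the spatial conversion CUDA-free.
-- net_spatial() only rewrites CPU modules (e.g. nn.Linear -> spatial
-- equivalents) and returns a plain CPU (FloatTensor) network.
local function net_spatial(net)
   local spatial_net = nn.Sequential()
   for _, layer in ipairs(net.modules) do
      -- to_spatial() stands in for the per-layer rewrite logic
      spatial_net:add(to_spatial(layer))
   end
   return spatial_net  -- no :cuda() call anywhere in here
end

-- The CUDA conversion happens later, in the caller:
local net = net_spatial(cpu_net)
if opt.cuda then     -- opt.cuda is an assumed command-line flag
   require 'cunn'
   net = net:cuda()
end
```

With this split, `net_spatial` works on machines without CUDA support (such as a TK1 where `cutorch` fails to load), which is exactly the failure mode in the traceback above.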


Atcold commented 9 years ago

@vgokhale said I could track down the problem and fix it myself. Since you have already done so, though, I think it would take you less time.

Atcold commented 9 years ago

Do you think someone (@vgokhale or you) can get this fixed?

vgokhale commented 9 years ago

Fixed.

Atcold commented 9 years ago

@vgokhale thank you.