torch/cunn

CUDA backend for the Neural Network Package

This package provides a CUDA implementation for many of the modules in the base nn package.

Installing from source

```bash
git clone https://github.com/torch/cunn
cd cunn
luarocks make rocks/cunn-scm-1.rockspec
```
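
Once installed, a quick sanity check (a minimal sketch; it assumes a CUDA-capable GPU and a working cutorch install) is to load the package and allocate a small tensor on the device:

```lua
-- minimal sanity check: load cunn (which pulls in cutorch) and touch the GPU
require 'cunn'
print(cutorch.getDeviceCount())      -- number of visible CUDA devices
print(torch.CudaTensor(3):fill(1))   -- small tensor allocated on the GPU
```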

To use

Simply convert your network model to CUDA by calling `:cuda()`:

```lua
require 'cunn'

local model = nn.Sequential()
model:add(nn.Linear(2,2))
model:add(nn.LogSoftMax())

model:cuda()  -- convert model to CUDA
```

... and similarly for your tensors:

```lua
local input = torch.Tensor(32,2):uniform()
input = input:cuda()
local output = model:forward(input)
```

... or create them directly as `CudaTensor`s:

```lua
local input = torch.CudaTensor(32,2):uniform()
local output = model:forward(input)
```
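
Criterions convert the same way. Below is a minimal sketch of a full GPU training step; the `ClassNLLCriterion`, toy targets, and learning rate of 0.01 are illustrative assumptions, not part of the original example:

```lua
require 'cunn'

-- illustrative sketch: model, criterion, and data all live on the GPU
local model = nn.Sequential()
model:add(nn.Linear(2,2))
model:add(nn.LogSoftMax())
model:cuda()

local criterion = nn.ClassNLLCriterion():cuda()     -- criterions also have :cuda()

local input  = torch.CudaTensor(32,2):uniform()
local target = torch.Tensor(32):random(1,2):cuda()  -- toy class labels in {1,2}

-- forward / backward / parameter update
local output = model:forward(input)
local loss   = criterion:forward(output, target)
model:zeroGradParameters()
model:backward(input, criterion:backward(output, target))
model:updateParameters(0.01)                        -- plain SGD step, assumed learning rate
```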

To run unit-tests

```bash
luajit -l cunn -e 'cunn.test()'
```

GPU Training Concepts

Performance

Avoid allocating new tensors inside tight loops. For example:

```lua
local a = torch.CudaTensor(1000):uniform()
for it=1,1000 do
  local b = torch.add(a, 1)
end
```

... this will allocate one thousand new `CudaTensor`s, one for each call to `torch.add(a, 1)`.

Instead, use this form:

```lua
require 'cutorch'

local a = torch.CudaTensor(1000):uniform()
local b = torch.CudaTensor(1000):uniform()
for it=1,1000 do
  b:add(a, 1)
end
```

In this form, `b` is allocated only once, before the loop. The `b:add(a, 1)` operation then performs the addition inside the GPU kernel and stores the result in the original `b` CudaTensor. This generally runs noticeably faster, is far less likely to eat up arbitrary amounts of memory, and reduces the need for frequent `collectgarbage(); collectgarbage()` calls.

Benchmarking

CUDA kernel launches are asynchronous, so call `cutorch.synchronize()` before starting and before stopping the timer to get accurate measurements:

```lua
local a = torch.CudaTensor(1000,1000):uniform()
cutorch.synchronize()
start = sys.tic()
a:add(1)
cutorch.synchronize()
print(sys.toc())
```
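
To reduce noise from timing a single launch, one option is to time many iterations and synchronize only around the whole timed region. This is a sketch under the same assumptions; the iteration count is arbitrary:

```lua
require 'cutorch'
require 'sys'

local a = torch.CudaTensor(1000,1000):uniform()
local iters = 100                    -- arbitrary number of repetitions

cutorch.synchronize()                -- make sure prior GPU work is finished
sys.tic()
for i = 1, iters do
  a:add(1)                           -- the kernel being measured
end
cutorch.synchronize()                -- wait for all queued kernels to finish
print(sys.toc() / iters)             -- average seconds per call
```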