This package provides a CUDA implementation for many of the modules in the base neural network package: [nn](https://github.com/torch/nn)
To install:

```bash
git clone https://github.com/torch/cunn
cd cunn
luarocks make rocks/cunn-scm-1.rockspec
```
Simply convert your network model to CUDA by calling `:cuda()`:
```lua
local model = nn.Sequential()
model:add(nn.Linear(2,2))
model:add(nn.LogSoftMax())
model:cuda() -- convert model to CUDA
```
... and similarly for your tensors:
```lua
local input = torch.Tensor(32,2):uniform()
input = input:cuda()
local output = model:forward(input)
```
... or create them directly as `CudaTensor`s:
```lua
local input = torch.CudaTensor(32,2):uniform()
local output = model:forward(input)
```
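To bring results back to the CPU, you can convert the output back to a host tensor. A minimal sketch, using the standard `:float()` conversion (which copies the data into a new `FloatTensor` in main memory):

```lua
local cpuOutput = output:float()  -- copy the result from GPU memory back to main memory
print(cpuOutput)
```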
To run the unit tests:

```bash
luajit -l cunn -e 'cunn.test()'
```
## Performance

For the best performance, allocate your `CudaTensor`s once, at the start of the program, and then simply copy data backwards and forwards between main memory and the existing `CudaTensor`s. For example, rather than writing this:
```lua
require 'cutorch'

local a = torch.CudaTensor(1000):uniform()
for it = 1, 1000 do
  local b = torch.add(a, 1)  -- allocates a fresh CudaTensor on every iteration
end
```
... this will allocate one thousand new `CudaTensor`s, one for each call to `torch.add(a, 1)`.
Instead, use this form:
```lua
require 'cutorch'

local a = torch.CudaTensor(1000):uniform()
local b = torch.CudaTensor(1000):uniform()
for it = 1, 1000 do
  b:add(a, 1)  -- writes the result into the pre-allocated b, in place on the GPU
end
```
In this form, `b` is allocated only once, before the loop. The `b:add(a, 1)` operation then performs the add inside a GPU kernel and stores the result into the original `b` `CudaTensor`. This will generally run noticeably faster. It is also far less likely to eat up arbitrary amounts of memory, and less likely to need frequent calls to `collectgarbage(); collectgarbage()`.
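To illustrate the copy-into-preallocated-tensor pattern described above, here is a minimal sketch that reuses a single GPU buffer across iterations (the tensor names and sizes are illustrative):

```lua
require 'cutorch'

-- Allocate host and device buffers once, up front.
local hostBatch = torch.FloatTensor(32, 2)
local gpuBatch  = torch.CudaTensor(32, 2)

for it = 1, 1000 do
  hostBatch:uniform()       -- produce new data on the CPU
  gpuBatch:copy(hostBatch)  -- reuse the same GPU buffer; no new allocation
  -- ... run GPU work on gpuBatch here, e.g. model:forward(gpuBatch) ...
end
```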
## Benchmarking

Because `cutorch` schedules GPU kernels asynchronously, timing GPU code takes a little care. If you write:
```lua
require 'cutorch'

local a = torch.CudaTensor(1000,1000):uniform()
a:add(1)
```
... the GPU kernel to add 1 will only be scheduled for launch by `a:add(1)`. It might not have completed yet, or even have reached the GPU, by the time `a:add(1)` returns. To get accurate timings, call `cutorch.synchronize()` before each time-check point:
```lua
require 'cutorch'
require 'sys'

local a = torch.CudaTensor(1000,1000):uniform()
cutorch.synchronize()  -- wait for the uniform() kernel to finish before starting the timer
start = sys.tic()
a:add(1)
cutorch.synchronize()  -- wait for the add kernel to finish before reading the timer
print(sys.toc())
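```

The same synchronize-then-time pattern can be wrapped in a small helper. A minimal sketch, assuming you want to time an arbitrary GPU operation (the `timeGpu` name is hypothetical, not part of cutorch):

```lua
require 'cutorch'
require 'sys'

-- Hypothetical helper: brackets fn() with synchronization barriers
-- so that sys.toc() measures only the work done inside fn.
local function timeGpu(fn)
  cutorch.synchronize()  -- drain any previously queued GPU work
  sys.tic()
  fn()
  cutorch.synchronize()  -- ensure fn's kernels have actually completed
  return sys.toc()
end

local a = torch.CudaTensor(1000,1000):uniform()
print(timeGpu(function() a:add(1) end))
```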