torch / cutorch

A CUDA backend for Torch7

cutorch.withDevice not using multiple gpus #812

Open JJanai opened 6 years ago

JJanai commented 6 years ago

Running the following script, which instantiates a model across multiple GPUs, shows that only one GPU is actually used while the others remain untouched.

-- free memory (bytes) per device before building the model
for i = 1, cutorch.getDeviceCount() do print('GPU' .. i .. ': ' .. cutorch.getMemoryUsage(i)) end

GPU1: 5619005952 
GPU2: 5757796352
GPU3: 5757796352
GPU4: 5757796352

model = nn.Sequential()
for i = 1, cutorch.getDeviceCount() do
    -- add one convolution + ReLU per device; withDevice makes GPU i current while they are added
    cutorch.withDevice(i, function() model:add(nn.SpatialConvolution(3, 3, 3, 5)) end)
    cutorch.withDevice(i, function() model:add(nn.ReLU(true)) end)
end

model = model:cuda()

-- free memory (bytes) per device after converting the model to CUDA
for i = 1, cutorch.getDeviceCount() do print('GPU' .. i .. ': ' .. cutorch.getMemoryUsage(i)) end

GPU1: 5618993664      
GPU2: 5757796352      
GPU3: 5757796352      
GPU4: 5757796352      

As we can see, only GPU1 is used to store the model, even though parts of the sequential model were defined on all four GPUs via cutorch.withDevice. Am I missing something here?
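For reference, here is a variant I would have expected to spread the parameters across the devices. This is only a sketch, assuming that CUDA storage is allocated on whichever device is current at the moment :cuda() is called on a module (rather than at the moment the module is added inside withDevice):

model = nn.Sequential()
for i = 1, cutorch.getDeviceCount() do
    cutorch.withDevice(i, function()
        -- convert each module while device i is current, so its weights
        -- would (presumably) be allocated on GPU i rather than on the default device
        model:add(nn.SpatialConvolution(3, 3, 3, 5):cuda())
        model:add(nn.ReLU(true):cuda())
    end)
end

Even if that places the storage correctly, I am not sure a plain nn.Sequential can run forward/backward across devices without wrapping each piece in something like nn.GPU, so clarification on the intended use of cutorch.withDevice would be appreciated.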