nn.GPU is incompatible with flattening parameters #1112

buttomnutstoast commented 7 years ago

I used nn.GPU to place parts of a model on different GPUs. Things worked fine until I flattened the parameters and gradParameters for optimization.

Here is my code:

require 'nn'
require 'cunn'

-- place each Linear layer on its own GPU
local model = nn.Sequential()
    :add(nn.GPU(nn.Linear(10, 5), 1))
    :add(nn.GPU(nn.Linear(5, 2), 2))
model:cuda()

-- flatten all parameters into one contiguous tensor
local param, gradParam = model:getParameters()

local input = torch.Tensor(3, 10):uniform():cuda()
local output = model:forward(input)  -- the assertion below fires here

The error is something like: Assertion `THCTensor_(checkGPU)(state, 4, r_, t, m1, m2)' failed. at /tmp/luarocks_cutorch-scm-1-5710/cutorch/lib/THC/generic/THCTensorMathBlas.cu:195

Is there a way to fix this? Otherwise I will have to optimize without flattening, along the lines of the sketch below.
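
A minimal sketch of that fallback, assuming a plain SGD step applied per parameter tensor, with each update run on that tensor's own device via cutorch.withDevice and tensor:getDevice():

-- fallback: no flattening; update each per-layer tensor in place
local ps, gs = model:parameters()  -- parallel lists of params and gradParams
local lr = 0.01
for i = 1, #ps do
   cutorch.withDevice(ps[i]:getDevice(), function()
      ps[i]:add(-lr, gs[i])  -- x <- x - lr * dx
   end)
end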

soumith commented 7 years ago

nn.GPU can't work with flattening: the two modules' parameters live on two different GPUs, so their memory regions can't be flattened into a single contiguous tensor.
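
A middle ground, if you still want flat tensors for the optimizer, is to flatten each nn.GPU submodule separately, so that each flat storage stays entirely on one GPU. A minimal sketch, assuming nn.GPU exposes its device index as the `device` field (as in GPU.lua) and using a plain SGD update for illustration:

-- flatten per submodule, on that submodule's own device,
-- so every flat tensor lives on a single GPU
local params, gradParams = {}, {}
for i, m in ipairs(model.modules) do
   cutorch.withDevice(m.device, function()
      params[i], gradParams[i] = m:getParameters()
   end)
end

-- ... forward/backward as usual, then update each chunk on its own device
local lr = 0.01
for i, m in ipairs(model.modules) do
   cutorch.withDevice(m.device, function()
      params[i]:add(-lr, gradParams[i])
   end)
end

Any optimizer state then has to be kept per chunk as well; the cost is one update call per device instead of a single global call.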