I used nn.GPU to place parts of the model on different GPUs. Things were going well until I flattened the parameters and gradParameters for optimization.
Here is my code:
local model = nn.Sequential()
:add(nn.GPU(nn.Linear(10,5),1))
:add(nn.GPU(nn.Linear(5,2),2))
model:cuda()
local param, gradParam = model:getParameters()
local input = torch.Tensor(3,10):uniform():cuda()
local output = model:forward(input)
The output error is something like
Assertion `THCTensor_(checkGPU)(state, 4, r_, t, m1, m2)' failed. at /tmp/luarocks_cutorch-scm-1-5710/cutorch/lib/THC/generic/THCTensorMathBlas.cu:195
Is there any way to fix this? Otherwise, I will have to run the optimization without flattening.
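For reference, this is roughly what I mean by "optimization without flattening": keep one flattened parameter/gradient pair per nn.GPU wrapper, so each flat tensor stays on its own device, instead of calling getParameters() on the whole container. A sketch (untested; assumes the top-level children of the nn.Sequential are the nn.GPU wrappers, which expose their device via the `device` field):

```lua
require 'nn'
require 'cunn'

-- One flattened param/grad pair per GPU instead of a single global pair.
local params, gradParams = {}, {}
for i = 1, model:size() do
   local m = model:get(i)        -- an nn.GPU wrapper
   cutorch.setDevice(m.device)   -- flatten on that module's own device
   params[i], gradParams[i] = m:getParameters()
end

-- Each shard would then be updated separately, e.g. one optim.sgd
-- state table per device, inside the training loop.
```

The downside is bookkeeping: one optimizer state per shard, and the learning-rate schedule has to be applied to each shard consistently.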