Hi, all
When I try to use nn.DataParallelTable with nn.Sequencer for multi-GPU training, I get errors. Is this package compatible with nn.DataParallelTable? I also tried the approach from multigpu-nce-rnnlm.lua, but using nn.GPU is not efficient: only one GPU ends up using a lot of memory, I guess because all the gradient parameters are stored on that GPU. Are there any suggestions for multi-GPU support?
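Here is a minimal sketch of what I am attempting (the model, sizes, and GPU ids below are simplified placeholders, not my actual setup):

```lua
require 'cunn'
require 'rnn'

local vocabSize, hiddenSize = 10000, 256

-- single-GPU model: works fine on its own
local step = nn.Sequential()
   :add(nn.LookupTable(vocabSize, hiddenSize))
   :add(nn.LSTM(hiddenSize, hiddenSize))
   :add(nn.Linear(hiddenSize, vocabSize))
   :add(nn.LogSoftMax())
local rnn = nn.Sequencer(step)

-- wrap in DataParallelTable; input is seqlen x batchsize,
-- so the batch dimension is 2
local dpt = nn.DataParallelTable(2)
dpt:add(rnn, {1, 2})  -- replicate onto GPUs 1 and 2
dpt:cuda()

-- seqlen=20, batchsize=32 of word indices
local input = torch.CudaTensor(20, 32):random(1, vocabSize)
local output = dpt:forward(input)  -- this is where it errors out for me
```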