Closed heqilearning closed 5 years ago
Hi heqilearning, thank you for your interest in our work. Our code does not support multi-GPU training, and we have no plan to support it in the near future. However, it should be straightforward to extend to multiple GPUs; please refer to the PyTorch documentation (e.g. this tutorial) for the implementation. My suggestion (just as a starting point) is to change lines 17-18 of adn/models/adn.py to a data-parallel version, i.e.,
```python
# requires: import torch.nn as nn

# model_dict = dict(
#     adn = lambda: ADN(**g_opts),
#     nlayer = lambda: NLayerDiscriminator(**d_opts))
model_dict = dict(
    adn = lambda: nn.DataParallel(ADN(**g_opts)),
    nlayer = lambda: nn.DataParallel(NLayerDiscriminator(**d_opts)))
```
There might be other places that need to be changed; I have not had a chance to check them thoroughly yet.
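To illustrate the suggestion above, here is a minimal, self-contained sketch of wrapping a model in `nn.DataParallel`. `TinyNet` is a hypothetical stand-in for `ADN`; any `nn.Module` behaves the same way. It also shows one of the "other places" that typically needs changing: `DataParallel` prefixes parameter names with `module.`, so checkpoint saving/loading code usually needs adjusting.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for ADN; any nn.Module works the same way.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# DataParallel splits each input batch across all visible GPUs;
# on a CPU-only machine it simply falls back to the wrapped module.
model = nn.DataParallel(TinyNet())

out = model(torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 2])

# Caveat: the wrapper nests the original model under `.module`, so
# state-dict keys gain a "module." prefix. Checkpoints saved from a
# DataParallel model need that prefix stripped (or added) before
# loading into a plain model, and direct attribute access becomes
# model.module.<attr> instead of model.<attr>.
print(list(model.state_dict().keys()))  # e.g. ['module.fc.weight', 'module.fc.bias']
```

Note that `DataParallel` replicates the model on every forward pass; for serious multi-GPU training, PyTorch recommends `DistributedDataParallel` instead.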
Hello, thanks for your work, it has helped me a lot! I would like to ask whether your code can be trained on multiple GPUs. I tried to modify the code, but it still didn't work on multiple GPUs.