wandering007 opened this issue 6 years ago
Oooh @wandering007 good catch. I'll take a look.
@gpleiss This re-implementation (https://github.com/wandering007/efficient-densenet-pytorch) has good support for nn.DataParallel, which may be helpful.
I submitted a pull request for this: https://github.com/gpleiss/efficient_densenet_pytorch/pull/39
Just merged #39. @wandering007, can you confirm that this fixes the issue?
@gpleiss Yes, it works fine.
However, there is one thing I noticed earlier and have to mention, though it is out of the scope of this issue. With the checkpointing feature, the whole autograd computation graph is broken into pieces. The current nn.DataParallel backward pass roughly 1) runs backward on each GPU asynchronously and 2) performs inter-GPU communication to collect/gather weight gradients for each piece of the graph. That is, if a checkpoint contains weights to be updated, there is an inter-GPU synchronization step to accumulate their gradients, which is time-consuming. Since the current efficient DenseNet contains so many checkpointed nn.BatchNorm2d modules, a lot of time is spent on inter-GPU communication for gradient accumulation. In my test, the backward pass of the efficient DenseNet on multiple GPUs was at least 100x slower than the normal version...
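To make the cost concrete, here is a minimal sketch of the checkpointed bottleneck pattern being discussed (the module layout is illustrative, not the repo's exact code). Because the checkpointed segment contains parameter-holding modules (BatchNorm, Conv), each such segment contributes its own gradient-accumulation step that DataParallel must synchronize across GPUs:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Illustrative DenseNet-style bottleneck: BN -> ReLU -> 1x1 Conv.
# BN and Conv hold parameters, so every checkpointed call of this block
# adds a per-segment gradient sync when wrapped in nn.DataParallel.
class Bottleneck(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_ch)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, *inputs):
        # Concatenate previous feature maps, then run the bottleneck.
        x = torch.cat(inputs, dim=1)
        return self.conv(self.relu(self.norm(x)))

layer = Bottleneck(8, 4)
x = torch.randn(2, 8, 4, 4, requires_grad=True)

# Checkpointed forward: activations inside `layer` are recomputed during
# backward instead of being stored, trading compute for memory.
out = checkpoint(layer, x)
out.sum().backward()
```

A DenseNet block checkpoints one such layer per dense connection, which is why the number of synchronization points grows so quickly.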
@wandering007 hmmm that is problematic...
In general, I think that the checkpointing-based approach is probably what we should be doing moving forward. The original version was using some low-level calls which are no longer available in PyTorch. Using those low-level calls would require some C code, which is in my opinion undesirable for this package.
However, it sounds like the checkpointing-based code is practically unusable for the multi-GPU scenario. It's probably worthwhile bringing up an issue in the PyTorch repo about this. I'll see if there's a better solution in the meantime.
@gpleiss It may be tough for now... To be frank, I am still in favor of the previous implementation (v0.3.1) via the _EfficientDensenetBottleneck class and the _DummyBackwardHookFn function, which doesn't touch any C code. I've just made some improvements to it, and it seems very neat and workable with PyTorch v0.4. You can check https://github.com/wandering007/efficient-densenet-pytorch/tree/master/models if you are interested.
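For readers unfamiliar with the dummy-hook approach mentioned above, here is a hedged sketch of the general pattern (the class name comes from the thread; the body is simplified and hypothetical). The forward pass is an identity, and the node exists only so that a recomputation callback can run during backward instead of storing intermediate activations:

```python
import torch

# Simplified sketch of the dummy-backward-hook pattern. In the real
# implementation, backward() recomputes the concatenated/normalized
# features and backpropagates through the recomputed graph; here we
# only show the identity scaffolding that hooks into autograd.
class _DummyBackwardHookFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, dummy):
        # `dummy` exists only so this node is inserted into the graph.
        # view_as returns an alias, avoiding an in-place identity issue.
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # A real version would recompute the bottleneck forward here
        # before returning gradients for `x`; `dummy` gets no gradient.
        return grad_output, None

x = torch.randn(3, requires_grad=True)
y = _DummyBackwardHookFn.apply(x, torch.empty(0))
y.sum().backward()
```

The key design point is that recomputation happens in a single custom Function rather than in many torch.utils.checkpoint segments, so DataParallel sees one gradient-sync boundary instead of one per checkpointed module.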
Maybe this issue could have been made clearer in the readme. I followed the implementation in my project but found it doesn't work with DataParallel...
@yzcjtr you might be experiencing a different problem. According to my tests, this should work with DataParallel. Can you post the errors that you're seeing?
I just got a Segmentation fault (core dumped) error when running with multiple GPUs. Does anyone know how to solve this problem?
@theonegis can you provide more information? What version of PyTorch, what OS, what version of CUDA, what GPUs, etc.? Also, could you open up a new issue for this?
@gpleiss I have opened a new issue: Segmentation fault (core dumped) error for multiple GPUs. Thanks a lot.
Hi @gpleiss , really sorry for my previous misunderstanding. I'm confronted with a similar situation as @theonegis . I will provide more information in his new issue. Thanks.
The official PyTorch checkpointing is slow on multiple GPUs, as explained by @wandering007. https://github.com/csrhddlam/pytorch-checkpoint solves this issue.
I just want to benchmark the new implementation of efficient DenseNet with the code here. However, it seems that the checkpointed modules are not broadcast to multiple GPUs, as I got the following errors:
I think that the checkpoint feature provides weak support for nn.DataParallel.