gpleiss / efficient_densenet_pytorch

A memory-efficient implementation of DenseNets

MultiGPU efficient densenets are slow #36

Open wandering007 opened 6 years ago

wandering007 commented 6 years ago

I just want to benchmark the new implementation of the efficient DenseNet with the code here. However, it seems that the checkpointed modules are not broadcast to multiple GPUs, as I get the following errors:

  File "/home/changmao/efficient_densenet_pytorch/models/densenet.py", line 16, in bn_function
    bottleneck_output = conv(relu(norm(concated_features)))
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 49, in forward
    self.training or not self.track_running_stats, self.momentum, self.eps)
  File "/home/changmao/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1194, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_batch_norm)

I think the checkpoint feature provides only weak support for nn.DataParallel.
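For reference, the setup is roughly as follows (a minimal sketch, assuming the DenseNet constructor and its efficient flag in models/densenet.py; the argument values are placeholders, not the exact benchmark configuration):

```python
# Minimal sketch of the multi-GPU setup that triggers the error above.
# The DenseNet arguments are placeholders, not the exact benchmark configuration.
import torch
from torch import nn

from models.densenet import DenseNet  # this repo's implementation

model = DenseNet(growth_rate=12, block_config=(16, 16, 16), efficient=True)
model = nn.DataParallel(model).cuda()  # replicate across all visible GPUs

x = torch.randn(64, 3, 32, 32).cuda()
out = model(x)  # RuntimeError: ... device 1 does not equal 0 (cudnn_batch_norm)
```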

gpleiss commented 6 years ago

Oooh @wandering007 good catch. I'll take a look.

wandering007 commented 6 years ago

@gpleiss This re-implementation (https://github.com/wandering007/efficient-densenet-pytorch) has good support for nn.DataParallel, which may be helpful.

ZhengRui commented 6 years ago

I submitted a pull request for this: https://github.com/gpleiss/efficient_densenet_pytorch/pull/39

gpleiss commented 6 years ago

Just merged in #39 . @wandering007 , can you confirm that this fixes the issue?

wandering007 commented 6 years ago

@gpleiss Yes, it works fine.
However, there is one thing I've noticed before and have to mention, though it is out of the scope of this issue. With the checkpointing feature, the whole autograd graph is broken into pieces. The current nn.DataParallel backward pass roughly does (1) the backward computation on each GPU asynchronously and (2) inter-GPU communication to gather the weight gradients for each of those autograd graphs. That is, if a checkpointed segment contains weights to update, there is an inter-GPU synchronization step to accumulate its gradients, which is time-consuming. Since the current efficient DenseNet contains so many checkpointed nn.BatchNorm2d modules, a lot of time is spent on inter-GPU communication for gradient accumulation. In my test, the backward pass of the efficient DenseNet on multiple GPUs is at least 100x slower than the normal version...
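For context, each dense layer checkpoints its own bottleneck function, so a deep DenseNet ends up with dozens of small checkpointed segments, and every one of them owns BatchNorm/Conv weights whose gradients have to be reduced across GPUs. A simplified sketch of that pattern (not the exact models/densenet.py code):

```python
# Simplified sketch of the per-layer checkpointing pattern. Each dense layer
# wraps concat + BN + ReLU + 1x1 conv in torch.utils.checkpoint, so the full
# network contains many small checkpointed segments whose parameter gradients
# DataParallel must accumulate across GPUs.
import torch
import torch.nn as nn
import torch.utils.checkpoint as cp

def _bn_function_factory(norm, relu, conv):
    def bn_function(*inputs):
        concated_features = torch.cat(inputs, 1)  # recomputed during backward
        return conv(relu(norm(concated_features)))
    return bn_function

class DenseLayerSketch(nn.Module):
    def __init__(self, num_input_features, growth_rate, bn_size=4):
        super().__init__()
        self.norm1 = nn.BatchNorm2d(num_input_features)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(num_input_features, bn_size * growth_rate,
                               kernel_size=1, bias=False)

    def forward(self, *prev_features):
        bn_function = _bn_function_factory(self.norm1, self.relu1, self.conv1)
        # One checkpoint per layer: intermediate activations are dropped in the
        # forward pass and recomputed during the backward pass.
        return cp.checkpoint(bn_function, *prev_features)
```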

gpleiss commented 6 years ago

@wandering007 hmmm that is problematic...

In general, I think the checkpointing-based approach is probably what we should be doing moving forward. The original version relied on some low-level calls that are no longer available in PyTorch, and reproducing them would require C code, which is in my opinion undesirable for this package.

However, it sounds like the checkpointing-based code is practically unusable for the multi-GPU scenario. It's probably worthwhile bringing up an issue in the PyTorch repo about this. I'll see if there's a better solution in the meantime.

wandering007 commented 6 years ago

@gpleiss It may be tough for now... To be frank, I am still in favor of the previous implementation (v0.3.1), which used the _EfficientDensenetBottleneck class and the _DummyBackwardHookFn function without touching any C code. I've made some improvements to it, and it seems quite neat and works with PyTorch v0.4. You can check https://github.com/wandering007/efficient-densenet-pytorch/tree/master/models if you are interested.
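The core trick there is a grow-on-demand buffer that all layers in a dense block share for their concatenated features, with the recomputation handled by the dummy backward hook. A rough illustration of just the buffer-reuse idea (hypothetical helper names, not the actual _EfficientDensenetBottleneck code):

```python
# Illustration of the shared-buffer idea only; the real implementation also
# re-runs the concatenation + batch norm during the backward pass.
import torch

class SharedBuffer:
    """Grow-on-demand scratch tensor shared by every layer in a dense block."""
    def __init__(self):
        self.buf = torch.empty(0)

    def get(self, shape, device, dtype):
        numel = 1
        for s in shape:
            numel *= s
        if (self.buf.numel() < numel or self.buf.device != device
                or self.buf.dtype != dtype):
            self.buf = torch.empty(numel, device=device, dtype=dtype)
        return self.buf[:numel].view(shape)

def shared_concat(features, shared):
    """Concatenate (N, C_i, H, W) features along dim 1 into the shared buffer."""
    n, _, h, w = features[0].shape
    c_total = sum(f.size(1) for f in features)
    out = shared.get((n, c_total, h, w), features[0].device, features[0].dtype)
    with torch.no_grad():  # done outside autograd; recomputed in backward
        torch.cat(features, dim=1, out=out)
    return out
```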

yzcjtr commented 5 years ago

Maybe this issue could have been made clearer in the readme. I followed the implementation in my project but found it doesn't work with DataParallel...

gpleiss commented 5 years ago

@yzcjtr you might be experiencing a different problem. According to my tests, this should work with DataParallel. Can you post the errors that you're seeing?

theonegis commented 5 years ago

I just got the Segmentation fault (core dumped) error when running with multiple GPUs. Does anyone know how to solve this problem?

gpleiss commented 5 years ago

@theonegis can you provide more information? What version of PyTorch, what OS, what version of CUDA, what GPUs, etc.? Also, could you open up a new issue for this?

theonegis commented 5 years ago

@gpleiss I have opened a new issue: Segmentation fault (core dumped) error for multiple GPUs. Thanks a lot.

yzcjtr commented 5 years ago

Hi @gpleiss, really sorry for my previous misunderstanding. I'm running into a similar situation as @theonegis. I will provide more information in his new issue. Thanks.

csrhddlam commented 5 years ago

PyTorch's official checkpointing is slow on multiple GPUs, as explained by @wandering007. https://github.com/csrhddlam/pytorch-checkpoint solves this issue.