TiankaiHang opened this issue 3 years ago:

Thanks for your nice work and congratulations on your good results! I have several questions.

Best.
@TiankaiHang Hi! Thanks for your interest in our work!
- Will your model be extended to parallel (distributed data-parallel) training in the future?
I'd love to do that, but I have never had access to a machine with more than one GPU in my entire life... So if you or anyone else could send a pull request to support this, that would be really nice.
- Why don't you try using DeepLabV3+? Would it lead to a better result?
I believe a better model would lead to better results. But training V3/V3+ requires at least double the compute budget, which is why I did not use them. And because the V2 results are still important for comparison against prior art, choosing V3/V3+ back then would have meant at least 3x the compute budget. I just do not have the cards.
Some additional info: on ResNet backbones, my experience suggests that V3+ can be worse than V3. For background: https://github.com/pytorch/vision/pull/2689
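As an aside, a minimal sketch of what swapping in torchvision's DeepLabV3 could look like; the ResNet-101 backbone and 21-class (VOC-style) setup here are illustrative assumptions, not this repo's code:

import torch
from torchvision.models.segmentation import deeplabv3_resnet101

model = deeplabv3_resnet101(pretrained=False, num_classes=21)
x = torch.randn(2, 3, 321, 321)   # NCHW batch of crops
out = model(x)["out"]             # torchvision returns an OrderedDict of logits
print(out.shape)                  # torch.Size([2, 21, 321, 321])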
Thanks for your kind reply~ :-)
Best.
You're welcome. I'll pin this issue as a call for help.
I will update it for multi-GPU after I reproduce the results. Maybe next week; I don't have enough time right now.
That's great to hear! Go for it!
I would suggest checking out https://github.com/huggingface/accelerate, which should make it relatively easy to deploy any model in a distributed setting.
@lorenmt Good point! Thanks a lot!
@jinhuan-hit If you're still working on this, Accelerate seems like a good place to start. And it's perfectly OK if you don't want to send a PR just now. I'll update for multi-GPU myself when I get more bandwidth and cards for testing; it should be soon (when I start my internship).
Yeah, thank you for sharing, and I am still working on this project. I'm sorry to say that I haven't updated it for multi-GPU until now. Something changed: I reproduced this project in another job, so the multi-GPU code no longer matches. I'm trying to add multi-GPU support to this project following Accelerate today. Unfortunately, I have not solved the bug below.
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
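For reference, fix (1) from the error message is applied when wrapping the model in DDP. A minimal sketch, assuming the process group is already initialized by the distributed launcher; net and LOCAL_RANK are stand-ins for the script's actual variables:

import os
import torch

local_rank = int(os.environ["LOCAL_RANK"])  # set by the distributed launcher
net = net.to(local_rank)
net = torch.nn.parallel.DistributedDataParallel(
    net,
    device_ids=[local_rank],
    output_device=local_rank,
    find_unused_parameters=True,  # fix (1): tolerate parameters absent from the loss graph
)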
@jinhuan-hit Thanks a lot! I still don't have the hardware to debug multi-GPU for now, but hopefully I'll be able to this month or the next. The problem seems related to the network design; I don't remember having additional (unused) parameters, though. I'll check that later tonight and get back to you.
I have already checked the network but found nothing. Looking forward to hearing your good results!
Yes, I think you're right. I also did not find redundant layers.
I'll also try to investigate this when I get the cards.
Have you tried setting find_unused_parameters=True in your code? Maybe you will get more detailed error information.
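One way to get that detail is to check, on a single GPU without DDP, which parameters never receive a gradient after one forward/backward pass. A self-contained sketch with a toy module; the real network would replace Toy:

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 4)
        self.unused = nn.Linear(8, 4)  # never called in forward

    def forward(self, x):
        return self.used(x)

net = Toy()
net(torch.randn(2, 8)).sum().backward()
for name, p in net.named_parameters():
    if p.requires_grad and p.grad is None:
        print("unused parameter:", name)  # prints unused.weight, unused.bias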
Yeah, you are right! I wrapped the network with DDP and find_unused_parameters=True myself, but it didn't work. However, when I added find_unused_parameters=True to the prepare function of the accelerator in the Accelerate package, the job worked well. Unfortunately, I'm sorry to say that I have not verified the result yet. The package versions I used: torch==1.4.0, torchvision==0.5.0, accelerate==0.1.0.
def prepare_model(self, model):
    if self.device_placement:
        model = model.to(self.device)
    if self.distributed_type == DistributedType.MULTI_GPU:
        model = torch.nn.parallel.DistributedDataParallel(
            model,
            device_ids=[self.local_process_index],
            output_device=self.local_process_index,
            find_unused_parameters=True,  # added: tolerate parameters unused in the loss
        )
    if self.native_amp:
        model.forward = torch.cuda.amp.autocast()(model.forward)
    return model
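As an aside, later Accelerate releases should not require patching the library for this; if I remember the API correctly, the flag can be supplied through a kwargs handler:

from accelerate import Accelerator, DistributedDataParallelKwargs

# Sketch for newer Accelerate versions (not the 0.1.0 patched above).
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])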
Also, I changed main.py following https://github.com/huggingface/accelerate:

1. Change

device = torch.device('cpu')
if torch.cuda.is_available():
    device = torch.device('cuda:0')

to

# modify to accelerator
accelerator = Accelerator()
device = accelerator.device

2. Add

# modify to accelerator
net, optimizer = accelerator.prepare(net, optimizer)

3. Change

scaled_loss.backward()
loss.backward()

to

accelerator.backward(scaled_loss)
accelerator.backward(loss)

A consolidated sketch of these changes is shown below.
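Here is a minimal end-to-end sketch combining the three changes; the toy model, loss, and data are stand-ins for the repo's actual objects, and the dataloader is also passed through prepare (which Accelerate supports) so each process gets its shard:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()                      # change 1
device = accelerator.device

net = nn.Linear(8, 4)                            # toy stand-in for the real network
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loader = DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 4)), batch_size=8)

# change 2 (the dataloader is prepared too, so each process gets its shard)
net, optimizer, loader = accelerator.prepare(net, optimizer, loader)

criterion = nn.MSELoss()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    accelerator.backward(loss)                   # change 3
    optimizer.step()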
Then it should work. Best wishes.
@jinhuan-hit If the results are similar to the single-card ones under mixed precision, maybe you'd like to send a pull request for this?
Yeah, I'm checking the results now. If OK, I'd like to send a PR.
Thanks a lot! If a PyTorch version update that involves code changes is necessary for using Accelerate, please make the version update and the multi-GPU support two separate PRs, if possible (one PR is also fine).
I'm using PyTorch 1.4.0 because of Accelerate. For now I'm training in fp32, and it works well without any code modification.
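Side note: native AMP landed in PyTorch 1.6, so mixed precision would not run on 1.4.0 anyway. On a newer PyTorch, early Accelerate exposed it roughly like this, if I recall the 0.x API correctly; current releases use mixed_precision="fp16" instead:

from accelerate import Accelerator

accelerator = Accelerator(fp16=True)  # requires PyTorch >= 1.6 for torch.cuda.amp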
I have checked the result and it looks normal!
Great! I'll formulate a draft PR for comments.
Thanks for everyone's help! DDP is now supported. Please report bugs if you've found any.