ohmeow opened this issue 10 months ago
Hi @ohmeow, this looks like an issue with the model taking too long to push to the Hub before the 30-min timeout from accelerate kicked in - do you by any chance know if your upload speed was bottlenecked?
One thing you can do is tweak the timeout when the accelerator is instantiated, e.g.:
```python
from datetime import timedelta

from accelerate import Accelerator, InitProcessGroupKwargs

# Increase the distributed timeout to 3h (6 * 1800 s = 10800 s) so the push to the Hub can complete
accelerator = Accelerator(kwargs_handlers=[InitProcessGroupKwargs(timeout=timedelta(seconds=6 * 1800))])
```
I'll try that. What's funny is that it looks like all the files get uploaded ... it just gets stuck and eventually times out.
Same here, everything's pushed to the Hugging Face Hub after fine-tuning, but then the run crashes for no apparent reason, so I'm temporarily removing the integrated push_to_hub and running it manually to keep the run from crashing (even though the push itself succeeds).
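For reference, a minimal sketch of what that manual push might look like after training finishes (the path and repo id below are placeholders, not taken from this thread):

```python
from huggingface_hub import HfApi

# Push the already-saved checkpoint folder from a single process, instead of
# relying on the trainer's integrated push_to_hub during/after training.
api = HfApi()
api.create_repo(repo_id="your-username/your-model", exist_ok=True)  # placeholder repo id
api.upload_folder(
    folder_path="path/to/output_dir",   # local output_dir of the finished run (placeholder)
    repo_id="your-username/your-model",
    repo_type="model",
)
```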
Thanks for checking @alvarobartt - this is very strange and I can't reproduce it on my setup 🤔. How many nodes / GPUs are you running on?
I think the problem is that the evaluation is fairly long and exceeds the 30 min timeout, so it should also reproduce with a low GPU count. Moreover, I wasn't able to increase the timeout by passing the parameter to Accelerate as proposed.
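In case it helps anyone hitting the same wall: depending on the transformers version, the distributed timeout can also be raised through the trainer itself via the ddp_timeout training argument (given in seconds), rather than by instantiating the Accelerator manually. A minimal sketch with a placeholder output_dir, and no guarantee it reaches the process-group init in every ZeRO-3 setup:

```python
from transformers import TrainingArguments

# ddp_timeout is given in seconds; 3 * 60 * 60 = 10800 s = 3 h,
# matching the 3h suggestion above.
training_args = TrainingArguments(
    output_dir="path/to/output_dir",  # placeholder
    push_to_hub=True,
    ddp_timeout=3 * 60 * 60,
)
```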
> Thanks for checking @alvarobartt - this is very strange and I can't reproduce it on my setup 🤔. How many nodes / GPUs are you running on?
I tried out your suggestion to further explore this, since I was seeing the same when push_to_hub=True; see your suggestion below:
```python
# Increase distributed timeout to 3h to enable push to Hub to complete
accelerator = Accelerator(kwargs_handlers=[InitProcessGroupKwargs(timeout=timedelta(seconds=6 * 1800))])
```
But it kept failing on 8 x A100 (both 40GB and 80GB) and even failed on 8 x H100 80GB. I adjusted the timeouts so that the fine-tunes could be pushed to the Hub, but had no success, even though everything was indeed pushed.
Hi folks, I was able to repro the issue and AFAICT it only happens for full training (i.e. with ZeRO-3) and not with QLoRA (DDP).
The solution I've implemented in the linked PR above is to pull the push_to_hub() call outside the main-process block, since this seems to be the source of the conflict with the trainer internals, which have their own checks to determine which process the push is being run from. Let me know if that helps once #88 is merged!
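As a rough illustration of the pattern described above (not the exact diff from #88; the identifiers are illustrative):

```python
# Previously the push was gated on the main process:
#
#     if accelerator.is_main_process:
#         trainer.push_to_hub()
#
# which can hang under ZeRO-3, because trainer.push_to_hub() performs its own
# rank checks internally and expects to be reached from every process.
# The change is to call it unguarded and let the trainer decide per process:
trainer.push_to_hub()
```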
Here's the call I'm using to run the script:
Here's the full trace of the error:
Any ideas on how to resolve this?
Thanks