microsoft / SpeechT5

Unified-Modal Speech-Text Pre-Training for Spoken Language Processing
MIT License

SpeechT5: how many epochs are set? #45

Closed QQ-777777 closed 1 year ago

QQ-777777 commented 1 year ago

May I ask how many epochs are used during pre-training and fine-tuning?

mechanicalsea commented 1 year ago

We used max updates to limit the length of pre-training and fine-tuning, rather than setting a fixed number of epochs.
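For reference, in a fairseq-style command the stopping point is set with `--max-update` instead of an epoch count. The sketch below is illustrative only; the data path, `--user-dir`, task name, and all values are placeholders rather than the exact SpeechT5 recipe:

```bash
# Hypothetical fairseq-train invocation: training stops after --max-update
# optimizer steps instead of after a fixed number of epochs.
# DATA_DIR, the --user-dir path, and the task name are placeholders;
# model/criterion/optimizer options are omitted for brevity.
fairseq-train ${DATA_DIR} \
  --user-dir /path/to/speecht5 \
  --task speecht5 \
  --max-tokens 1400000 \
  --max-update 800000 \
  --save-interval-updates 10000
```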

QQ-777777 commented 1 year ago

OK, thanks a lot!!!

QQ-777777 commented 1 year ago

Sorry, I have another question. When I train with 8 GPUs, I only change the parameter '--distributed-world-size 8', and the utilization of each GPU looks similar (all 8 GPUs work properly). However, I find that one update with 8 GPUs is even slower than with a single GPU. Do I need to modify other parameters? Have you encountered this problem before?

mechanicalsea commented 1 year ago

Actually, each GPU holds its own batch of up to --max-tokens samples, and one update requires a forward pass on every GPU plus communication between them. Thus, the time cost of a single update is higher than with one GPU due to the additional communication overhead.
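In other words, with data-parallel training each update now covers roughly 8x as many tokens, so a single update is slower but an epoch should still finish faster. The sketch below uses illustrative flag values, not the released configuration; `--update-freq` is the usual knob for keeping the effective batch size comparable:

```bash
# Effective batch per update (data-parallel):
#   1 GPU : --max-tokens x 1 x --update-freq
#   8 GPUs: --max-tokens x 8 x --update-freq  (plus gradient all-reduce time)
# Each 8-GPU update is therefore slower but processes ~8x the data. If a 1-GPU
# run used --update-freq 8 to simulate 8 GPUs, drop it back to 1 on 8 GPUs to
# keep the same effective batch size. DATA_DIR and values are placeholders.
fairseq-train ${DATA_DIR} \
  --distributed-world-size 8 \
  --max-tokens 1400000 \
  --update-freq 1 \
  --max-update 800000
```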

QQ-777777 commented 1 year ago

OK, thanks for your reply!!