Closed QQ-777777 closed 1 year ago
We used max updates to limit the length of pre-training and fine-tuning, i.e., training is capped by the number of updates rather than by epochs.
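As a rough illustration of how a max-updates cap maps to epochs (all numbers below are hypothetical, since the dataset size and batch size are not stated in this thread):

```python
# Estimate how many epochs a max-updates cap corresponds to.
# Every value here is assumed, for illustration only.
dataset_tokens = 100_000_000      # total tokens in the training set (assumed)
tokens_per_update = 4096 * 8      # tokens processed per update, all GPUs combined (assumed)
max_updates = 125_000             # training stops after this many updates (assumed)

updates_per_epoch = dataset_tokens / tokens_per_update
epochs = max_updates / updates_per_epoch
print(f"~{epochs:.1f} epochs")
```

With these assumed numbers, the cap works out to roughly 41 epochs; the point is only that "max updates" implicitly fixes the epoch count once the data and batch sizes are known.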
OK, thanks a lot!!!
Sorry, I have another question. When I want to train with 8 GPUs, I change only the parameter '--distributed-world-size 8', and the utilization of each GPU is similar (all 8 GPUs work properly). However, I find that one update with 8 GPUs is even slower than with one GPU. Do I need to modify other parameters? Have you ever encountered this problem?
Actually, each GPU processes its own batch of --max-tokens samples, and one update requires a forward/backward pass on every GPU plus communication between them. Thus, the time cost for one update is higher than on a single GPU because of the additional communication overhead, even though the update covers far more data.
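To make the arithmetic concrete (the timings below are made-up numbers, not measurements): with synchronous data parallelism, each update pays a gradient all-reduce cost on top of compute, so a single update is slower, but it processes world_size times more tokens, so overall throughput still improves:

```python
# Toy timing model for one synchronous data-parallel update (illustration only;
# compute_s and comm_s are assumed values, not real benchmarks).
def time_per_update(compute_s: float, comm_s: float, world_size: int) -> float:
    """Wall-clock time for one update.

    compute_s: forward/backward time for one --max-tokens batch on one GPU
    comm_s:    gradient all-reduce overhead, paid only when world_size > 1
    """
    return compute_s + (comm_s if world_size > 1 else 0.0)

single = time_per_update(compute_s=1.0, comm_s=0.3, world_size=1)  # one GPU
multi = time_per_update(compute_s=1.0, comm_s=0.3, world_size=8)   # eight GPUs

# An 8-GPU update is slower per update, but it processes 8x as many tokens,
# so tokens/second is still much higher than on a single GPU.
throughput_gain = (8 / multi) / (1 / single)
print(f"{throughput_gain:.2f}x tokens/sec")
```

So per-update time going up with more GPUs is expected; the relevant metric is tokens (or samples) per second, not seconds per update.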
OK, thanks for your reply!!
May I ask how many epochs are used during pre-training and fine-tuning?