HarderThenHarder / transformers_tasks

⭐️ NLP algorithms built with the transformers library, supporting Text-Classification, Text-Generation, Information-Extraction, Text-Matching, RLHF, SFT, etc.
https://www.zhihu.com/column/c_1451236880973426688

Setting the total number of iterations for multi-GPU training #56

Closed HXZhong1997 closed 1 year ago

HXZhong1997 commented 1 year ago

https://github.com/HarderThenHarder/transformers_tasks/blob/c56bcc4a19d960cb9481ff13d796fad3c303d749/LLM/finetune/train_multi_gpu.py#L190 I haven't worked with LLM fine-tuning before. Could you explain why num_update_steps_per_epoch here is not divided by (num_gpus * batch_size) for multi-GPU training? Is there a reason for that?
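For context, here is a minimal sketch of the conventional calculation the question refers to. The function name and parameters are illustrative, not taken from train_multi_gpu.py: under data parallelism each GPU consumes its own shard of the dataset, so each optimizer update processes num_gpus * batch_size samples (times any gradient accumulation).

```python
import math

def num_update_steps_per_epoch(dataset_size: int,
                               per_device_batch_size: int,
                               num_gpus: int,
                               gradient_accumulation_steps: int = 1) -> int:
    """Illustrative calculation: optimizer updates per epoch under data parallelism."""
    # Samples consumed across all devices per optimizer step.
    effective_batch = per_device_batch_size * num_gpus * gradient_accumulation_steps
    return math.ceil(dataset_size / effective_batch)

# Example: 10,000 samples, batch size 4 per GPU, 4 GPUs, no accumulation
# -> 625 updates per epoch, versus 2,500 if the division by num_gpus is omitted.
print(num_update_steps_per_epoch(10_000, 4, 4))  # 625
```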

HXZhong1997 commented 1 year ago

This should only affect the displayed step count; the actual training procedure is unaffected. Closing.
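A minimal sketch of why an inflated total can be display-only, assuming the training loop iterates a DataLoader driven by a DistributedSampler (an assumption about the setup, not the repo's confirmed code): each process's loader already yields only its 1/world_size shard, so the loop ends when the shard is exhausted regardless of the computed total.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Hypothetical 10,000-sample dataset sharded across 4 GPUs (rank 0 shown).
dataset = TensorDataset(torch.randn(10_000, 8))
sampler = DistributedSampler(dataset, num_replicas=4, rank=0)
loader = DataLoader(dataset, batch_size=4, sampler=sampler)

# The per-process loader length is already divided by the world size:
# 10,000 / (4 GPUs * batch 4) = 625 actual steps per epoch on each rank.
print(len(loader))  # 625
```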