YuanGongND / ltu

Code, Dataset, and Pretrained Models for Audio and Speech Large Language Model "Listen, Think, and Understand".

Question about Multi-GPU Training #19

[Open] dingdongwang opened this issue 9 months ago

dingdongwang commented 9 months ago

Hi, I have a question about LTU-AS multi-GPU training: does this repo support training on multiple GPUs? I didn't see any related configuration (e.g., accelerate or deepspeed).

Thank you again, and I look forward to your reply!

YuanGongND commented 9 months ago

All of the code uses multiple GPUs by default; Hugging Face (HF) handles the distribution. See the relevant lines below.

https://github.com/YuanGongND/ltu/blob/4589490e23f4fc5cb970b22a98a123688bbaa419/src/ltu_as/train_scripts/finetune_toy.sh#L18

https://github.com/YuanGongND/ltu/blob/4589490e23f4fc5cb970b22a98a123688bbaa419/src/ltu_as/finetune.py#L127

https://github.com/YuanGongND/ltu/blob/4589490e23f4fc5cb970b22a98a123688bbaa419/src/ltu_as/finetune.py#L107-L110
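For reference, a minimal sketch of the standard HF multi-GPU pattern the links point to (assumed, alpaca-lora-style DDP detection via `WORLD_SIZE`/`LOCAL_RANK` and a `device_map`; the checkpoint name and `gradient_accumulation_steps` value here are placeholders, not the repo's exact code):

```python
import os

import torch
import transformers

# Detect a torchrun/DDP launch: torchrun sets WORLD_SIZE and LOCAL_RANK
# (standard torch.distributed environment variables, not repo-specific).
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1

# Hypothetical value for illustration; finetune.py derives this from its
# own batch-size arguments.
gradient_accumulation_steps = 8

if ddp:
    # One process per GPU: pin the model to this process's device and
    # scale gradient accumulation so the effective batch size stays the
    # same regardless of GPU count.
    device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
    gradient_accumulation_steps = max(gradient_accumulation_steps // world_size, 1)
else:
    # Single process: let HF/accelerate place the model across the
    # visible GPUs automatically.
    device_map = "auto"

model = transformers.AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # placeholder checkpoint, not the LTU-AS one
    torch_dtype=torch.float16,
    device_map=device_map,
)

# transformers.Trainer reads the same environment variables and wraps the
# model in DistributedDataParallel on its own, so no separate accelerate
# or deepspeed config file is needed.
```

With this kind of setup, launching via `torchrun --nproc_per_node=4 finetune.py ...` trains with one process per GPU, while a plain `python finetune.py ...` on a multi-GPU machine falls back to `device_map="auto"`.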

-Yuan