TinyLLaVA / TinyLLaVA_Factory

A Framework of Small-scale Large Multimodal Models
https://arxiv.org/abs/2402.14289
Apache License 2.0

distributed computing #93

Open 1764758458 opened 4 months ago

1764758458 commented 4 months ago

Hi, what do I need to change in the code if I want to parallelize the computation across 8 GPUs?

jiajunlong commented 4 months ago

You only need to modify the GPU configuration in the DeepSpeed launch scripts for pretraining and finetuning. For example, change `deepspeed --include localhost:4,5,6,7` in `pretrain.sh` to `deepspeed --include localhost:0,1,2,3,4,5,6,7`.
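
For reference, a minimal sketch of that change (the training entrypoint and its arguments below are placeholders; the exact script contents depend on your checkout):

```sh
# pretrain.sh — before: launch only on local GPUs 4-7
# deepspeed --include localhost:4,5,6,7 tinyllava/train/train.py ...

# after: launch on all 8 local GPUs
deepspeed --include localhost:0,1,2,3,4,5,6,7 tinyllava/train/train.py ...
```

If all 8 GPUs on the node should be used, an equivalent option is `deepspeed --num_gpus 8 ...`, which lets the launcher pick up every visible device. Apply the same change in the finetuning script.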