b4rtaz / distributed-llama

Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
MIT License

How To Add a Supported Model #55

Open hyperbolic-c opened 1 month ago

hyperbolic-c commented 1 month ago

@b4rtaz Hey, thank you for your wonderful work. Could you please offer some details about how to add a supported model? For example, how to split the network according to the structure of the model. It is difficult to work without your help! THANKS!

b4rtaz commented 1 month ago

Hello @hyperbolic-c, sorry I don't understand your question.

how to split the network according to structure of model

You don't need to adjust the network topology to the model. Just connect 2^n computers via Ethernet (you may need a switch) and that's it. Then you only need to pass the IP addresses of the worker nodes when starting the root node.

./main inference --model ../dllama_llama-2-7b_q40.bin ... --workers 10.0.0.2:9998
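For completeness, each worker node presumably has to be listening before the root node connects to it. A sketch of the worker side, assuming distributed-llama's `worker` mode; the exact flags are illustrative and may differ by version, so check `./main --help` in your build:

```shell
# On the worker machine (e.g. 10.0.0.2), start the worker first,
# listening on the port that the root node will connect to.
# Flags are illustrative; verify against ./main --help.
./main worker --port 9998 --nthreads 4
```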
hyperbolic-c commented 1 month ago

@b4rtaz Sorry for the lack of clarity. What I actually meant was: how do I convert a model's network layers to the distributed-llama format, i.e. how do I convert open-source models other than Llama 2 or Llama 3? Thanks for your reply!