b4rtaz / distributed-llama

Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
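To make the tagline concrete, here is a minimal, illustrative sketch of the core idea behind tensor parallelism: a weight matrix is split column-wise across "devices", each device multiplies against only its shard, and the partial results are concatenated. This is a toy NumPy example, not distributed-llama's actual implementation; all names in it are hypothetical.

```python
import numpy as np

# Toy column-parallel matmul: each "device" holds one vertical slice of the
# weight matrix, so per-device RAM for the weights drops by ~1/num_devices.
def tensor_parallel_matmul(x, weight_shards):
    # Each shard computes its slice of the output independently...
    partials = [x @ w for w in weight_shards]
    # ...then the slices are joined (an all-gather over the network
    # in a real multi-device setup).
    return np.concatenate(partials, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 512))            # one activation row
full_weight = rng.standard_normal((512, 2048))

num_devices = 4
shards = np.split(full_weight, num_devices, axis=1)  # column-wise split

# The sharded result matches the single-device computation.
assert np.allclose(tensor_parallel_matmul(x, shards), x @ full_weight)
```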
MIT License
1.03k stars · 69 forks

Fix typo #13

Closed: VIS-WA closed this pull request 2 months ago

VIS-WA commented 3 months ago

The description for the root node command was copied incorrectly.

b4rtaz commented 2 months ago

Thanks for this PR. In the meantime, I've fixed this typo in another PR.