X-rayLaser / DistributedLLM
Run LLM inference by splitting a model into parts and hosting each part on a separate machine. The project is no longer maintained.
MIT License · 5 stars · 0 forks
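The idea behind the project — running inference by splitting a model into contiguous parts, each hosted on a different machine, with activations forwarded between them — can be illustrated with a toy sketch. This is not the project's actual API; the layer, `split`, and `run_part` names are hypothetical, and real hosts are simulated as plain function calls.

```python
# Toy sketch of pipeline-style model splitting (hypothetical names,
# not DistributedLLM's real API). A "model" is a list of layers;
# each part plays the role of one machine in the pipeline.

def make_layer(w):
    # Hypothetical layer: scales each element of its input by w.
    return lambda xs: [v * w for v in xs]

model = [make_layer(w) for w in (2, 3, 5)]  # full model: 3 layers

def split(layers, n_parts):
    # Partition layers into contiguous slices, one per "machine".
    k, r = divmod(len(layers), n_parts)
    parts, i = [], 0
    for p in range(n_parts):
        j = i + k + (1 if p < r else 0)
        parts.append(layers[i:j])
        i = j
    return parts

def run_part(part, activations):
    # Each host applies its slice of layers, then would forward the
    # resulting activations to the next host over the network.
    for layer in part:
        activations = layer(activations)
    return activations

x = [1.0]
for part in split(model, 2):  # two simulated machines
    x = run_part(part, x)
print(x)  # same result as running the unsplit model: [30.0]
```

Because the parts are contiguous and executed in order, the pipelined result matches what the whole model would produce on a single machine; only the activations travel between hosts, not the weights.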
Issues
#2 · failed to solve: process "/bin/sh -c make libllama.so && make libembdinput.so" did not complete successfully:
opened 5 months ago by galenyu · 1 comment
#1 · Question: would this work on RPIs i.e. ARM CPUs?
opened 8 months ago by stevef1uk · 2 comments