Closed: fedelrick closed this issue 1 year ago
The current version does not support CPU distribution. It is possible using Wrapyfi, though, since it handles both CPU and GPU tensors.
For both calls, you need to pass a device="cpu" argument, but you'd still have to modify the original implementation to run on CPU. Wrapyfi only adds a layer that allows you to distribute tensors across devices/machines, and I merely adapted llama to demonstrate that capability.
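For illustration, here is a minimal sketch of what "run on CPU" means in plain PyTorch. This is not the fork's actual loading code; the checkpoint path and toy state dict are stand-ins, but the map_location and device handling are the standard PyTorch mechanisms involved:

```python
import torch

device = torch.device("cpu")

# Stand-in for a model checkpoint. With a checkpoint saved from GPU,
# map_location="cpu" remaps the stored CUDA tensors onto the CPU so the
# file can be loaded on a machine without a GPU.
torch.save({"w": torch.randn(4, 4)}, "/tmp/demo_ckpt.pth")
state_dict = torch.load("/tmp/demo_ckpt.pth", map_location="cpu")

# Tensors created at inference time must also target the CPU explicitly;
# the reference llama code defaults to CUDA half-precision tensors, which
# is one of the things that has to change for a CPU-only run.
x = torch.randn(4, 1, device=device)
y = state_dict["w"].to(device) @ x  # plain CPU matmul
print(y.shape)  # torch.Size([4, 1])
```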
This isn't really an issue, but I'm trying to find a method to link multiple mobile/laptop devices together to essentially piggyback off each CPU. Is that doable with this fork? Any suggestions and tips would be welcome!