Closed ELigoP closed 2 months ago
Hi! Since Llama 3 has the same architecture as Llama 2, it should be supported out of the box. In fact, https://health.petals.dev/ shows that some people have been hosting Llama 3 in the public swarm for a while now.
Is there anything that we need to do to improve the support? I'd welcome any contributions if you wish to update the docs, but all the other components seem to work smoothly.
I hadn't noticed it was mentioned already. Yes, it seems to work. I'm waiting for Meta's approval to download the model from Hugging Face. By the way, amazing project — I'm confused why people don't seem to actually use it. I plan to encourage as many people with GPUs as possible to dedicate them to this.
Ok, now I realize that most people must be using private swarms...
Llama 3 is currently a more capable model than Llama-2-70B. It would be nice to switch to it instead.