AlexCheema opened this issue 3 months ago
Hi @AlexCheema , Can I take this up?
Yes please! Added you to the sheet @pranav4501
Btw, you mentioned you only have one device to test on. You can still run multiple nodes on a single device. The easiest way is to do something like:
```sh
python3 main.py --listen-port 5678 --broadcast-port 5679 --chatgpt-api-port 8000 --node-id "node1"
python3 main.py --listen-port 5679 --broadcast-port 5678 --chatgpt-api-port 8001 --node-id "node2"
```
This is a trick to make sure the ports don't conflict while the nodes can still discover each other: each node broadcasts on the port the other one listens on.
You can also write tests, which should be a faster way of iterating.
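For concreteness, here is a rough sketch of what such a test could look like, using only the Python standard library. It reuses the two commands above; the test name and the TCP-connect check are illustrative, not existing exo test code:

```python
import socket
import subprocess
import time

# Hypothetical smoke test: start two nodes on one machine with swapped
# listen/broadcast ports (the trick described above) and wait until both
# ChatGPT-API ports accept TCP connections.
NODES = [
    ["python3", "main.py", "--listen-port", "5678", "--broadcast-port", "5679",
     "--chatgpt-api-port", "8000", "--node-id", "node1"],
    ["python3", "main.py", "--listen-port", "5679", "--broadcast-port", "5678",
     "--chatgpt-api-port", "8001", "--node-id", "node2"],
]

def wait_for_port(port: int, timeout: float = 30.0) -> bool:
    """Poll localhost:port until it accepts a connection or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

def test_two_nodes_start_and_serve():
    procs = [subprocess.Popen(cmd) for cmd in NODES]
    try:
        assert wait_for_port(8000), "node1 API never came up"
        assert wait_for_port(8001), "node2 API never came up"
    finally:
        for p in procs:
            p.terminate()
            p.wait(timeout=10)
```

From there you could extend the test to send the same request to both API ports and compare results, which is much faster than restarting nodes by hand.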
Yes, this works. Thank you Alex.
Hi @AlexCheema,
- [ ] Stable Diffusion (text-to-image) => Stable Diffusion v2-1
- [ ] MLX distributed inference
- [ ] Tinygrad distributed inference
This is my understanding of the requirements and the model I plan to use. Please let me know if any changes are needed.
Looks good to me.
There are already examples of Stable Diffusion v2 for both MLX and Tinygrad.
Example inference code for MLX: https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion
Example inference code for Tinygrad: https://github.com/tinygrad/tinygrad/blob/master/examples/sdv2.py
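For orientation, here is a minimal single-device text-to-image sketch against the mlx-examples stable_diffusion module linked above. The `StableDiffusion` class and its `generate_latents`/`decode` methods come from that repo at the time of writing; the exact model string and signatures may have drifted, so treat this as a sketch, not working exo integration code:

```python
# Sketch: run from within mlx-examples/stable_diffusion so the local
# stable_diffusion package is importable.
import mlx.core as mx
import numpy as np
from PIL import Image
from stable_diffusion import StableDiffusion

# Model string assumed from the repo's defaults for SD v2-1 (base).
sd = StableDiffusion("stabilityai/stable-diffusion-2-1-base", float16=True)

# generate_latents is a generator that yields the latents after each
# denoising step; iterate it to completion and keep the final latents.
latents = None
for latents in sd.generate_latents(
    "a photo of an astronaut riding a horse",
    n_images=1, num_steps=50, cfg_weight=7.5,
):
    mx.eval(latents)

# Decode the final latents into pixels in [0, 1] and save the image.
decoded = sd.decode(latents)
mx.eval(decoded)
img = (mx.clip(decoded[0], 0, 1) * 255).astype(mx.uint8)
Image.fromarray(np.array(img)).save("out.png")
```

The distributed version would presumably shard this pipeline across nodes the same way exo shards LLM inference, which is where the core changes mentioned below come in.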
Bumping. Any progress, @pranav4501?
This will require some core changes to how distributed inference works, hence the higher bounty of $500. This would be a great contribution to exo.