exo-explore / exo

Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
GNU General Public License v3.0
6.56k stars · 342 forks

Docker Image #173

Open · Scot-Survivor opened 2 weeks ago

Scot-Survivor commented 2 weeks ago

Added some commits, based off of DanCodes' original pull request that was closed.

Scot-Survivor commented 2 weeks ago

This will likely need further cleanup from people who know how to write Dockerfiles better than I do.

But I believe this is a good starting point.

AlexCheema commented 2 weeks ago

This looks great! Thanks for the contribution (also fixes #119)

Some small things:

Axodouble commented 2 weeks ago

It may be wise to provide two separate Dockerfiles, as not all devices run NVIDIA GPUs. I haven't looked much at the source code, though; I assume CUDA isn't a hard requirement?

dan-online commented 2 weeks ago

Heya, I'll be checking this out today. Some context on the original PR: I was merging it to main in our fork late at night, so I messed up the target; however, I'm glad to see it would be helpful here. First I'll rebase this to resolve the conflicts I'm seeing. As for your comments:

> • one Dockerfile for each target (Dockerfile-Mac, Dockerfile-NVIDIA, etc…)

I agree; that's probably the best way forward. Would you prefer them in, say, a docker/ folder or just at the root? Personally I try to limit files at root, but if you have a preference I'll follow it.

> • What’s the thinking with continuous delivery? Official exo docker images on dockerhub?

Yep, I can add a CD GitHub Action to this PR; it's just up to you guys to create an org and add the token to the repo's Action secrets.

> • It would be cool to have an example docker-compose.yml that can run a multi-node setup with networking set up properly

Great idea; this could also go in the aforementioned docker/ folder.

> • Related to above: if we can run a multi-node test in CI that would be super

Up to you if you think this is in scope for this PR; I think it's a nice-to-have, so maybe one for a future feature.

AlexCheema commented 2 weeks ago

> Heya, I'll be checking this out today. Some context on the original PR: I was merging it to main in our fork late at night, so I messed up the target; however, I'm glad to see it would be helpful here. First I'll rebase this to resolve the conflicts I'm seeing. As for your comments:
>
> • one Dockerfile for each target (Dockerfile-Mac, Dockerfile-NVIDIA, etc…)
>
> I agree; that's probably the best way forward. Would you prefer them in, say, a docker/ folder or just at the root? Personally I try to limit files at root, but if you have a preference I'll follow it.

At the root is fine.

> • What’s the thinking with continuous delivery? Official exo docker images on dockerhub?
>
> Yep, I can add a CD GitHub Action to this PR; it's just up to you guys to create an org and add the token to the repo's Action secrets.

We can create an org. Someone has already taken exolabs unfortunately, so I've requested to claim that name.

> • It would be cool to have an example docker-compose.yml that can run a multi-node setup with networking set up properly
>
> Great idea; this could also go in the aforementioned docker/ folder.

:)

> • Related to above: if we can run a multi-node test in CI that would be super
>
> Up to you if you think this is in scope for this PR; I think it's a nice-to-have, so maybe one for a future feature.

Let's leave it to a future PR then. For now, the docker-compose.yml can serve as documentation / quick test locally.
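The docker-compose.yml mentioned above could be sketched roughly as follows. Everything here is hypothetical: the exolabs/exo image is not published yet, and whether exo's automatic peer discovery works across a default bridge network would need testing.

```yaml
# Hypothetical two-node sketch; image name and network settings are
# assumptions, not the final published setup.
services:
  node1:
    image: exolabs/exo:latest   # assumed image name on the claimed namespace
    networks: [exo-net]
  node2:
    image: exolabs/exo:latest
    networks: [exo-net]

networks:
  exo-net:
    driver: bridge   # peer discovery may need extra configuration here
```

Even as a non-working draft, a file like this documents the intended multi-node topology until CI testing is in scope.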

Scot-Survivor commented 2 weeks ago

Does exo not use all GPUs available to the PC by default?

Why would someone want multiple workers in a compose file? Compose only works on one host; it's not multi-node orchestrated like Kubernetes.

AlexCheema commented 2 weeks ago

> Does exo not use all GPUs available to the PC by default?
>
> Why would someone want multiple workers in a compose file? Compose only works on one host; it's not multi-node orchestrated like Kubernetes.

exo does not use multiple GPUs by default. If you have a single device with multiple GPUs, you can (e.g. with the tinygrad backend) set VISIBLE_DEVICES={index}, where {index} starts from 0, e.g. VISIBLE_DEVICES=1 for index 1. Specifically for CUDA, this would be CUDA_VISIBLE_DEVICES={index}.
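Setting the variable in the child's environment before launch is enough to pin a process to one GPU. A minimal Python sketch; the commented-out exo invocation is an assumption about how a node might be launched, not the project's documented CLI:

```python
import os
import subprocess  # only needed for the commented-out launch below

# Copy the current environment and pin the child process to GPU index 1.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# Hypothetical launch; uncomment on a machine where exo is installed:
# subprocess.run(["exo"], env=env)

print(env["CUDA_VISIBLE_DEVICES"])  # → 1
```

Running one such process per GPU index is how a single multi-GPU machine can host several exo nodes.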

dan-online commented 2 weeks ago

@AlexCheema Feel free to review!

Scot-Survivor commented 2 weeks ago

@dan-online, at least one other Dockerfile, for non-GPU-accelerated computers, would be useful (to us, at least).

Scot-Survivor commented 2 weeks ago

Did Alpine work? Ubuntu is massive. Would a Python Alpine base image work?

dan-online commented 2 weeks ago

Alpine was tricky, so I pushed an Ubuntu image first just to check that it would work before I try tackling Alpine again.

dan-online commented 2 weeks ago

It seems that tensorflow hates alpine so at least for today I'm giving up on this endeavour haha
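A possible middle ground between a full Ubuntu image and Alpine is the Debian-based python:slim image, which avoids Alpine's musl wheel problems while staying far smaller than Ubuntu. A hypothetical CPU-only sketch; the install command and entrypoint are guesses at the repository layout, not the actual Dockerfile from this PR:

```dockerfile
# Hypothetical CPU-only image; package layout and entrypoint are assumptions.
FROM python:3.12-slim

WORKDIR /app
COPY . .

# Install exo and its Python dependencies from the repo checkout.
RUN pip install --no-cache-dir .

CMD ["exo"]
```

Slim images use glibc, so prebuilt manylinux wheels install normally, which is usually what breaks on Alpine.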

AlexCheema commented 2 weeks ago

> It seems that tensorflow hates alpine so at least for today I'm giving up on this endeavour haha

We shouldn't have a tensorflow dependency. When I run `pip list`, tensorflow does not come up; why do we need tensorflow?
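One way to answer "why do we need tensorflow" is to scan the declared requirements of every installed distribution and see which one mentions it. A stdlib-only sketch (no third-party tools assumed):

```python
import importlib.metadata

def who_requires(name):
    """Return (distribution, requirement) pairs whose declared
    dependencies mention `name` (case-insensitive)."""
    hits = []
    for dist in importlib.metadata.distributions():
        for req in dist.requires or []:  # requires may be None
            if name.lower() in req.lower():
                hits.append((dist.metadata["Name"], req))
    return hits

# Any hit here is an installed package that declares tensorflow as a dependency.
print(who_requires("tensorflow"))
```

Note this only covers declared metadata; a package that imports tensorflow lazily at runtime without declaring it would not show up here.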

AlexCheema commented 2 weeks ago

Secured the exolabs dockerhub namespace now!

Scot-Survivor commented 1 week ago

> It seems that tensorflow hates alpine so at least for today I'm giving up on this endeavour haha
>
> We shouldn't have a tensorflow dependency. When I run `pip list`, tensorflow does not come up; why do we need tensorflow?

@dan-online, have you had a chance to follow up today?

dan-online commented 1 week ago

Heya @AlexCheema, it seems tensorflow (or similar) is requested upon boot:

[screenshot: tensorflow request]