exo: Run your own AI cluster at home with everyday devices. Maintained by [exo labs](https://x.com/exolabs_).

[Discord](https://discord.gg/EUnjGpsmWw) | [Telegram](https://t.me/+Kh-KqHTzFYg3MGNk) | [X](https://x.com/exolabs_)

[![GitHub Repo stars](https://img.shields.io/github/stars/exo-explore/exo)](https://github.com/exo-explore/exo/stargazers) [![Tests](https://dl.circleci.com/status-badge/img/circleci/TrkofJDoGzdQAeL6yVHKsg/4i5hJuafuwZYZQxbRAWS71/tree/main.svg?style=svg)](https://dl.circleci.com/status-badge/redirect/circleci/TrkofJDoGzdQAeL6yVHKsg/4i5hJuafuwZYZQxbRAWS71/tree/main) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)

Forget expensive NVIDIA GPUs. Unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!

Update: exo is hiring. See here for more details.

## Get Involved

exo is experimental software, so expect bugs early on. Please create issues so they can be fixed; the exo labs team will strive to resolve them quickly.

We also welcome contributions from the community. We have a list of bounties in this sheet.

## Features

### Wide Model Support

exo supports LLaMA (MLX and tinygrad) and other popular models.

### Dynamic Model Partitioning

exo optimally splits up models based on the current network topology and the resources available on each device. This enables you to run larger models than would fit on any single device.

### Automatic Device Discovery

exo will automatically discover other devices using the best method available. Zero manual configuration.
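
For intuition, here is a minimal sketch of zero-configuration peer discovery over UDP broadcast. It illustrates the general technique, not exo's actual protocol; the port number and message format are assumptions.

```python
import json
import socket

DISCOVERY_PORT = 52415  # hypothetical port, for illustration only


def announce(node_id: str) -> None:
    """Broadcast this node's presence to the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = json.dumps({"type": "discovery", "node_id": node_id})
    sock.sendto(message.encode(), ("255.255.255.255", DISCOVERY_PORT))
    sock.close()


def listen_once() -> dict:
    """Block until one announcement arrives and return the sender's info."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    data, (host, _port) = sock.recvfrom(4096)
    sock.close()
    return {"host": host, **json.loads(data.decode())}
```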

### ChatGPT-compatible API

exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
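
To see what that one-line change looks like in practice: if your application already uses the official `openai` Python client, pointing its `base_url` at the local exo endpoint is enough. A minimal sketch, assuming the `openai` package is installed and an exo node is running on localhost:

```python
from openai import OpenAI

# Point the client at the local exo node instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[{"role": "user", "content": "What is the meaning of exo?"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```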

### Device Equality

Unlike other distributed inference frameworks, exo does not use a master-worker architecture. Instead, exo devices connect peer-to-peer (p2p). As long as a device is connected somewhere in the network, it can be used to run models.

exo supports different partitioning strategies to split up a model across devices. The default is ring memory weighted partitioning: inference runs in a ring where each device handles a number of model layers proportional to its available memory.

*Figure: ring topology*
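
As a rough illustration of how ring memory weighted partitioning allocates layers (a sketch of the idea, not exo's actual code; the function name and rounding scheme are assumptions):

```python
def partition_layers(total_layers: int, device_memory_gb: list[float]) -> list[int]:
    """Give each device a share of layers proportional to its memory."""
    total_memory = sum(device_memory_gb)
    shares = [mem / total_memory * total_layers for mem in device_memory_gb]
    counts = [int(s) for s in shares]
    # Hand leftover layers to the devices with the largest fractional remainders.
    by_remainder = sorted(range(len(shares)), key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[: total_layers - sum(counts)]:
        counts[i] += 1
    return counts


# Example: a 32-layer model across a 16 GB Mac, an 8 GB iPhone and an 8 GB iPad.
print(partition_layers(32, [16, 8, 8]))  # -> [16, 8, 8]
```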

## Installation

The current recommended way to install exo is from source.

### Prerequisites

- Python>=3.12.0 is required because of [issues with asyncio](https://github.com/exo-explore/exo/issues/5) in previous versions.

### From source

```sh
git clone https://github.com/exo-explore/exo.git
cd exo
pip install .
# alternatively, with venv
source install.sh
```

### Troubleshooting

- If running on Mac, MLX has an [install guide](https://ml-explore.github.io/mlx/build/html/install.html) with troubleshooting steps.

## Documentation

### Example Usage on Multiple macOS Devices

#### Device 1:

```sh
python3 main.py
```

#### Device 2:

```sh
python3 main.py
```

That's it! No configuration required: exo will automatically discover the other device(s).

exo starts a ChatGPT-like WebUI (powered by [tinygrad tinychat](https://github.com/tinygrad/tinygrad/tree/master/examples/tinychat)) on http://localhost:8000

For developers, exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000/v1/chat/completions. Examples with curl:

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-8b",
    "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
    "temperature": 0.7
  }'
```

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava-1.5-7b-hf",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "What are these?"},
          {"type": "image_url", "image_url": {"url": "http://images.cocodataset.org/val2017/000000039769.jpg"}}
        ]
      }
    ],
    "temperature": 0.0
  }'
```

### Example Usage on Multiple Heterogeneous Devices (macOS + Linux)

#### Device 1 (macOS):

```sh
python3 main.py --inference-engine tinygrad
```

Here we explicitly tell exo to use the **tinygrad** inference engine.

#### Device 2 (Linux):

```sh
python3 main.py
```

Linux devices automatically default to the **tinygrad** inference engine. You can read about tinygrad-specific environment variables [here](https://docs.tinygrad.org/env_vars/). For example, you can configure tinygrad to use the CPU by specifying `CLANG=1`.

## Debugging

Enable debug logs with the DEBUG environment variable (0-9).

```sh
DEBUG=9 python3 main.py
```

For the **tinygrad** inference engine specifically, there is a separate flag `TINYGRAD_DEBUG` that can be used to enable debug logs (1-6).

```sh
TINYGRAD_DEBUG=2 python3 main.py
```

## Known Issues

- 🚧 As the library is evolving so quickly, the iOS implementation has fallen behind the Python implementation. We have decided for now not to release the buggy iOS version, which would only generate a flood of GitHub issues for outdated code. We are working on solving this properly and will make an announcement when it's ready. If you would like access to the iOS implementation now, please email alex@exolabs.net with your GitHub username and an explanation of your use case, and you will be granted access on GitHub.

## Inference Engines

exo supports the following inference engines:

- ✅ [MLX](exo/inference/mlx/sharded_inference_engine.py)
- ✅ [tinygrad](exo/inference/tinygrad/inference.py)
- 🚧 [llama.cpp](TODO)

## Networking Modules

- ✅ [GRPC](exo/networking/grpc)
- 🚧 [Radio](TODO)
- 🚧 [Bluetooth](TODO)