SaaS • Private Deployment • Docs • Discord
Private GenAI stack. Deploy the best of open AI in your own data center or VPC and retain complete data security & control.
Includes support for RAG, API calling, and fine-tuning of models, all as easy as drag-and-drop. Build and deploy LLM apps by writing a helix.yaml.
Looking for a private GenAI platform? From language models to image models and more, Helix brings the best of open source AI to your business in an ergonomic, scalable way, while optimizing the tradeoff between GPU memory and latency.
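To give a feel for the helix.yaml workflow mentioned above, here is a minimal, hypothetical sketch; the field names below (name, assistants, model, system_prompt) are assumptions, so check the docs for the real schema.

# Illustrative sketch only: the field names are assumptions, not the documented schema.
cat > helix.yaml <<'EOF'
name: my-docs-assistant
assistants:
  - model: llama3:instruct
    system_prompt: |
      You are a helpful assistant for our internal documentation.
EOF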
Use our quickstart installer:
curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
sudo ./install.sh
The installer will prompt you before making changes to your system. By default, the dashboard will be available on http://localhost:8080.
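Once the installer finishes, a quick way to confirm the dashboard is reachable on the default address:

# check that the dashboard responds on the default port
curl -I http://localhost:8080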
For setting up a deployment with a DNS name, see ./install.sh --help or read the detailed docs; we've documented easy TLS termination for you.
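For example (the exact flag for a DNS deployment may vary between versions, so treat the commented line as an assumption and confirm it against the --help output):

# list the installer's options, including DNS/TLS-related flags
./install.sh --help
# hypothetical usage; the real flag name may differ, check --help first:
# sudo ./install.sh --api-host https://helix.example.com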
Attach your own GPU runners per the runners docs, or use any external OpenAI-compatible LLM.
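Any OpenAI-compatible backend speaks the standard chat completions API, so you can sanity-check a candidate endpoint before pointing Helix at it; the URL, key, and model name below are placeholders:

# placeholder endpoint, key, and model; substitute your own provider's values
curl -s https://llm.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -d '{"model": "llama3:instruct", "messages": [{"role": "user", "content": "Hello"}]}'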
Use our helm charts:
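A typical Helm workflow looks like the sketch below; the repository URL and chart name are placeholders, so take the real values from the chart documentation:

# placeholder repo URL and chart name; use the values from the helm chart docs
helm repo add helix https://charts.example.com/helix
helm repo update
helm install my-helix helix/helix-controlplane --namespace helix --create-namespace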
For local development, refer to the Helix local development guide.
Helix is licensed under a similar license to Docker Desktop. You can run the source code (in this repo) for free for:
If you fall outside of these terms, please contact us to discuss purchasing a license for large commercial use. If you are an individual at a large company interested in experimenting with Helix, that's fine under Personal Use until you deploy to more than one GPU on company-owned or paid-for infrastructure.
You are not allowed to use our code to build a product that competes with us.
Contributions to the source code are welcome, and by contributing you confirm that your changes will fall under the same license.
If you would like to use some part of this code under a more permissive license, please get in touch.