helixml / helix

Multi-node production GenAI stack. Run the best of open source AI easily on your own servers. Easily add knowledge from documents and scrape websites. Create your own AI by fine-tuning open source models. Integrate LLMs with APIs. Run gptscript securely on the server.
https://tryhelix.ai

SaaS | Private Deployment | Docs | Discord

HelixML

👥 Discord

Private GenAI stack. Deploy the best of open AI in your own data center or VPC and retain complete data security & control.

Includes support for RAG, API-calling, and fine-tuning models that's as easy as drag'n'drop. Build and deploy LLM apps by writing a `helix.yaml`.
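As a rough sketch, a `helix.yaml` app definition might look like the following. The field names below are illustrative assumptions, not the authoritative schema; consult the Helix docs for the real format:

```yaml
# Hypothetical helix.yaml sketch -- field names are illustrative,
# not the verified Helix schema.
apps:
  - name: support-bot
    description: Answers questions from our product manuals
    assistants:
      - model: mixtral:8x7b
        system_prompt: |
          You are a helpful assistant. Answer using the attached knowledge.
        knowledge:
          - source:
              web:
                urls:
                  - https://example.com/docs
```

The general idea is that the app, its model, its prompt, and its knowledge sources are declared in one versionable file that Helix can deploy.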

Looking for a private GenAI platform? From language models to image models and more, Helix brings the best of open source AI to your business in an ergonomic, scalable way, while optimizing the tradeoff between GPU memory and latency.

Install on Docker

Use our quickstart installer:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
sudo ./install.sh

The installer will prompt you before making changes to your system. By default, the dashboard will be available on http://localhost:8080.

For setting up a deployment with a DNS name, see `./install.sh --help` or read the detailed docs, which also cover easy TLS termination.

Attach your own GPU runners by following the runners docs, or connect any external OpenAI-compatible LLM.

Install on Kubernetes

Use our Helm charts:
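The exact chart repository URL, chart names, and values are documented in the Helix Kubernetes docs; the commands below are only a sketch of the usual Helm workflow, with the repository URL and release name as placeholder assumptions:

```shell
# Sketch only: the chart repo URL and chart/release names here are
# assumptions -- consult the Helix Kubernetes docs for the real values.
helm repo add helix https://charts.example.com/helix
helm repo update

# Install the control plane into its own namespace
helm install helix helix/helix-controlplane \
  --namespace helix --create-namespace

# Verify the pods came up
kubectl -n helix get pods
```

GPU runners are typically deployed separately (per the runners docs) and pointed at the control plane's API endpoint.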

Developer Instructions

For local development, refer to the Helix local development guide.

License

Helix is licensed under a license similar to Docker Desktop's. You can run the source code (in this repo) for free for:

If you fall outside these terms, please contact us to discuss purchasing a license for large-scale commercial use. If you are an individual at a large company who wants to experiment with Helix, that is fine under Personal Use until you deploy to more than one GPU on company-owned or company-paid infrastructure.

You are not allowed to use our code to build a product that competes with us.

Contributions to the source code are welcome; by contributing, you confirm that your changes will fall under the same license.

Why these clauses in your license?

If you would like to use some part of this code under a more permissive license, please get in touch.