Multi-node production AI stack. Run the best of open source AI easily on your own servers, create your own AI by fine-tuning open source models, integrate LLMs with APIs, and run gptscript securely on the server.
It would be great if you could provide the URL or container name of an existing Ollama server and have Helix use its native API (rather than the OpenAI-compatible API).
The native API offers considerably more than the OpenAI-compatible API, which should really only be used as a last resort for incompatible apps.
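For context, Ollama's native API exposes endpoints such as `/api/generate` directly, without going through the OpenAI-compatible shim. A minimal sketch of what calling it from Helix-adjacent code might look like, assuming an Ollama server on its default port 11434 (the URL and model name here are illustrative):

```python
import json
from urllib import request

# Assumed default address of an existing Ollama server.
OLLAMA_URL = "http://localhost:11434"


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's native /api/generate endpoint.

    stream=False asks for a single JSON response instead of the
    stream of JSON lines the native API returns by default.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ollama_generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """POST to the native generate endpoint and return the generated text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Pointing Helix at a URL like this (or a container name resolvable on the same Docker network) would let it reuse an already-running Ollama instance instead of managing its own.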