Closed lewismacnow closed 1 week ago
As the title suggests, I propose adding support for a separate Ollama URL dedicated to RAG. The outcome would be that inference and embedding requests could be sent to different hosts.
Yes, this is our most requested feature. I will try to add it in this week's update.
Now you can add multiple Ollama instances via OpenAI API-compatible settings. :)
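To illustrate the split the issue asks for, here is a minimal sketch of routing chat (inference) and embedding requests to two different Ollama hosts through Ollama's OpenAI-compatible `/v1` endpoints. The host names (`gpu-box`, `embed-box`) and model names are illustrative assumptions, not part of this project's actual settings.

```python
import json

# Hypothetical hosts: one machine serves chat/inference, a second
# machine is dedicated to embeddings for RAG.
INFERENCE_BASE = "http://gpu-box:11434/v1"   # Ollama's OpenAI-compatible API
EMBEDDING_BASE = "http://embed-box:11434/v1"

def chat_request(prompt: str) -> tuple[str, str]:
    """Build the endpoint URL and JSON body for a chat completion."""
    body = {
        "model": "llama3",  # example model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{INFERENCE_BASE}/chat/completions", json.dumps(body)

def embedding_request(text: str) -> tuple[str, str]:
    """Build the endpoint URL and JSON body for an embedding call."""
    body = {"model": "nomic-embed-text", "input": text}  # example model
    return f"{EMBEDDING_BASE}/embeddings", json.dumps(body)

url, _ = chat_request("hello")
print(url)   # → http://gpu-box:11434/v1/chat/completions
url, _ = embedding_request("hello")
print(url)   # → http://embed-box:11434/v1/embeddings
```

Because both hosts speak the OpenAI wire format, any client that lets you override the base URL per request type can split the traffic this way.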