When I configure OpenAI as the provider, I only see a field for the API key but no field to enter a URL and port. In my case, the OpenAI-compatible API backend is running at http://127.0.0.1:5000.
The Ollama backend is not useful in my case, because it uses GGUF models, which ran at roughly half the speed of EXL2 the last time I checked, and it would require downloading hundreds of gigabytes of GGUF files (Mistral Large 2 alone is almost 100 GB), even though I already have the same models in EXL2 format.
Besides TabbyAPI, many other projects expose an OpenAI-compatible API, for example text-generation-webui (oobabooga). Please fix this so it is possible to use Zed with a local backend without Ollama.
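For illustration, here is a sketch of what such a setting could look like in Zed's settings.json. The `language_models.openai.api_url` key is an assumption for the sake of this request, not a setting I have confirmed Zed supports; Zed's settings file does accept comments:

```json
{
  "language_models": {
    "openai": {
      // Hypothetical setting: point Zed's OpenAI provider at a local
      // OpenAI-compatible server (TabbyAPI, text-generation-webui, etc.)
      "api_url": "http://127.0.0.1:5000/v1"
    }
  }
}
```

With something like this, any server that speaks the OpenAI chat completions protocol could be used, regardless of which inference engine or model format it runs.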
Environment
The issue occurs in any environment.
If applicable, add mockups / screenshots to help explain / present your vision of the feature.
If applicable, attach your Zed.log file to this issue.