0xThresh closed this issue 2 months ago.
I thought about this today as well, and I stumbled across this comment about using the Open WebUI API.
Looks like all you have to do is add `"Content-Type": "application/json"` to your headers.
I tested it just now and it seems to work fine, even with autodetect and tabAutocomplete.
Full config tested:

```json
{
  "models": [
    {
      "model": "AUTODETECT",
      "title": "Ollama",
      "completionOptions": {},
      "apiBase": "http://127.0.0.1:3000/ollama",
      "contextLength": 4000,
      "provider": "ollama",
      "requestOptions": {
        "headers": {
          "Authorization": "Bearer sk-2ffe07...",
          "Content-Type": "application/json"
        }
      }
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Autocomplete",
    "provider": "ollama",
    "model": "starcoder2:7b"
  },
  "allowAnonymousTelemetry": false
}
```
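To sanity-check the proxy and headers outside of Continue, here's a minimal sketch using only the Python standard library (the URL, key, and choice of endpoint are placeholders of mine, not something from the thread):

```python
import json
import urllib.request

# Placeholders -- substitute your own Open WebUI address and API key.
API_BASE = "http://127.0.0.1:3000/ollama"
API_KEY = "sk-REPLACE_ME"

def build_request(path, payload=None):
    """Build a request carrying both headers the config above relies on."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        # The header this thread is about: without it, Open WebUI
        # answered Continue's requests with 422 Unprocessable Entity.
        "Content-Type": "application/json",
    }
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(API_BASE + path, data=data, headers=headers)

# /api/tags is the Ollama endpoint that lists available models.
req = build_request("/api/tags")
# urllib.request.urlopen(req)  # uncomment to actually hit your server
```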
I think this should be added to the docs :). Have fun!
@vidschofelix Thanks, works fine. My config:

```json
{
  "title": "Open WebUi Deepseek-Coder:6.7b",
  "model": "deepseek-coder:6.7b",
  "provider": "ollama",
  "apiBase": "https://ai.xxx.com/ollama",
  "requestOptions": {
    "headers": {
      "Authorization": "Bearer sk-xxx",
      "Content-Type": "application/json"
    }
  }
}
```
Thanks @vidschofelix, that config worked for me as well - all I was missing was that pesky Content-Type block 😅
I agree that this would be good to add to the docs if there isn't a desire to add a whole new provider for Open WebUI.
The Content-Type header is now added by default in the requests, so I think this issue is safe to close: https://github.com/continuedev/continue/commit/aa37f8cbde68ffe0905dca3a530c03940784472f
Problem
Open WebUI is becoming a very popular method to host Ollama models remotely. It adds a ChatGPT-like frontend to your Ollama deployment, but more importantly, adds capabilities like API authentication, user management, RAG, and many other great features. There is a huge gap specifically in the JetBrains ecosystem for supporting Ollama workloads on Open WebUI, and I'd love to see Continue close the gap.
Currently, when you attempt to configure Continue to use Open WebUI as a remote Ollama provider, you end up with a number of errors, such as:

```
"POST /ollama/api/show HTTP/1.1" 422 Unprocessable Entity
"POST /ollama/v1/chat/completions/api/show HTTP/1.1" 500 Internal Server Error
```
My current config.json file that hits these errors is shown below:

Solution
I would love for Continue to implement an Open WebUI provider to allow the plugin to communicate with remote Ollama installations that use Open WebUI as a proxy. Details about the route used by Open WebUI for proxying to Ollama can be found in their docs here: https://docs.openwebui.com/troubleshooting/
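Per those docs, Open WebUI exposes the native Ollama API under an /ollama prefix. A tiny sketch of that mapping (the host is a placeholder):

```python
# Open WebUI proxies the Ollama API under the /ollama prefix
# (see https://docs.openwebui.com/troubleshooting/). Host is a placeholder.
OPEN_WEBUI = "http://localhost:3000"

def proxied(ollama_path):
    """Map a native Ollama endpoint to its Open WebUI proxy route."""
    return f"{OPEN_WEBUI}/ollama{ollama_path}"

for path in ("/api/tags", "/api/show", "/api/chat"):
    print(proxied(path))
```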
Perhaps the config.json block for an Open WebUI provider could look something like this:

The apiKey would map to the individual user's API key that they've generated for themselves in the Open WebUI frontend, which forces users to sign up and get approved before they're able to access Ollama through Open WebUI.

Thanks for your consideration!
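(The proposed config.json block got lost above; purely as a hypothetical sketch of its shape - the `open-webui` provider name and field layout are my invention, not anything Continue implements:)

```json
{
  "models": [
    {
      "title": "Open WebUI",
      "provider": "open-webui",
      "model": "deepseek-coder:6.7b",
      "apiBase": "https://ai.example.com",
      "apiKey": "sk-REPLACE_ME"
    }
  ]
}
```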