yeah, I'm hosting Ollama on a powerful PC (not on my notebook) but I'm not able to connect Code GPT to it.
Is there a workaround until this is natively supported?
You could proxy the requests from localhost:11434 to your custom host/port.
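A minimal sketch of that proxy idea, assuming Python/asyncio (the remote address below is a placeholder for wherever Ollama actually runs):

```python
import asyncio

LISTEN_HOST, LISTEN_PORT = "127.0.0.1", 11434
REMOTE_HOST, REMOTE_PORT = "192.168.1.50", 11434  # placeholder: the machine running Ollama

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Copy bytes in one direction until the sending side closes.
    try:
        while True:
            data = await reader.read(65536)
            if not data:
                break
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    remote_reader, remote_writer = await asyncio.open_connection(REMOTE_HOST, REMOTE_PORT)
    # Shuttle bytes in both directions so streamed responses pass straight through.
    await asyncio.gather(
        pipe(client_reader, remote_writer),
        pipe(remote_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, LISTEN_HOST, LISTEN_PORT)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

With something like this running on the notebook, the extension keeps pointing at localhost:11434 and never needs to know about the remote host.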
A dirty workaround: patch extension.js after extension installation. Replace the http://localhost:11434 occurrences with the REST API URL of your compute machine.
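Roughly, that patching step could look like the sketch below. The folder name under ~/.vscode/extensions and the location of extension.js inside it are assumptions you should verify, and the patch will likely be lost whenever the extension updates:

```python
from pathlib import Path

REMOTE_URL = "http://192.168.1.50:11434"  # placeholder: your compute machine's REST API URL
ext_root = Path.home() / ".vscode" / "extensions"

# Look for extension.js inside any installed folder whose name mentions "codegpt"
# (assumed naming; adjust the glob to match what is actually installed).
for js_file in ext_root.glob("*codegpt*/**/extension.js"):
    text = js_file.read_text(encoding="utf-8")
    if "http://localhost:11434" in text:
        js_file.write_text(text.replace("http://localhost:11434", REMOTE_URL), encoding="utf-8")
        print(f"patched {js_file}")
```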
The solution proposed by @jpatters is also a nice workaround since it is easy to set up with SSH port forwarding. It even provides a layer of transparent security.
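For reference, that SSH forwarding can be as simple as something like `ssh -N -L 11434:localhost:11434 user@ollama-host` (user and hostname are placeholders): the extension keeps talking to localhost:11434 on the notebook while the traffic is tunnelled to the remote machine.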
You can use iptables or something similar to forward your localhost:11434 to the remote host's port.
I'm facing an even harder situation... My remote server runs on Kubernetes, and I reach it through an ingress acting as a proxy. The ingress uses the hostname in the HTTP request to decide which backend to route to, but since the requests are made by this extension, the host is always 127.0.0.1... I eventually had to set up an HTTP reverse proxy (mitmproxy with a special option to support streaming chunked encoding; nginx may also work).
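For anyone stuck behind a hostname-routing ingress like this, here is a rough sketch of that kind of streaming-aware reverse proxy, written with Python/aiohttp instead of mitmproxy or nginx. The upstream URL is a placeholder, and the point is only to show that the body has to be relayed chunk by chunk so streamed completions are not buffered:

```python
import aiohttp
from aiohttp import web

UPSTREAM = "http://ollama.example.internal"  # placeholder: the hostname your ingress routes on

HOP_BY_HOP = {"transfer-encoding", "content-length", "connection", "keep-alive", "host"}

async def proxy(request: web.Request) -> web.StreamResponse:
    # One session per request keeps the sketch simple; reuse a shared session in real use.
    async with aiohttp.ClientSession() as session:
        async with session.request(
            request.method,
            UPSTREAM + request.rel_url.path_qs,
            # Drop the incoming Host header; aiohttp sets it from UPSTREAM,
            # which is what lets the ingress pick the right backend.
            headers={k: v for k, v in request.headers.items() if k.lower() not in HOP_BY_HOP},
            data=await request.read(),
        ) as upstream:
            response = web.StreamResponse(status=upstream.status)
            for key, value in upstream.headers.items():
                if key.lower() not in HOP_BY_HOP:
                    response.headers[key] = value
            await response.prepare(request)
            # Relay the body chunk by chunk so streamed responses flow through immediately.
            async for chunk in upstream.content.iter_chunked(4096):
                await response.write(chunk)
            await response.write_eof()
            return response

app = web.Application()
app.router.add_route("*", "/{tail:.*}", proxy)

if __name__ == "__main__":
    # The extension keeps talking to localhost:11434; this process forwards to the ingress.
    web.run_app(app, host="127.0.0.1", port=11434)
```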
Just adding my desire to have this customizable; I run Ollama as a REST service on a big server machine on my network.
@thawkins
In the latest version, 3.1.1, you can now use a customizable baseURL.
I tried configuring the API key and proxy URL, but I'm not sure how and where to populate the models. And if I don't select the models, I see an error.
None of these options work: neither patching extension.js nor the custom baseURL. Use the extension "Continue" instead of this junk; that one works easily with a remotely installed Ollama.
working on this!