JudiniLabs / code-gpt-docs

Docusaurus page
https://code-gpt-docs.vercel.app

[feature] when using a local LLM such as Ollama, support custom host and port #208

Closed. swuecho closed this issue 5 months ago.

davila7 commented 6 months ago

working on this!

qdrop17 commented 6 months ago

yeah, I'm hosting Ollama on a powerful PC (not on my notebook) but I'm not able to connect Code GPT to it.

Is there a workaround until this is natively supported?

jpatters commented 6 months ago

> Is there a workaround until this is natively supported?

You could proxy the requests from localhost:11434 to your custom host/port.

mf-zf commented 6 months ago

A dirty workaround: patch extension.js after extension installation. Replace the http://localhost:11434 occurrences with the REST API URL of your compute machine.
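A rough sketch of that patch (the extension folder name and the remote address below are placeholders; check your own ~/.vscode/extensions directory for the real path):

```bash
# Placeholders: adjust the extension folder and the remote address for your setup.
# (On macOS, use `sed -i ''` instead of `sed -i`.)
EXT_DIR="$HOME/.vscode/extensions/your-codegpt-extension-folder"
grep -rl 'http://localhost:11434' "$EXT_DIR" \
  | xargs sed -i 's|http://localhost:11434|http://192.168.1.50:11434|g'
```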

The solution proposed by @jpatters is also a nice workaround since it is easy to set up with SSH port forwarding. It even provides a layer of transparent security.
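For example, a single SSH local port forward is enough (user@big-server is a placeholder for the machine running Ollama):

```bash
# Tunnel localhost:11434 on the laptop to the Ollama port on the remote machine;
# -N keeps the session open without running a remote command.
ssh -N -L 11434:localhost:11434 user@big-server
```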

orkutmuratyilmaz commented 6 months ago

#199

acmol commented 6 months ago

> yeah, I'm hosting Ollama on a powerful PC (not on my notebook) but I'm not able to connect Code GPT to it.
>
> Is there a workaround until this is natively supported?

You can use iptables or something similar to forward your localhost:11434 to the remote host's port.
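If iptables NAT rules on loopback traffic feel fiddly, a userspace forwarder such as socat does the same job (big-server is a placeholder for the machine running Ollama):

```bash
# Listen on localhost:11434 and relay every connection to the remote Ollama port.
socat TCP-LISTEN:11434,bind=127.0.0.1,fork,reuseaddr TCP:big-server:11434
```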

I ran into an even harder situation... My remote server is on k8s, and I connect to it through an ingress as a proxy. The ingress uses the hostname in the HTTP request to identify which backend it should route to, but since the HTTP requests are made by this extension, the host is always 127.0.0.1... I eventually had to set up an HTTP reverse proxy (mitmproxy with a special option to support streaming chunked encoding; nginx may also work).
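For anyone in the same situation, here is a minimal nginx sketch of such a local reverse proxy, assuming the ingress hostname is ollama.example.com (a placeholder):

```nginx
# Accept the extension's hard-coded localhost:11434 requests and forward them
# to the cluster ingress with the Host header it expects.
server {
    listen 127.0.0.1:11434;

    location / {
        proxy_pass            https://ollama.example.com;
        proxy_set_header      Host ollama.example.com;
        proxy_ssl_server_name on;    # send SNI so the ingress TLS routing works
        proxy_http_version    1.1;
        proxy_buffering       off;   # keep streamed, chunked responses flowing
        proxy_read_timeout    300s;  # model responses can take a while
    }
}
```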

thawkins commented 6 months ago

Just adding my desire to have this customizable: I run Ollama as a REST service on a big server machine on my network.

davila7 commented 5 months ago

@thawkins

In the latest version, 3.1.1, you can now use a customizable baseURL.

[Screenshot 2024-01-12 08:25:59]

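Before entering the base URL, it can help to check that the remote Ollama API is reachable from the workstation (the IP below is a placeholder):

```bash
# /api/tags lists the models the remote Ollama server has pulled.
curl http://192.168.1.50:11434/api/tags
```
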
speedyankur commented 5 months ago

I tried configuring the API key and proxy URL, but I'm not sure how and where to populate the models.

And if I don't select a model, I see an error.

[Screenshot 2024-01-12 14:43:45]

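One thing worth checking (not confirmed in this thread): the extension can only use models that are already pulled on the remote Ollama server, so make sure they exist there first. The model name below is just an example:

```bash
# Run on the machine that hosts Ollama.
ollama pull codellama
ollama list
```
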
arduiron commented 2 months ago

None of these options works: neither patching extension.js nor the custom baseURL. Use the extension "Continue" instead of this junk; that one works easily with a remotely installed Ollama.