JudiniLabs / code-gpt-docs


Using local model hosted remotely #258

Open · amd-123 opened this issue 2 months ago

amd-123 commented 2 months ago

The PC I run VS Code on does not run my LLM. The LLM runs on a separate PC with a much more powerful GPU.

At the moment I'm using Ollama in server mode, and I connect to it remotely to send queries and receive responses.

How do I configure Code-GPT to use Ollama remotely in this case? For local LLMs it seems to be set up only for the case where the model runs on the same PC, unless I'm missing something?

Vokturz commented 2 months ago

Yes, it currently checks whether the Ollama endpoint is reachable on your own PC, which by default uses port 11434.

You could forward port 11434 from the PC with the powerful GPU to the other one; that should work.
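
For example, an SSH tunnel can make the remote Ollama instance appear on your local port 11434 with something like `ssh -N -L 11434:localhost:11434 user@gpu-box` (where `user@gpu-box` is a placeholder for your GPU machine); the extension then sees it as a local endpoint.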

lstep commented 2 months ago

Same question for me, but I'm using LM Studio, hosted on a second machine and running in server mode. It would be nice if the LM Studio (and Ollama) connectors could be configured to query a remote host.

defaultsecurity commented 1 month ago

Yes, please add an option for a custom Ollama URL.

Adel242 commented 1 month ago

Hello, at the moment we do not have an option to customize the Ollama URL. We will communicate this to the team for discussion.

pawlakus commented 1 month ago

See https://github.com/davila7/code-gpt-docs/issues/227#issuecomment-2142803287 for a solution using a simple python3 aiohttp proxy.
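
For anyone who just wants the idea, a minimal sketch of such a proxy might look like the following. This is not the exact script from that issue; the address `192.168.1.50` is a placeholder for the GPU machine, and the proxy simply listens on the default local Ollama port and relays requests.

```python
import aiohttp
from aiohttp import web

REMOTE_BASE = "http://192.168.1.50:11434"  # placeholder: address of the machine running Ollama

# Headers that should not be copied verbatim between the two connections.
HOP_BY_HOP = {"host", "transfer-encoding", "content-length", "connection", "keep-alive"}


async def proxy(request: web.Request) -> web.StreamResponse:
    # Forward the incoming request (method, path, query, headers, body) to the remote Ollama server.
    headers = {k: v for k, v in request.headers.items() if k.lower() not in HOP_BY_HOP}
    async with aiohttp.ClientSession() as session:
        async with session.request(
            request.method,
            REMOTE_BASE + str(request.rel_url),
            headers=headers,
            data=await request.read(),
        ) as upstream:
            # Stream the response back so Ollama's chunked/streaming replies still work.
            resp_headers = {k: v for k, v in upstream.headers.items() if k.lower() not in HOP_BY_HOP}
            response = web.StreamResponse(status=upstream.status, headers=resp_headers)
            await response.prepare(request)
            async for chunk in upstream.content.iter_chunked(4096):
                await response.write(chunk)
            await response.write_eof()
            return response


app = web.Application()
app.router.add_route("*", "/{tail:.*}", proxy)

if __name__ == "__main__":
    # Listen on the port the extension expects locally (Ollama's default, 11434).
    web.run_app(app, host="127.0.0.1", port=11434)
```

With the proxy running locally, the extension talks to `localhost:11434` as usual while the actual inference happens on the remote GPU machine.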