Open amd-123 opened 2 months ago
Yes, it currently verifies if the Ollama endpoint is accessible on your PC, which by default uses port 11434.
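That reachability check can be reproduced by hand to confirm the endpoint responds. A minimal sketch, assuming Ollama's standard `/api/tags` model-listing endpoint; the hostname `gpu-box` is a placeholder for wherever Ollama actually runs:

```python
# Quick reachability check for an Ollama server. "gpu-box" is a
# placeholder -- replace it with the machine that actually runs Ollama
# (use "localhost" for the default local setup).
import json
from urllib.request import urlopen

OLLAMA_URL = "http://gpu-box:11434"  # assumption: your Ollama host

def parse_models(payload: dict) -> list[str]:
    """Pull model names out of an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_models(base_url: str) -> list[str]:
    """GET /api/tags (Ollama's model-listing endpoint) and return the names."""
    with urlopen(base_url.rstrip("/") + "/api/tags", timeout=5) as resp:
        return parse_models(json.load(resp))

if __name__ == "__main__":
    print(list_models(OLLAMA_URL))
```

If this prints a list of model names, the endpoint is reachable from that machine; a connection error means the port is closed, firewalled, or Ollama is only listening on localhost.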
You could forward port 11434 from the PC with the powerful GPU to the other one; that should work.
Same question for me, but I'm using LM Studio, hosted on a second machine, running in server mode. It would be nice if the LM Studio (and Ollama) connectors could be configured to query a remote host.
Yes, please add the option for a custom Ollama URL.
Hello, at the moment we do not have an option to customize the Ollama URL. We will communicate this to the team for discussion.
See https://github.com/davila7/code-gpt-docs/issues/227#issuecomment-2142803287 for a solution using a simple python3 aiohttp proxy.
The PC I run VS Code on does not run my LLM. The LLM runs on a separate PC which has a much more powerful GPU.
At the moment I'm using Ollama and it's in server-mode and I connect to it remotely to send queries and receive responses.
How do I configure Code-GPT in this case to use Ollama remotely? When using local LLMs, it seems to be set up only for the case where the model runs on the same PC, unless I'm missing something?