continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Cannot connect on Ollama server running in the same network #2358

Open AxelFooley opened 3 weeks ago

AxelFooley commented 3 weeks ago

Relevant environment info

- OS: macOS Sonoma
- Continue: Pre-release
- IDE: VSCode
- Model: Qwen2.5-Coder-7B-Instruct
- config.json:

{
  "models": [
    {
     "title": "Qwen2.5-Coder-7B-Instruct-Q6_K_L",
      "provider": "ollama",
      "model": "Qwen2.5-Coder-7B-Instruct-Q6_K:latest",
      "apiBase": "http://10.0.110.139:11434"
    }
  ],
  "tabAutocompleteModel": {
      "title": "Qwen2.5-Coder-7B-Instruct-Q6_K_L",
      "provider": "ollama",
      "model": "Qwen2.5-Coder-7B-Instruct-Q6_K:latest",
      "apiBase": "http://10.0.110.139:11434"
  },
  "tabAutocompleteOptions": {
    "template": "<|fim_prefix|>{{{ prefix }}}<|fim_suffix|>{{{ suffix }}}<|fim_middle|>"
  }
}

Description

I am running VS Code and Continue on my MacBook, but my models are served by a remote server on my LAN, which you can see defined as the apiBase in the config.json above.

The problem is that I am getting this error:

Error: request to http://10.0.110.139:11434/api/chat failed, reason: connect EHOSTUNREACH 10.0.110.139:11434 - Local (10.0.110.155:55113)

I believe the issue is with the path, which is returning a 404:

~ curl -nL http://10.0.110.139:11434/api/chat
404 page not found

The error says Host Unreachable, even though the host is in fact reachable; the curl above gets an HTTP response back, so the connection itself succeeds from the shell.
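
For reference, Ollama's /api/chat endpoint only accepts POST requests, so a 404 page not found on a plain GET like the one above is not unusual on its own. A rough way to test the endpoint directly (model name taken from my config above) would be:

# hypothetical direct test of the chat endpoint; model name from the config above
curl http://10.0.110.139:11434/api/chat -d '{
  "model": "Qwen2.5-Coder-7B-Instruct-Q6_K:latest",
  "messages": [{ "role": "user", "content": "hello" }],
  "stream": false
}'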

To reproduce

No response

Log output

No response

Patrick-Erichsen commented 2 weeks ago

Hi @AxelFooley , strange that you're getting the EHOSTUNREACH error. I don't have any great troubleshooting advice unfortunately, but I'd maybe start off just by verifying you can ping the server: ping 10.0.110.139.

Can you confirm that the chat model does support the /api/chat endpoint?
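
For example, a quick reachability check over HTTP (rather than just ping) is to list the models the server has available; this is a sketch assuming the same apiBase as in your config:

# GET /api/tags lists the models available on the Ollama server
curl http://10.0.110.139:11434/api/tags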