ex3ndr / llama-coder

Replace Copilot local AI
https://marketplace.visualstudio.com/items?itemName=ex3ndr.llama-coder
MIT License

Error during inference: fetch failed #29

Open · cadeff01 opened this issue 5 months ago

cadeff01 commented 5 months ago

I have an Ollama container running the stable-code:3b-code-q4_0 model. I'm able to interact with the model via curl:

curl -d '{"model":"stable-code:3b-code-q4_0", "prompt": "c++"}' https://notarealurl.io/api/generate

and get a response in a terminal in WSL, where I'm running VS Code:

(screenshot of the curl response)

However, when I set the Ollama Server Endpoint to https://notarealurl.io/, I just get [warning] Error during inference: fetch failed

cadeff01 commented 5 months ago

My URL uses a custom CA, but I also have NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt set, which, from my understanding, should address any SSL issues with my custom CA.
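
For context, NODE_EXTRA_CA_CERTS is read by Node's TLS layer at process startup, so one way to check whether the variable itself works is to replay the same request under plain Node, outside VS Code. A minimal sketch, reusing the endpoint and model from this thread (ts-node and the file name are just for illustration):

```typescript
// verify-ca.ts
// Run with: NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt npx ts-node verify-ca.ts
// Replays the extension's request under plain Node to confirm the extra CA is honored by Node itself.
import * as https from "node:https";

const body = JSON.stringify({ model: "stable-code:3b-code-q4_0", prompt: "c++" });

const req = https.request(
  "https://notarealurl.io/api/generate", // endpoint from this issue
  { method: "POST", headers: { "Content-Type": "application/json" } },
  (res) => {
    console.log("TLS handshake OK, status:", res.statusCode);
    res.resume(); // drain the streamed response body
  }
);

req.on("error", (err) => console.error("request failed:", err));
req.end(body);
```

If this succeeds with the variable set but the extension still reports "fetch failed", the problem is likely in how the VS Code extension host picks up (or doesn't pick up) the environment variable rather than in the CA file itself.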

cadeff01 commented 5 months ago

I got some tests running locally and verified that this is related to the custom CA. Any chance of getting support for custom CA certs?

itamark-targa commented 5 months ago

I get the same warning in VS Code: https://github.com/ex3ndr/llama-coder/issues/3#issuecomment-1917842038. Appreciate any help with it.

Kevsnz commented 4 months ago

VS Code, as the host, controls all connections that extensions open and use, so it's not related to Llama Coder specifically.

Have you tried the solution from this Stack Overflow question?

cadeff01 commented 4 months ago

I've tried the NODE_EXTRA_CA_CERTS solution, since disabling SSL verification is a really bad idea, but that didn't help. For similar plugins like Continue, I know they had to add support for extra certs in the plugin itself for this to work.
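
For what it's worth, a rough sketch of what such in-extension support could look like, assuming the extension makes its requests through Node's https stack or a node-fetch-style client. The llamaCoder.caCertPath setting and buildAgent helper are hypothetical, used only for illustration, and are not part of llama-coder:

```typescript
// Sketch of in-extension custom CA support (not llama-coder's actual code).
// "llamaCoder.caCertPath" is a hypothetical setting name.
import * as fs from "node:fs";
import * as https from "node:https";
import * as vscode from "vscode";

function buildAgent(): https.Agent | undefined {
  const caPath = vscode.workspace
    .getConfiguration("llamaCoder")
    .get<string>("caCertPath");
  if (!caPath) return undefined;
  // Attach the custom CA to the request options directly instead of relying on
  // NODE_EXTRA_CA_CERTS, which the extension host may not pass through to its runtime.
  return new https.Agent({ ca: fs.readFileSync(caPath) });
}

// Usage with clients that accept an "agent" option (e.g. node-fetch):
// const res = await fetch(endpoint + "/api/generate", { method: "POST", body, agent: buildAgent() });
```

This mirrors the approach the commenter describes for Continue: the extension reads a CA path from its own configuration and wires it into every outgoing request, so it works regardless of how VS Code launches its extension host.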