JudiniLabs / code-gpt-docs


"Ollama is not running" but it is running #306

Open: jyung-hk opened this issue 4 months ago

jyung-hk commented 4 months ago

I am running the Windows version of Ollama, and CodeGPT can see the models in Ollama. However, it still shows "Ollama is not running" when I try to run any commands. Any suggestions? BTW, I have WSL enabled too; not sure if that is related. Thanks.
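For anyone hitting the same symptom, a quick sanity check is to confirm that the Ollama server itself is reachable over HTTP. The sketch below assumes Ollama's default port (11434) and its documented /api/tags endpoint, which lists locally installed models:

```python
# Sanity check: is the Ollama server reachable on its default port?
# Assumes Ollama's documented HTTP API at http://localhost:11434.
import json
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is running; installed models:")
    for m in models:
        print(" -", m["name"])
except OSError as exc:
    print("Could not reach Ollama:", exc)
```

If this prints the model list but CodeGPT still reports "Ollama is not running", the problem is more likely in how the extension talks to the server than in Ollama itself.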

jyung-hk commented 4 months ago

Hello Antonio, thanks for reaching out. I am not sure how "Ollama is not installed" was concluded. Open WebUI and AnythingLLM are both running fine against the installed Ollama instance; it is only CodeGPT that fails. In fact, when I select the model in CodeGPT, I can see an entry generated in the Ollama log.


On 13 Jul 2024, at 1:30 AM, Antonio Muñoz wrote:

Hello @jyung-hk, thank you for reaching out to us at CodeGPT. Analyzing your situation, it seems that you do not have Ollama installed on your computer. I recommend following our step-by-step guide in the Ollama documentation to resolve this issue: https://docs.codegpt.co/docs/tutorial-ai-providers/ollama. If any doubts or queries persist, please do not hesitate to contact us again. Is there anything else I can assist you with today?


jsanchezba commented 4 months ago

That happened to me too, but it was a while ago. Last week I tried again, updated Ollama to the latest version, and it just worked. Do you have the latest version of Ollama? Did you install the llama3-8b model from the CLI?
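A quick way to double-check that second question is to list what the local Ollama server has actually pulled. This is a sketch against the documented /api/tags endpoint; the exact tag to look for is an assumption (llama3:8b is how that model is usually named in Ollama's registry):

```python
# List locally pulled models and check for a specific tag.
# Assumes Ollama's documented /api/tags endpoint on the default port;
# the tag "llama3:8b" is an assumed spelling of the model name.
import json
import urllib.request

wanted = "llama3:8b"
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    names = [m["name"] for m in json.load(resp)["models"]]
print("Installed:", names)
print(wanted, "is", "installed" if any(n.startswith(wanted) for n in names) else "missing")
```

If the tag is missing, `ollama pull llama3:8b` from the CLI installs it.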

RFranquetF commented 3 months ago

This has happened to me as well. It seems to be a timeout problem: if you use a large model and don't have a good GPU, the model takes much longer to respond, and CodeGPT shows this message when the request times out. For example, with llama3.1:8b I have no problems, either in the shell directly with ollama or with CodeGPT. With llama3.1:70b I have no problems in the shell directly with ollama, although the response takes longer, but CodeGPT gives this error message.
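One way to test the timeout theory is to time a direct, non-streamed request against Ollama and compare the two models. This is only a sketch using Ollama's documented /api/generate endpoint; the 10-minute client timeout and the prompt are arbitrary, and I don't know what timeout CodeGPT itself uses:

```python
# Rough latency probe: time a non-streamed generation per model.
# Uses Ollama's documented /api/generate endpoint on the default port;
# the model tags match the comment above, the timeout is illustrative.
import json
import time
import urllib.request

def time_generation(model: str, prompt: str = "Say hello.") -> float:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=600) as resp:
        resp.read()
    return time.monotonic() - start

for model in ("llama3.1:8b", "llama3.1:70b"):
    print(model, f"{time_generation(model):.1f}s")
```

If the 70b run completes but takes longer than whatever timeout CodeGPT applies, that would match the behavior described above.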