karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

(ollama) Response Error: nil #270

Closed anonimitoraf closed 3 months ago

anonimitoraf commented 3 months ago

Hi @karthink, I was using this package with ollama just fine until I had to upgrade my ollama binary to the newest version (v0.1.30, to fix an unrelated error).

Now I'm getting Response Error: nil, Ollama error (nil): Malformed JSON in response.

Running ollama in my terminal works fine.

Here's my config:

  (setq gptel-model "mistral"
        gptel-log-level :debug
        gptel-backend (gptel-make-ollama "Ollama"
                        ;; Resolved via tailscale
                        :host "desktop:11434"
                        :stream t
                        :models '("mistral"
                                  "mixtral"
                                  "phind-codellama"
                                  "codellama"))

My gptel--known-backends is:

(("Ollama" . #s(gptel-ollama "Ollama" "desktop:11434" nil "http" t "/api/generate" nil
                             ("mistral" "mixtral" "phind-codellama" "codellama")
                             "http://desktop:11434/api/generate" nil))
 ("ChatGPT" . #s(gptel-openai "ChatGPT" "api.openai.com" #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> "https" t "/v1/chat/completions" gptel-api-key
                              ("gpt-3.5-turbo" "gpt-3.5-turbo-16k" "gpt-4" "gpt-4-turbo-preview" "gpt-4-32k" "gpt-4-1106-preview" "gpt-4-0125-preview")
                              "https://api.openai.com/v1/chat/completions" nil)))

Here's all I see in the *gptel-log* buffer:

{
  "gptel": "request body",
  "timestamp": "2024-03-30 17:51:51"
}
{
  "model": "mistral",
  "system": "You are a large language model living in Emacs and a helpful assistant. Respond concisely.",
  "prompt": "Hi",
  "stream": true
}

Let me know if I can provide more helpful info. Thanks!

karthink commented 3 months ago

My first thought was that perhaps the Ollama API changed, but there's no mention of that in the release notes. Could you try the following?

  1. Run (setq gptel-log-level 'debug)
  2. Use Ollama
  3. Check the log buffer -- there should be a curl command you can copy and paste into the terminal.

Could you run just the curl command and let me know what the output is? Nothing has changed on gptel's side as far as Ollama is concerned, so I think it's most likely a connection issue.
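
For reference, the logged command should look roughly like the sketch below. This is reconstructed from the request body in your log and the backend's http://desktop:11434/api/generate endpoint; the exact flags gptel emits may differ:

  # Minimal reproduction of the logged request (sketch, not the exact gptel command)
  curl --location --silent \
    --data '{"model": "mistral", "prompt": "Hi", "stream": true}' \
    http://desktop:11434/api/generate

If that hangs or errors out when run from the machine running Emacs, the problem is the connection rather than gptel.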

karthink commented 3 months ago

Also thanks for noticing the issue naming convention and prefixing the title with "(ollama)"!

jwr commented 3 months ago

As a data point, I am running ollama 0.1.30 on macOS 14.3.1 (installed via homebrew) with various models, and it generally works with gptel (i.e. I do not get the showstopper error shown above).

anonimitoraf commented 3 months ago

Oh man, I realized that my ollama instance was listening only on the localhost (127.0.0.1) interface, but I was trying to access it via a different one. :facepalm:
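
For anyone else who hits this: a quick check is whether the Ollama server is reachable from the machine running Emacs. If it is only bound to loopback, one option (a sketch, assuming you start the server manually rather than via a service manager) is to bind it to all interfaces with the OLLAMA_HOST environment variable:

  # Assumption: the server is started by hand. Bind Ollama to all interfaces
  # instead of the default 127.0.0.1 so other hosts (e.g. over Tailscale)
  # can reach it on port 11434.
  OLLAMA_HOST=0.0.0.0 ollama serve

  # Then verify from the machine running Emacs:
  curl http://desktop:11434/api/tags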