karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

Help using local backends #304

Closed · Ypot closed this 1 month ago

Ypot commented 1 month ago

Hi

I am trying to run a local LLM for the first time. I tried Ollama first: while I can use it from the CMD console in Windows, I get no response when using Ollama with gptel. If I turn Ollama off, I instead get error messages in Emacs.

Then I tried GPT4All. Llama 3 works in the GPT4All interface. ~With gptel it seems to run, but then it asks for a username and a password, which I don't know, so I can't use it either. Is a username and password required? I get: `error in process filter: json-read: End of file while parsing JSON` and `error in process filter: End of file while parsing JSON`~ ~After turning on the API server in the GPT4All interface, I get this message in Emacs: `error in process sentinel: Wrong type argument: integer-or-marker-p, nil [6 times]`~ It works after restarting Windows!
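(For anyone hitting the same "no feedback" symptom: one way to check whether Ollama is actually serving requests, independent of gptel, is to query its HTTP API directly. `11434` is Ollama's default port; this needs a running Ollama server, so it is a diagnostic sketch rather than something gptel itself requires.)

```shell
# Ask the local Ollama server for its installed models.
# If this fails to connect, gptel will not be able to reach it either.
curl http://localhost:11434/api/tags
```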

Settings:

(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '("mistral"))          ;List of models

(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
 :protocol "http"
 :host "localhost:4891"                 ;Where it's running
 :models '("Meta-Llama-3-8B-Instruct.Q4_0.gguf")) ;Available models
karthink commented 1 month ago

Are both Ollama and GPT4All working? Also, are you on the latest version of gptel?

Ypot commented 1 month ago

I updated gptel and both are working now. Ollama worked out of the box.

Thanks for your great work :-D