This occurs with local instances of both llama.cpp and Ollama, so perhaps I am missing something obvious. My Ollama setup:

And these are the results when I call gptel-send:

From a bash prompt, the same query gives expected results.
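As a cross-check from inside Emacs rather than bash, the server can also be queried with the built-in url library. This is only a sketch, assuming Ollama's standard /api/tags model-listing endpoint on the default localhost:11434 address used in the config below:

;; Sketch: fetch Ollama's model list from within Emacs. A response
;; here means the server itself is reachable, so any remaining
;; problem is on the Emacs/gptel side.
(require 'url)
(let ((buf (url-retrieve-synchronously "http://localhost:11434/api/tags" t)))
  (if buf
      (with-current-buffer buf
        (message "Ollama /api/tags: %s" (buffer-string)))
    (message "Could not reach Ollama on localhost:11434")))

The configuration suggested in the thread: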
(use-package gptel)

(setq-default gptel-backend
              (gptel-make-ollama
               "Ollama"                     ;Any name of your choosing
               :host "localhost:11434"      ;Where it's running
               :models '("mistral:latest")  ;Installed models
               :stream t)                   ;Stream responses
              gptel-model "mistral:latest")
That works. Thanks.
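For completeness, a backend set this way can also be exercised without opening a chat buffer, via gptel's programmatic entry point. A minimal sketch, assuming the configuration above is loaded (the prompt text here is made up; gptel-request and its two-argument callback are gptel's documented API):

;; Sketch: send a one-off prompt through the configured backend and
;; report the outcome in the echo area. On failure RESPONSE is nil
;; and the INFO plist carries the status.
(gptel-request
 "Reply with one word: pong"
 :callback (lambda (response info)
             (if (stringp response)
                 (message "Backend replied: %s" response)
               (message "Request failed: %S" (plist-get info :status)))))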