karthink opened 4 months ago
The response from Ollama is empty.
Could you run (setq gptel-log-level 'debug), try to use Ollama, and paste the contents of the *gptel-log* buffer? Please wait until either an error or a timeout occurs.
I got the following:
On gptel-log:
{
"gptel": "request Curl command",
"timestamp": "2024-02-11 13:06:13"
}
[
"curl",
"--disable",
"--location",
"--silent",
"--compressed",
"-XPOST",
"-y300",
"-Y1",
"-D-",
"-w(5242174a9fcb32555dea3157193c24d7 . %{size_header})",
"-d{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}",
"-HContent-Type: application/json",
"http://localhost:11434/api/generate"
]
It seems the problem may stem from Ollama itself. I attempted to execute the following command:
curl -X POST -d "{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}" -H "Content-Type: application/json" "http://localhost:11434/api/generate"
in the shell, but it hangs for hours without any response.
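For reference, when /api/generate does respond with "stream": true, Ollama sends back newline-delimited JSON objects, each carrying a fragment of the reply in a "response" field, with the final object marked "done": true. A minimal sketch of assembling such a stream, using hard-coded sample chunks rather than a live request:

```python
import json

def assemble_stream(ndjson_text):
    """Concatenate the "response" fields of Ollama's streaming
    newline-delimited JSON chunks into the full reply."""
    reply = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        chunk = json.loads(line)
        reply.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(reply)

# Sample chunks in the shape /api/generate streams back:
sample = "\n".join([
    '{"model":"mistral","response":"while ","done":false}',
    '{"model":"mistral","response":"x < 10 {","done":false}',
    '{"model":"mistral","response":"","done":true}',
])
print(assemble_stream(sample))  # while x < 10 {
```

If the shell curl above hangs with no output at all, not even a first chunk in this format, the server is never starting to answer, which points at Ollama rather than the client.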
Has Ollama ever worked for you on this machine?
I get the error when I run
gptel-send
with the following configuration. It also takes 10 minutes before the response arrives. I also sometimes get Response Error: nil.
{
"gptel": "request body",
"timestamp": "2024-02-11 11:13:45"
}
{
"model": "mistral",
"system": "You are a large language model living in Emacs and a helpful assistant. Respond concisely.",
"prompt": "Test",
"stream": true
}