Hi,
thanks for filing the issue. The error does not really make sense to me at this point, but can you please try the following and post the output:
reprex::reprex({
  library(rollama)
  options("rollama_verbose" = FALSE)
  query("why is the sky blue?")
  system2("curl", "-V", stdout = TRUE)
}, session_info = TRUE)
FYI: options("rollama_verbose" = FALSE)
gives you better error messages because callr
is not involved anymore (I realise this is confusing, but do not know at the moment how to do it better).
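As a quick sanity check before the full reprex, ping_ollama() (also used further down in this thread) reports whether the server is reachable at all:

library(rollama)
options("rollama_verbose" = FALSE)
# should report the server address if Ollama is reachable
ping_ollama()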
Very odd, I just encountered the same issue. But it has nothing to do with rollama. You can test that by using:
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "What color is the sky at different times of the day? Respond using JSON",
"format": "json",
"stream": false
}'
In my case this suddenly returns curl: (52) Empty reply from server.
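Since rollama sends the same request through httr2, you can also reproduce it from R — here is a minimal sketch, assuming the default local server and the llama2 model:

library(httr2)

# same request as the curl call above, sent through httr2
request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model = "llama2",
    prompt = "What color is the sky at different times of the day? Respond using JSON",
    format = "json",
    stream = FALSE
  )) |>
  req_perform() |>
  resp_body_json()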
And if you get an empty reply like me, also try:
docker exec -it ollama ollama run llama2
# or if you run ollama without docker
ollama run llama2
Both fail for me at the moment. Interestingly, other calls to the API, for example to list available models, succeed.
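If you want to check the same thing from R, list_models() should confirm that the model-listing endpoint still responds (a minimal sketch; as far as I can tell it wraps Ollama's /api/tags endpoint):

library(rollama)
# succeeds even while /api/generate returns an empty reply
list_models()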
It seems another process was hogging my vRAM. By turning off the GPU, you can force the model to use your system memory (RAM). So this works for me:
query("why is the sky blue?", model_params = list(num_gpu = 0))
This could be solved much better upstream! There are no logs or errors indicating that this is an OOM error.
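Until that is fixed upstream, a small wrapper can make the workaround less intrusive — query_cpu_fallback() below is a hypothetical helper, not part of rollama:

library(rollama)

# hypothetical helper: try the default (GPU) path first, then fall back
# to CPU-only inference (num_gpu = 0) if the request errors out
query_cpu_fallback <- function(prompt, ...) {
  tryCatch(
    query(prompt, ...),
    error = function(e) query(prompt, model_params = list(num_gpu = 0), ...)
  )
}

query_cpu_fallback("why is the sky blue?")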
On #1 -

library(rollama)
options("rollama_verbose" = FALSE)
query("why is the sky blue?")
#> Error in httr2::resp_body_json(): …

system2("curl", "-V", stdout = TRUE)

Tried also (FYI - I am using https://github.com/docker/genai-stack to launch my ollama):

options("rollama_verbose" = FALSE)
ping_ollama()
#> ▶ Ollama is running at http://localhost:11434!

query("why is the sky blue?", model_params = list(num_gpu = 0))
#> Error in httr2::resp_body_json():
#> ! Unexpected content type "text/plain".
#> • Expecting type "application/json" or suffix "json".
#> Run rlang::last_trace() to see where the error occurred.
I read somewhere that Ollama does not work with old versions of curl
(I can't find the source). The one I'm using is almost 4 years newer than yours (curl 8.5.0 (x86_64-pc-linux-gnu), Release-Date: 2023-12-06). I would suggest you update that. Then I would be interested in seeing what this does (in terminal, not R):
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "What color is the sky at different times of the day? Respond using JSON",
"format": "json",
"stream": false
}'
If this also fails after updating curl, this is an Ollama problem that I can't really help with.
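One more note on checking versions: the command-line curl and the libcurl that R's httr2 stack links against can differ, so it is worth looking at both (curl_version() is part of the curl package):

# libcurl version used by R's curl/httr2 stack
curl::curl_version()$version
# first line of the command-line curl's version output
system2("curl", "-V", stdout = TRUE)[1]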
Yes - I had to do a big LTS update! Thanks, now it works!!
Hello,
First, congrats on the documentation! Really extensive and nicely made - always an engaging sign!
Testing the package - installed with Ollama running on Linux...
When I try any query:
query("why is the sky blue?")
I get this error:
Error in httr2::resp_body_json(httr2::req_perform(httr2::req_error(httr2::req_body_json(httr2::req_url_path_append(httr2::request(server), …:
! Unexpected content type "text/plain".

More below:

Backtrace:
callr:::throw(callr_remote_error(remerr, output), parent = fix_msg(remerr[[3]]))
Subprocess backtrace: …
Any hints?
Thanks! Edouard