marcom / Llama.jl

Julia interface to llama.cpp, a C/C++ library for running language models
MIT License

Fix Ctrl-C not working in run_chat, run_llama #11

Closed marcom closed 8 months ago

marcom commented 8 months ago

Fixes #7.

Also only set GGML_METAL_PATH_RESOURCES on Apple machines.
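The platform gate can be sketched with Julia's `Sys.isapple()`. This is a minimal sketch, not the PR's actual code; the function name and the resource path are placeholders, and an `apple` keyword is added only so the behavior can be exercised off-platform:

```julia
# Sketch: only export GGML_METAL_PATH_RESOURCES on Apple machines,
# where llama.cpp looks up its Metal shader resources.
# `apple` defaults to the real platform check but can be overridden for testing.
function set_metal_resources!(env::AbstractDict, resources_dir::AbstractString;
                              apple::Bool = Sys.isapple())
    if apple
        env["GGML_METAL_PATH_RESOURCES"] = resources_dir
    else
        # On Linux/Windows, leave the variable unset so llama.cpp
        # does not try to locate Metal resources.
        delete!(env, "GGML_METAL_PATH_RESOURCES")
    end
    return env
end
```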

marcom commented 8 months ago

@svilupp: I changed the setting of GGML_METAL_PATH_RESOURCES so it is only done on Apple. Could you check that everything still works on Apple? (I'm on Linux, so I can't check here.)

svilupp commented 8 months ago

The GGML_PATH stuff works on Mac! The file is loaded as expected, happy to merge this change.

However, the SIGINT does NOT work.

Try it on your side with some long-running prompt, e.g. "count from 1 to 100". Does it work?

I've played around with disable_sigint() at different levels, but the process still keeps running in the background. When using llama.cpp directly in the terminal, Ctrl-C works as expected.
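For reference, one common Julia pattern is to let the long-running loop raise `InterruptException` on Ctrl-C and convert it into a clean stop. This is a hedged sketch, not the PR's actual fix: `step!` is a hypothetical stand-in for one token-generation step, and the hard part being debugged here is that the interrupt may not be delivered while execution sits inside a foreign `ccall`:

```julia
# Sketch: wrap a generation loop so Ctrl-C (InterruptException) stops it
# cleanly instead of leaving the process running in the background.
function generate_until_interrupt(step!::Function, max_steps::Integer)
    produced = 0
    try
        for _ in 1:max_steps
            step!()          # hypothetical single generation step
            produced += 1
        end
    catch e
        e isa InterruptException || rethrow()
        @info "generation interrupted by Ctrl-C after $produced steps"
    end
    return produced
end
```

Note that Julia only delivers `InterruptException` at safepoints in Julia code; a thread blocked inside llama.cpp's C++ code won't see it unless the signal is handled on the C side or the call returns periodically.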

marcom commented 8 months ago

@svilupp: It seemed to work for me in CPU mode on Linux; I think I tested with more than one thread. I have a deadline coming up in two days, so I'll investigate after that.

Edit: Just tested again, single-threaded as well as multi-threaded; both work on Linux.