simonw / llm-llama-cpp

LLM plugin for running models using llama.cpp

llama.cpp model not staying in memory with llm chat #15

Open m0nac0 opened 1 year ago

m0nac0 commented 1 year ago

When I use llm chat with a llama.cpp model, it generally works well, but it is quite slow because the model appears to be loaded into memory for every response and unloaded again afterwards (judging from the memory usage shown in Windows Task Manager).

Is there an option to keep the model in memory with llm chat and llm-llama-cpp?

If someone else experiences this, my current workaround is:

1. Start llama-cpp-python in its OpenAI-compatible server mode with `python -m llama_cpp.server --model path/to/model`.
2. Register that server as a model in the `extra-openai-models.yaml` file, as described in https://llm.datasette.io/en/stable/other-models.html#openai-compatible-models (see the sketch below).
3. When using the model with `llm prompt` or `llm chat`, pass `-o "max_tokens" 200`, because `max_tokens` otherwise seems to default to a very low 16 tokens.

This is faster, since the model stays in memory, but it is probably not ideal: I think the llama2-specific prompt template logic is no longer applied.
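For reference, here is a minimal sketch of what the `extra-openai-models.yaml` entry can look like. The `model_id` value `my-llama` is just an arbitrary name I chose, and the `api_base` assumes llama_cpp.server is running on its default port 8000:

```yaml
# extra-openai-models.yaml
# Registers the local llama-cpp-python server with llm as an
# OpenAI-compatible model. "my-llama" is an arbitrary alias;
# the api_base assumes llama_cpp.server's default port 8000.
- model_id: my-llama
  model_name: my-llama
  api_base: "http://localhost:8000/v1"
```

With that in place, the model can be used like any other llm model, e.g. `llm chat -m my-llama -o max_tokens 200`.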