pattontim opened 6 months ago
Same issue using the dev branch.
It crashes on this call:
const newSessionId = await window.llm.createSession(
"some_unique_session_id"
);
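Since the app crashes with no error message, a first diagnostic step is to wrap the call so a rejected promise is surfaced instead of taking the process down. The sketch below uses a stand-in `llm` object in place of the real `window.llm` bridge (the stand-in's validation logic is purely illustrative, not Reor's actual behavior):

```javascript
// Stand-in for the real `window.llm` preload bridge, for illustration only.
const llm = {
  createSession: async (sessionId) => {
    if (typeof sessionId !== "string" || sessionId.length === 0) {
      throw new Error("invalid session id");
    }
    return sessionId;
  },
};

// Wrap the call so a rejection is logged rather than crashing silently.
async function createSessionSafely(sessionId) {
  try {
    return await llm.createSession(sessionId);
  } catch (err) {
    console.error("createSession failed:", err.message);
    return null;
  }
}

createSessionSafely("some_unique_session_id").then((id) =>
  console.log("session:", id)
);
```

If the real failure happens in the main process (e.g. inside node-llama-cpp's native bindings), a renderer-side try/catch won't catch it, but it at least distinguishes an ordinary rejected promise from a hard native crash.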
This looks like a potential issue with both the stable and beta versions of node-llama-cpp: if I attempt to load a chat with node-llama-cpp from the command line on a fresh install, the model loads successfully but the process exits without any failure reason.
Yes, unfortunately we've been having several problems with node-llama-cpp and have moved entirely to using Ollama: #135. This should give us much better stability and support across different machines. We will merge and release in the next day or so!
@pattontim the main branch now fully uses Ollama to run local models!
It'd be great if you could test it and let me know whether it works for you.
In Settings->LLM->Add New Local LLM, paste in the name of the model you want from the Ollama library, and Reor will download and attach it for you.
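For anyone who wants to confirm that Ollama itself works on their machine before testing through Reor, something like the following should do it (assuming the `ollama` CLI is installed and the daemon is running; the model name is just an example from the Ollama library):

```shell
# Pull a model from the Ollama library (the name must match a library entry)
ollama pull openhermes
# Confirm it downloaded
ollama list
# Quick smoke test from the terminal
ollama run openhermes "Say hello"
```

If this works in the terminal but Reor still fails, the problem is on the app side rather than with the local model runtime.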
Describe the bug
When I try to generate a response using a local model, the app immediately crashes without an error message.
Terminal:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The chat completion finishes with a response.
Additional context
Attempted using the mixtral 26 GB model, but the issue also presents with the smallest model, openhermes-2.5-mistral-7b.Q2_K.gguf (3 GB). The issue presents even if the GPU is disabled.