ohmplatform / FreedomGPT

This codebase is for a React and Electron-based app that runs the FreedomGPT LLM locally (offline and private) on Mac and Windows, using a chat-based interface.
http://www.freedomgpt.com
GNU General Public License v3.0

"yarn start" fails to spawn "llama.cpp/main" on Linux #108

Closed: Mnemotechnician closed this issue 1 year ago

Mnemotechnician commented 1 year ago

Hi, I use Manjaro Linux and I decided to install FreedomGPT on it. I did the usual steps specified in the readme, and the app seemed to work correctly at first.

Terminal log:

```sh
git clone --recursive --depth 1 https://github.com/ohmplatform/FreedomGPT.git Freedomgpt
cd Freedomgpt
yarn install
yarn start:prod
```

However, when I tried to start the app and load a model, it failed without any obvious error. With yarn start:prod I got only a single line in the terminal saying "Child process exited with code 126 and signal null". When I ran yarn start, however, I got a more detailed log showing that the app tries to run ./llama.cpp/main, which does not exist.

Error log:

```
Server listening on port 8889
update-electron-app config looks good; aborting updates since app is in development mode
Failed to start child process: Error: spawn /home/fox/Apps/Freedomgpt/llama.cpp/main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn /home/fox/Apps/Freedomgpt/llama.cpp/main',
  path: '/home/fox/Apps/Freedomgpt/llama.cpp/main',
  spawnargs: [
    '-m', '/home/fox/Documents/ml/ggml-model-q4_0.bin',
    '-ins', '--ctx_size', '2048',
    '-n', '-1',
    '-ins',
    '-b', '256',
    '--top_k', '10000',
    '--temp', '0.2',
    '--repeat_penalty', '1',
    '-t', '7'
  ]
}
```

However, I was able to find a main executable in ./llama.cpp/bin/, which turned out to be exactly the executable the app was looking for. After I created a symlink to it with `ln -s $(pwd)/llama.cpp/bin/main llama.cpp/`, FreedomGPT was able to load a model and worked correctly after running yarn start again.
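In other words, from the repository root:

```sh
# the build put the binary here instead of llama.cpp/main
ls llama.cpp/bin/main

# symlink it to the path the app actually spawns
ln -s "$(pwd)/llama.cpp/bin/main" llama.cpp/
```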

I'm unfamiliar with this codebase and have no idea why that happened, so I'm just leaving this explanation here in the hope that it will get fixed or accounted for.

mikerossiter commented 1 year ago

I had this and I just did what it says in the Readme, in this order:

```sh
git clone --recursive https://github.com/ohmplatform/FreedomGPT.git freedom-gpt
cd freedom-gpt
yarn install
```

then:

```sh
cd llama.cpp
make
```

then back to the root:

```sh
cd ..
yarn start
```
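The Makefile build should drop the main binary directly in llama.cpp/, which is the path the app spawns, so you can check it is there before starting:

```sh
ls -l llama.cpp/main
```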

Try that. GitHub version of turning it on and off again.

Mnemotechnician commented 1 year ago

Interesting, maybe it should've been made clear in the readme that you have to run `cd llama.cpp && make && cd ..` before running yarn... I'll close the issue in that case.
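So on Linux the complete sequence would presumably be:

```sh
git clone --recursive https://github.com/ohmplatform/FreedomGPT.git freedom-gpt
cd freedom-gpt
yarn install

# build the llama.cpp binary the app expects at llama.cpp/main
cd llama.cpp
make
cd ..

yarn start
```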