ItsPi3141 / alpaca-electron

The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer
MIT License

GGML v3 support #87

[Open] Steelman14aUA opened this issue 1 year ago

Steelman14aUA commented 1 year ago

The new GGML v3 format is not supported; please add support for it.

niizam commented 1 year ago

Just replace chat.exe with main.exe from the llama.cpp binary release in the Alpaca Electron installation folder (keep the chat.exe filename). The exact location is C:\Users\username\AppData\Local\Programs\alpaca-electron\resources\app\bin\
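
For anyone who wants to script that swap, here is a minimal Python sketch of the replacement described above. The install path is the one given in this thread (adjust `username`); the download location of main.exe is hypothetical.

```python
# Minimal sketch of the manual swap: back up the bundled chat.exe, then
# drop in llama.cpp's main.exe under the same name.
import shutil
from pathlib import Path

BIN_DIR = Path(r"C:\Users\username\AppData\Local\Programs\alpaca-electron\resources\app\bin")
MAIN_EXE = Path(r"C:\Downloads\llama-cpp\main.exe")  # hypothetical download location

chat = BIN_DIR / "chat.exe"
if chat.exists():
    shutil.copy2(chat, BIN_DIR / "chat.exe.bak")  # keep the original as a backup
shutil.copy2(MAIN_EXE, chat)  # main.exe must take chat.exe's name
print(f"Replaced {chat} with {MAIN_EXE}")
```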

Steelman14aUA commented 1 year ago

Replaced chat.exe with main.exe (renamed and replaced) and tried the AVX, AVX2, and AVX-512 builds. It loads the model but gives no response, and there is no CPU load.

niizam commented 1 year ago

> Replaced chat.exe with main.exe (renamed and replaced) and tried the AVX, AVX2, and AVX-512 builds. It loads the model but gives no response, and there is no CPU load.

I guess you should try the OpenBLAS build for CPU inference. Also, don't forget to extract the OpenBLAS .dll into the same folder as the executable.
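
A rough diagnostic sketch, since "no response, no CPU load" suggests the binary never actually runs: check which DLLs sit next to the executable, then run it outside Electron. Assumptions: the binary accepts llama.cpp's standard `-m`/`-p`/`-n` flags, and the model path below is hypothetical. If this prints tokens, the build works and the problem is in the app integration.

```python
# Check for DLLs next to chat.exe, then smoke-test the binary directly.
import subprocess
from pathlib import Path

BIN_DIR = Path(r"C:\Users\username\AppData\Local\Programs\alpaca-electron\resources\app\bin")
MODEL = Path(r"C:\models\ggml-model-q4_0.bin")  # hypothetical model path

dlls = [d.name for d in BIN_DIR.glob("*.dll")]
print("DLLs next to chat.exe:", dlls or "none - extract the OpenBLAS .dll here")

# Run the swapped-in binary with a short prompt; any generated text means
# the executable itself is fine.
result = subprocess.run(
    [str(BIN_DIR / "chat.exe"), "-m", str(MODEL), "-p", "Hello", "-n", "16"],
    capture_output=True, text=True, timeout=300,
)
print(result.stdout or result.stderr)
```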