@ItsPi3141 Same here with the new version. I downloaded 1.05 and the new 7B model ggml-model-q4_1 and nothing loads. The CPU gauge sits at around 13% and the RAM at 7.7GB/23.9GB. The old (first version) still works perfectly btw.
The new version takes slightly longer to load into RAM the first time. Make sure it's on an SSD and give it about two or three minutes.
@ItsPi3141 thanks. Actually, it's now saying it can't load the model at startup, and it displays the modal to select the model and directory. I'm now downloading other models from your updated link to see if that helps.
@ItsPi3141 OK, downloaded other models, and it looks like for me the 4_1 models don't work, but the 4_0 models do. And faster than the first version too. Nice! I wish I knew what the difference was between Native, LoRA, etc. in the model descriptions. :)
@ItsPi3141 just tested the 13B model (again 4_0) and it also works. Much much slower to start obviously, and it looks like it's a bit more delusional. This is the result when I asked it what a 'weasel' is.
"A "weasal" refers to an ancient way of measuring distance traveled by pack animals or humans. One would tie on end around the waist and hold one hand at shoulder height, then let go once they reached their destination point (the other far side). The number of times you had gone round without having dropped your "weasal" was a direct reflection for how much distance traveled since starting out that day."
Huh?? :))
Might just be a bad seed lol. Mine is pretty good.
So just repost the question?
Yeah I guess so
I found the fix!
Since Alpaca Electron didn't work, I tried llama.cpp (which it is based on). When I ran llama.cpp I got an error saying "vcruntime140_1.dll" was missing. The fix is apparently to install https://aka.ms/vs/17/release/vc_redist.x64.exe (the Microsoft Visual C++ runtime).
After installing that, Alpaca Electron successfully loaded models into RAM and started producing output (and llama.cpp worked as well).
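For anyone seeing the same symptom, here's a quick way to confirm whether those runtime DLLs are actually present before reinstalling anything. This is just a sketch for illustration (the DLL list assumes a typical MSVC build; it isn't part of Alpaca Electron), run with a stock Python install on Windows:

```python
# Sketch: check whether the MSVC runtime DLLs a typical llama.cpp build
# needs can be loaded. Assumed DLL list, not taken from Alpaca Electron itself.
import ctypes

RUNTIME_DLLS = ["vcruntime140.dll", "vcruntime140_1.dll", "msvcp140.dll"]

for name in RUNTIME_DLLS:
    try:
        ctypes.WinDLL(name)  # raises OSError if Windows can't find the DLL
        print(f"{name}: OK")
    except OSError:
        print(f"{name}: MISSING - install the x64 redistributable from "
              "https://aka.ms/vs/17/release/vc_redist.x64.exe")
```

If any of those come back missing, installing the redistributable linked above should fix it.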
@ItsPi3141 The Microsoft C++ runtime is a dependency. Probably should put this in a FAQ under "Why doesn't the model load?"
This is not the first way of installing Alpaca that I have tried. Before that I tried Dalai, and I was supposed to install Visual Studio 2022 with "Desktop development with C++". I think that redist was included in that package, so this fix doesn't apply to me. And I thought Electron also installed this redist... but obviously not. :P I can't check this anymore, because I don't plan to install Windows on that device again... (for now)
Thanks for letting me know. I thought every Windows computer had that preinstalled. I will update the FAQ. :)
Are there debug logs? Electron isn't even loading the model into RAM. I've got 12GB. But I don't know if my CPU even supports AVX... The model is the 7B. I'm on Windows 11. Electron keeps loading and loading, but nothing happens.
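If you want to check AVX support yourself, here's a quick sketch (it assumes the third-party py-cpuinfo package, `pip install py-cpuinfo`; it's not something bundled with Alpaca Electron):

```python
# Sketch: print whether the CPU advertises AVX/AVX2, which llama.cpp-based
# builds generally expect. Requires the py-cpuinfo package.
from cpuinfo import get_cpu_info

flags = set(get_cpu_info().get("flags", []))
print("AVX: ", "yes" if "avx" in flags else "no")
print("AVX2:", "yes" if "avx2" in flags else "no")
```

If AVX comes back "no", that alone could explain the model never loading.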