guinmoon / LLMFarm

Run llama and other large language models on iOS and macOS offline using the GGML library.
https://llmfarm.tech
MIT License

Spews complete nonsense after any prompt #74

Open Stooovie opened 5 months ago

Stooovie commented 5 months ago

Tried a StableLM model with llama inference. This is the answer to "hi!":

```
{ ========== [EXPL] | 1) {prompt}

int main(void) {

define PRINT0(x) cout << #x ": " << x << endl

//PRINTTEST test_pair<short int, string> a; cout << "sizeof(std::pair <char, std::basic_string , long, short>) = "; printtest1(a); }
```

I have not found any way to get any output that wouldn't be total BS.

guinmoon commented 5 months ago

Hi. Try disabling Mmap and setting the correct prompt template for your model.
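
For reference, a mismatched prompt template is a common cause of this kind of garbage output: the model was fine-tuned to expect specific control tokens around the user's message, and without them it free-associates. As an example (an assumption here — check the exact model card for your StableLM variant), the StableLM-Tuned-Alpha models expect a template along these lines:

```
<|SYSTEM|>You are a helpful assistant.<|USER|>{prompt}<|ASSISTANT|>
```

where `{prompt}` is the placeholder LLMFarm substitutes with your input. If the template field in the chat settings is left at a llama-style default, the StableLM tokens are never emitted and the output degrades as described above.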

trufae commented 2 months ago

This issue is related to https://github.com/guinmoon/LLMFarm/issues/91.