guinmoon / LLMFarm

Run llama and other large language models offline on iOS and macOS using the GGML library.
https://llmfarm.site
MIT License

Spews complete nonsense after any prompt #74

Open Stooovie opened 2 weeks ago

Stooovie commented 2 weeks ago

Tried a StableLM model with llama inference. This is the answer to "hi!":

{ ========== [EXPL] | 1) {prompt}

int main(void) {

define PRINT0(x) cout << #x ": " << x << endl

//PRINTTEST test_pair<short int, string> a; cout << "sizeof(std::pair <char, std::basic_string , long, short>) = "; printtest1(a); }

I haven't found any way to get output that isn't total BS.

guinmoon commented 1 week ago

Hi. Try disabling Mmap and setting the correct prompt template for the model.
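
For reference, instruction-tuned StableLM variants each expect a specific chat template, and output tends to degenerate into noise like the above if the template is wrong. As an illustrative sketch only (the exact tokens differ between StableLM variants, so check the model card for the one you downloaded), the StableLM-Zephyr models expect roughly this, where {prompt} is the placeholder for the user's message:

<|user|>
{prompt}<|endoftext|>
<|assistant|>

Older StableLM-Tuned-Alpha models instead use uppercase <|USER|> and <|ASSISTANT|> markers, so a template copied from one variant will not necessarily work with another.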