nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License
70.77k stars 7.71k forks

bartowski Reflection Llama 3.1 70b creates just weird characters as output on M4 Pro 48GB #3190

Open incredibleole opened 1 week ago

incredibleole commented 1 week ago

When using the 70b Llama model, it just generates garbage (random characters) as output. This seems to be the case with other 70b models as well. Using a MacBook Pro with M4 Pro and 48 GB of RAM.


AndriyMulyar commented 1 week ago

What's the tok/s you get on the 48GB M4?

It is likely a prompt template issue with the GGUF; we will investigate.
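For context on the prompt-template hypothesis: Llama 3.1 instruct models expect their messages wrapped in model-specific special tokens, and if the client applies a template meant for a different model family, the model can degenerate into random characters. Below is a minimal sketch of hand-building the Llama 3.1 instruct wrapper; whether this matches the template GPT4All ships for this particular quant is an assumption.

```python
# Hypothetical sketch: the Llama 3.1 instruct prompt format.
# A wrong or missing template means the model never sees these
# special tokens in the positions it was trained on, which can
# produce garbage output.

def llama31_prompt(user_message: str) -> str:
    """Wrap a single user message in the Llama 3.1 instruct template."""
    return (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama31_prompt("Why is the sky blue?"))
```

Comparing this against the template configured in GPT4All's model settings for the affected GGUF would confirm or rule out the hypothesis.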

incredibleole commented 11 hours ago

I get 9-10 tokens/sec. I also tried thebloke-kafkaLM-70b; that one doesn't output anything at all and hangs at 1 token/sec.