enzyme69 opened this issue 1 year ago
I use this prompt template:
And using your prompt, it responds like this:
Additionally, gpt4all does not currently set the rhl_eps param of llama.cpp to the correct value for Llama 2.
@niansa rhl_eps? You mean the eps value in RMSNorm layer?
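For context, the eps in question is the small constant RMSNorm adds inside the square root for numerical stability. Llama 2 checkpoints expect 1e-5, while the original LLaMA used 1e-6, so loading a Llama 2 model with the older default can subtly degrade outputs. Below is a minimal NumPy sketch of RMSNorm; the function name and the constants are illustrative and not taken from the gpt4all or llama.cpp source:

```python
import numpy as np

def rms_norm(x: np.ndarray, weight: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """RMSNorm as used in LLaMA-family models.

    eps is added to the mean of squares before taking the square root;
    Llama 2 was trained with 1e-5, the original LLaMA with 1e-6.
    """
    # Mean of squares over the hidden dimension
    variance = np.mean(x.astype(np.float32) ** 2, axis=-1, keepdims=True)
    # Normalize by the root-mean-square, then apply the learned scale
    return weight * (x / np.sqrt(variance + eps))

# Tiny demonstration that the choice of eps shifts the output slightly
hidden = np.random.randn(1, 8).astype(np.float32)
scale = np.ones(8, dtype=np.float32)
print(rms_norm(hidden, scale, eps=1e-5) - rms_norm(hidden, scale, eps=1e-6))
```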
Issue you'd like to raise.
I am using a Llama 2 model that I got from here: https://gist.github.com/adrienbrault/b76631c56c736def9bc1bc2167b5d129
When I run it with the command from that gist, the Llama 2 model seems noticeably smarter than the one I am running through the gpt4all interface. I wonder why. Is there a setting or a temperature value I need to adjust manually? (A sampling-settings sketch follows below.)
Suggestion: No response
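The gap is most likely down to sampling settings and the prompt template rather than the model weights themselves. As a minimal sketch, assuming the gpt4all Python bindings (`pip install gpt4all`) and an illustrative model filename (not the poster's actual setup), these are the generation parameters worth comparing against whatever the llama.cpp command in the gist passes:

```python
from gpt4all import GPT4All

# Illustrative filename; substitute the GGML file you actually downloaded.
model = GPT4All("llama-2-13b-chat.ggmlv3.q4_0.bin")

output = model.generate(
    "Explain what RMSNorm does in one sentence.",
    max_tokens=200,       # length cap for the reply
    temp=0.7,             # lower = more deterministic, higher = more varied
    top_k=40,             # sample only from the 40 most likely tokens
    top_p=0.4,            # nucleus-sampling cutoff
    repeat_penalty=1.18,  # discourage verbatim repetition
)
print(output)
```

Matching temp, top_k, top_p, the repeat penalty, and the prompt template to the llama.cpp run should make the two setups much more comparable.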