It looks like stop words are missing from the default llama2-uncensored Modelfile; these tell the LLM when to stop generating more text.
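For context, the same stop strings can also be passed per request through Ollama's generate API. This is only an illustration of what a stop word does, not the fix itself; it assumes an Ollama server running on the default localhost:11434 port:

$ curl http://localhost:11434/api/generate -d '{
    "model": "llama2-uncensored",
    "prompt": "hello",
    "stream": false,
    "options": { "stop": ["### Input:", "### Response:", "### human"] }
  }'

Generation is cut off as soon as the model emits any of the strings listed in options.stop, which is exactly what the PARAMETER stop lines in the workaround below bake into the model itself.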
As a workaround until this gets fixed, you can create your own llama2-uncensored Modelfile with the correct stop words. Here is how to do that:
1. Create a Modelfile with the following contents:
FROM llama2-uncensored:latest
TEMPLATE """### HUMAN:
{{ .Prompt }}
""" PARAMETER stop "### Input:" PARAMETER stop "### Response:" PARAMETER stop "### human"
2. Create the custom model in Ollama via the CLI:
$ ollama create llama2-uncensored:custom -f path/to/Modelfile
3. Now you can run it, and generation should stop as soon as one of the stop patterns is detected.
$ ollama run llama2-uncensored:custom
>>> hello
Hello back!
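To double-check that the stop words were actually baked into the new tag, you can print its Modelfile back out (assuming a CLI version that supports the --modelfile flag); the output should include the three PARAMETER stop lines from step 1:

$ ollama show --modelfile llama2-uncensored:custom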
I'm using the uncensored model; the issue happened with llama2-uncensored:latest, llama2-uncensored:70b, and every other uncensored model I tried. Sometimes when I prompt the model, after it has produced a response it will prompt itself with something like an ### Input: tag followed by more generated text. This happens randomly, and sometimes the ### Input tag becomes a ### human tag. Any idea why this happens?