Chainfire closed this issue 1 year ago
Essentially, for any of the fine-tuning data with a custom system prompt, the training code was not adding the system prompt, nor was it prefixing the instructions with USER:. Surprisingly, it still worked moderately well, but the training code has been fixed.
I have occasionally noticed blank responses as well, not entirely sure what the cause is yet.
The new versions are here:
Be sure to update your prompt formatting to use newlines (if you are using the standard USER/ASSISTANT style):
A chat.
USER: {prompt}
ASSISTANT:
No newline or space after "ASSISTANT:"
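The format above can be sketched as a small helper. This is only an illustration of the layout described in this thread (the function name and default system prompt are my own, not from the training code); the key points are the newline separators and the lack of any trailing space or newline after "ASSISTANT:":

```python
def build_prompt(user_message: str, system_prompt: str = "A chat.") -> str:
    # System prompt, USER turn, and ASSISTANT tag are joined by newlines.
    # Note: no space or newline after "ASSISTANT:" -- generation should
    # continue directly from the colon.
    return f"{system_prompt}\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("What is the capital of France?"))
```

For multi-turn chats you would presumably append each prior "USER: ... ASSISTANT: ..." exchange before the final "ASSISTANT:" tag, following the same newline convention.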
Thanks. Not sure how I searched and still missed this, apparently multiple times, today. Will try again with the new files and verify my prompt.
Thanks, 2.2 seems to work significantly better. I've converted/uploaded it in EXL2 format too - https://huggingface.co/JorritJ/spicyboros-c34b-2.2-4.0bpw-h6-exl2 .
It still sometimes refuses to give anything but a blank answer. If anyone ever figures out why that is, I'd be happy to learn.
As noted on HuggingFace:
This model is a bit broken due to a prompt formatting bug in the training code! 2.2 will be available soon and should fix this
Can you describe what exactly is wrong with the prompting? I'm having various issues but am not sure if they are related. Most prominently, the model often just stops responding for a while in chats, even with repeated prompts.
Also, is there an ETA on 2.2 for this variant? Days? Weeks? Months? Up in the air?