Closed hobolyra closed 3 weeks ago
What operating system are you on, and are you using the bundled version or running from source?
Also, what auto-captioner settings are you using?
Windows 10, the bundled version. Settings: Use GPU, load in 4-bit, insert after tags, a few entries in the "discourage" prompt, plus a general prompt; the rest default. I have tried changing the token count up and down, tweaking a few other settings, and removing all prompts and discouraged words, but I still get the same error each time.
It seems to be caused by an empty string in "Discourage from caption". This can happen if there is a trailing comma, for example.
I will fix it so that empty strings are removed before being sent to the model.
In the meantime, make sure that each comma is followed by some text.
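A minimal sketch of the described fix, assuming the "Discourage from caption" field is a comma-separated string (the function name and field handling here are illustrative, not the project's actual code):

```python
def parse_tag_list(raw: str) -> list[str]:
    """Split a comma-separated field, trim whitespace, and drop
    empty entries so a trailing comma can't produce an empty string."""
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

# A trailing comma previously yielded an empty "" entry; now it is dropped.
print(parse_tag_list("blurry, watermark, "))  # → ['blurry', 'watermark']
```

Filtering at parse time means the model never sees an empty discourage token, regardless of how the user formats the field.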
The Auto-captioner loads the shards, then throws this error as soon as it tries to start:
Using Xtuner/llava-llama-3-8b-v1-1-transformers. I can run models up to 30B in 4-bit locally through other means with 24 GB of VRAM, so I know the system/Python requirements are met. Is it just something about the Transformers setup in this program that it doesn't like?
WD works. I tried another LLM captioner (CogVLM-chat) and got the same error, word for word.