Open cchance27 opened 7 months ago
Yes, there is a token limit for prompts, and, as with sample sizes, it cannot be changed easily. If the prompt is too long, it simply won't work. I believe Apple's implementation of the SD pipeline truncates the input when it's too long and prints a warning in the logs. I was planning to do something similar, but I keep forgetting about it. Since a warning like that can easily be missed by the user, I think I'll stick to raising an error, but with a more helpful message.
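For reference, a minimal sketch of the truncate-and-warn approach (the names here are placeholders for illustration, not the actual pipeline code):

```python
import logging

logger = logging.getLogger(__name__)

# CLIP's fixed context length, which the Core ML text encoder is compiled for.
MAX_PROMPT_TOKENS = 77

def encode_prompt(tokenizer, prompt: str):
    """Tokenize a prompt and truncate it to the model's fixed context length.

    `tokenizer` is assumed to return a flat list of token ids; this is an
    illustrative sketch, not the real coreml-suite API.
    """
    token_ids = tokenizer(prompt)
    if len(token_ids) > MAX_PROMPT_TOKENS:
        logger.warning(
            "Prompt is %d tokens but the text encoder accepts at most %d; truncating.",
            len(token_ids),
            MAX_PROMPT_TOKENS,
        )
        token_ids = token_ids[:MAX_PROMPT_TOKENS]
    return token_ids
```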
Doesn't this limit technically exist in standard SD as well? Apps like A1111 work around it with token chunking, batching 75 tokens at a time. Wouldn't a similar form of batching be possible on the Core ML side?
A workaround that seems to work, though I don't think it's how A1111 does it: I can manually split the longer prompt across multiple embed nodes, then use Conditioning (Combine) nodes on them to end up with a single final conditioning. A rough sketch of the chunking idea is below.
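Roughly what I mean by chunking, as a sketch of the A1111-style approach (the function names are placeholders, not the actual API, and it isn't exactly what my Conditioning (Combine) workaround does):

```python
import torch

CHUNK = 75  # tokens per chunk, leaving room for the BOS/EOS tokens CLIP adds

def encode_long_prompt(tokenize, encode_chunk, prompt: str) -> torch.Tensor:
    """Split a long prompt into 75-token chunks, encode each chunk separately,
    and concatenate the embeddings along the sequence dimension.

    `tokenize` and `encode_chunk` stand in for whatever tokenizer / text-encoder
    calls the pipeline exposes; none of these names are the real coreml-suite API.
    """
    ids = tokenize(prompt)  # flat list of token ids, without BOS/EOS
    chunks = [ids[i:i + CHUNK] for i in range(0, len(ids), CHUNK)] or [[]]
    # Each encoded chunk is assumed to come back as (1, 77, hidden_dim) after padding.
    embeddings = [encode_chunk(chunk) for chunk in chunks]
    return torch.cat(embeddings, dim=1)  # (1, 77 * n_chunks, hidden_dim)
```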
Updated the title because I realized what's causing this. I'm trying to recreate a workflow whose negative prompt is very long, and that completely breaks things; it seems like the KSampler is limited to 77 tokens.
Workflow: easywf.json

I ran updates for Comfy and CMLS, but I'm getting the following error: