Closed: piovis2023 closed this issue 2 months ago
In the scene prompts, each character's prompt needs to appear separately at least twice. In this issue, Rose needs to appear at least twice in the scene_prompts.
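For reference, a minimal layout that satisfies this rule might look like the following. The character description and scene wording here are made up, and the bracketed character-tag style is assumed from the repo's examples; the point is only that the [Rose] tag shows up in at least two scene lines:

```
character prompt:
[Rose] a woman with long red hair, wearing a green dress

scene_prompts:
[Rose] waking up in her cabin, morning light
[Rose] walking along the deck of the ship at sunset
```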
I have noticed another error in your workflow. If you want to use "encode-repo", you need to change the path from C:\xxx\xxx to C:/xxx/xxx. If you have already downloaded the model via the preset huggingface_hub, this field can be kept at the default "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k". Changing it here is only for the convenience of those who prefer to use a local model, or whose huggingface_hub cache is not on the default C: drive.
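As a rough illustration of the two options (the local folder location below is made up; only the forward-slash format matters):

```
laion/CLIP-ViT-bigG-14-laion2B-39B-b160k                     <- default, fetched through huggingface_hub
D:/models/clip_vision/CLIP-ViT-bigG-14-laion2B-39B-b160k     <- hypothetical local copy, note the forward slashes
```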
Thanks @smthemex for the very quick reply.
EXCELLENT work and EXCELLENT support.
Here is my corrected workflow (thanks to you):
A few additional queries (I can raise new tickets for each if that is useful to you):
How can we put 2+ characters in the same scene? (I tried to copy your workflow example image.) I get the following error: Error occurred when executing Storydiffusion_Sampler: list index out of range
How can we control camera distance? e.g. far away shot
How can we improve the closeness of the character?
Can we include other background characters in the background scene? (e.g a crowd cheering)
Do you have a workflow that includes both text-to-image and img2img? I'd be curious to have temporary characters in my comic without having to supply all characters (e.g., a close-up of a waiter dropping a glass in front of Rose).
Currently, only two characters are supported in the same image. To enable this feature, the "(roleA and roleB)" form is required in the scene prompt. Camera distance can only be influenced through the prompt, for example "medium shot". If you want certain special perspectives, you may need to introduce a ControlNet reference image. Note that the ControlNet function is only suitable for scenes where two characters are in the same image; the original method did not provide it for single-character scenes. Perhaps you can try another, similar story node I created based on MS-Diffusion, which supports ControlNet for all scenes. I noticed that my sample images are all outdated, and I will update the image examples and corresponding workflows soon.
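If it helps, a two-character scene line might look something like this. The tag names and wording are placeholders, and this assumes "(roleA and roleB)" means both character tags joined by "and" in a single scene prompt; check the repo README for the exact syntax:

```
[Rose] and [Jack] standing together at the bow of the ship, sunset
```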
Trying the img2img workflow from the repo with minimal changes.
Can someone please help and educate me? Thank you
Error: Error occurred when executing Storydiffusion_Sampler: exceptions must derive from BaseException ...
raise f"{character_key} not have enough prompt description, need no less than {id_length}, but you give {len(index_list)}"
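Two things are happening in that traceback. The underlying complaint is the same rule as above: a character key appears in fewer scene prompts than id_length requires. It never surfaces cleanly, though, because the quoted line raises a plain f-string, and Python only allows raising objects derived from BaseException, which is where "exceptions must derive from BaseException" comes from. A minimal sketch of how that check could be written so the real message is shown (variable names taken from the quoted line):

```python
if len(index_list) < id_length:
    # Raise a real exception type so the actual message is reported
    # instead of "exceptions must derive from BaseException".
    raise ValueError(
        f"{character_key} does not have enough prompt descriptions, "
        f"need no less than {id_length}, but you gave {len(index_list)}"
    )
```

In practice, the workflow-side fix is the same as before: make sure every character defined in the character prompt appears in at least id_length scene prompts.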