Acly / krita-ai-diffusion

Streamlined interface for generating images with AI in Krita. Inpaint and outpaint with optional text prompt, no tweaking required.
https://www.interstice.cloud
GNU General Public License v3.0

ERROR - LIVE - Server execution error: Error while deserializing header: HeaderTooLarge #61

Closed by makeitrad 9 months ago

makeitrad commented 9 months ago

I see the following error in red letters when I hit the play button in the live view. Installing all the nodes and models went fine, and the app generates images and connects to my local ComfyUI install without any problems.

Server execution error: Error while deserializing header: HeaderTooLarge

From the ComfyUI console I see this:

```
Starting server

To see the GUI go to: http://0.0.0.0:8188
got prompt
model_type EPS
adm 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/zvi/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/zvi/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/zvi/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/zvi/ComfyUI/nodes.py", line 569, in load_lora
    lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
  File "/home/zvi/ComfyUI/comfy/utils.py", line 13, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
  File "/home/zvi/miniconda3/envs/comfy3/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Prompt executed in 0.58 seconds
```

Any help greatly appreciated!

This is running Ubuntu 20.04 LTS with yesterday's build of ComfyUI. I tried from both the Linux and Mac versions of Krita.

makeitrad commented 9 months ago

Just reinstalled all the models to make sure I wasn't assuming I already had the right ones. Still seeing the same error.

Acly commented 9 months ago

I usually get this error when a model file is corrupt because it wasn't downloaded properly.
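For what it's worth, the error itself is fairly mechanical: a .safetensors file starts with 8 bytes that declare the size of its JSON header, and a broken download (for example an HTML error page saved under the model's name) decodes to a nonsensical value there. A rough sketch like the one below (the path is just a placeholder) can show whether a given file is damaged:

```python
import json
import struct
from pathlib import Path

# Placeholder path - point this at the LoRA file that fails to load.
path = Path("/home/zvi/ComfyUI/models/loras/pytorch_lora_weights.safetensors")

with path.open("rb") as f:
    # A .safetensors file begins with an 8-byte little-endian integer giving
    # the length of the JSON header that follows. A corrupt or incomplete
    # download usually decodes to an absurdly large number here, which is
    # what safetensors reports as "HeaderTooLarge".
    (header_len,) = struct.unpack("<Q", f.read(8))
    file_size = path.stat().st_size
    print(f"declared header length: {header_len} bytes, file size: {file_size} bytes")
    if header_len < file_size:
        header = json.loads(f.read(header_len))
        print("entries in header (tensor names plus optional metadata):", len(header))
    else:
        print("header length exceeds file size - the file is almost certainly corrupt")
```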

From the stack trace we can see it happens when trying to load a Lora. Assuming you don't have any Lora configured in your style, it can only be the LCM Lora. Could you check the sha256? Hugging Face provides it here (for SD1.5): https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/blob/main/pytorch_lora_weights.safetensors. On Ubuntu you can run sha256sum on the file and compare; it should match exactly.
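If sha256sum isn't convenient, the same check works with a few lines of Python (the path and expected value below are placeholders; paste the real checksum from the Hugging Face file page):

```python
import hashlib

# Placeholder values - substitute the file you downloaded and the checksum
# shown on the Hugging Face file page.
path = "/home/zvi/ComfyUI/models/loras/pytorch_lora_weights.safetensors"
expected = "paste-the-sha256-from-the-hugging-face-file-page-here"

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    # Read in 1 MiB chunks so large model files don't have to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print(digest)
print("match" if digest == expected.lower() else "MISMATCH - re-download the file")
```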

If it's corrupt, try re-downloading. If not... I don't have an idea at the moment, but we'll see.

makeitrad commented 9 months ago

So the models I've downloaded a few times were good to go. What appears to be the culprit is that I had other LCM models in the Lora folder as well, plus an old LCM custom node. I removed both from the equation and now my daughter is in love. Hopefully I get a chance to play soon :)

One way to avoid this may be to have only one LCM folder location that the plugin reads from? Just a thought though, I'm sure you'll have better ideas than me.

Thank you for the help! (attached image: IMG_9002)

Acly commented 9 months ago

Love it :)

Detecting which Lora file to use turns out to be quite tricky. The original model has a very generic filename, and I thought I'd allow a bit of freedom in naming. But it looks like it picked up the wrong file in your case (similar name?). Unfortunately the file path is the only way to identify them, but I can be more strict about it to avoid mix-ups.

makeitrad commented 9 months ago

My other LCM Loras were in the same lora folder, just inside a subfolder called LCM instead of the root. It still dug in there and found the bad ones...

What would make the most sense to me would be to have a designated directory for Krita LCMs, something like this: ComfyUI/models/loras/krita/yourlcmmodelhere.safetensors.

We'll probably have a lot more of them soon and it will get even tougher then...

Thanks again for your help on a Saturday! This is a great toolset you have here and I love that it's running locally!

Acly commented 9 months ago

Latest version is now more strict. I took your suggestion and made it prioritize models that are found in a "krita" folder if there are ambiguities. It's optional so people aren't forced to duplicate models.
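To illustrate the idea (just a sketch of the lookup behaviour, not the plugin's actual code): when the same filename exists in several places under the loras folder, a copy in a krita subfolder wins.

```python
from pathlib import Path

def resolve_lora(lora_dir, filename):
    """Among all files under lora_dir whose name matches exactly, prefer one
    that lives inside a 'krita' folder, so stray copies elsewhere (e.g. an
    LCM subfolder with older variants) don't get picked up by accident."""
    candidates = [p for p in Path(lora_dir).rglob(filename) if p.is_file()]
    if not candidates:
        return None
    in_krita = [p for p in candidates if "krita" in (part.lower() for part in p.parts)]
    return (in_krita or candidates)[0]

# Hypothetical usage:
# resolve_lora("/home/zvi/ComfyUI/models/loras", "pytorch_lora_weights.safetensors")
```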

makeitrad commented 9 months ago

Sounds awesome, I'll give the update a try today! You also fixed the text prompt disappearing, noticed that one too! Excited to get some real time with this today. Happy Sunday and thanks 🤗