TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License
7.49k stars 1.31k forks

Reporting errors in the last steps to launch WebUI #626

Open Moffittk opened 1 year ago

Moffittk commented 1 year ago

When running the Colab for Stable Diffusion with the Premium GPU class and High-RAM enabled, I get the following errors (screenshot attachment FEA8DECD-0A01-4D9C-98CF-6E43DF265DAB), and the launch never finishes and provides the link to the local web UI.

I have a custom DreamBooth model, created with Fast DreamBooth, in the models folder, and the notebook attempts to load it. I am not sure whether that is causing the error.

Moffittk commented 1 year ago

I deleted my custom DB model that was loading and I got the same errors.

Moffittk commented 1 year ago

All size mismatches.

Moffittk commented 1 year ago

File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1605, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion: size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). 
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). 
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]). size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). 
size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]). size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]). size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]). size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]). 
size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).

TheLastBen commented 1 year ago

You need to set the model version to V2 in the "download model" cell, then rerun the requirements cell.
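For context: the repeating 768-vs-1024 pairs in the size-mismatch dump above are the cross-attention context width. SD v1.x checkpoints condition on CLIP text embeddings (768-wide) while SD v2.x conditions on OpenCLIP (1024-wide), so loading a checkpoint against a config from the other family fails on exactly these keys. A minimal sketch for checking which family a .ckpt belongs to (the helper name and hard-coded key are illustrative, not from the notebook):

import torch

def guess_sd_version(ckpt_path: str) -> str:
    # Inspect one cross-attention key whose second dimension is the
    # text-encoder width: 768 for SD v1.x, 1024 for SD v2.x.
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # .ckpt files usually nest weights under "state_dict"
    key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
    width = sd[key].shape[1]
    return {768: "SD v1.x (CLIP)", 1024: "SD v2.x (OpenCLIP)"}.get(width, f"unknown (width {width})")

print(guess_sd_version("/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.ckpt"))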

Moffittk commented 1 year ago

Progress: I deleted all models except model.ckpt in the models folder and got the UI link (not Gradio). I clicked the link and the UI loaded, but it would not render any pictures; it reported an OpenCLIP error about values not being passed.

Moffittk commented 1 year ago

I redownloaded the model and reran the requirements cell. Same issue: once I get the local tunnel link and click it, I cannot generate any images. It reports an OpenCLIP error.

Moffittk commented 1 year ago

I am running Chrome on an M1 Mac, BTW.

TheLastBen commented 1 year ago

I'll check it out

Moffittk commented 1 year ago

Hello, I deleted all models in the webui models directory to make sure I was using V2. I got this on the last step:

Loading weights [8015a14c] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.ckpt
Applying cross attention optimization (Doggettx).
Model loaded.
Checkpoint model.ckpt [a2a802b2] not found; loading fallback model.ckpt [8015a14c]
Error loading emedding KM_Alpha.pt:
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 133, in load_textual_inversion_embeddings
    process_file(fullfn, fn)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 124, in process_file
    self.register_embedding(embedding, shared.sd_model)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 67, in register_embedding
    ids = model.cond_stage_model.tokenizer([embedding.name], add_special_tokens=False)['input_ids'][0]
TypeError: 'SimpleTokenizer' object is not callable

Error loading emedding KM_Alpha2.pt:
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 133, in load_textual_inversion_embeddings
    process_file(fullfn, fn)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 124, in process_file
    self.register_embedding(embedding, shared.sd_model)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 67, in register_embedding
    ids = model.cond_stage_model.tokenizer([embedding.name], add_special_tokens=False)['input_ids'][0]
TypeError: 'SimpleTokenizer' object is not callable
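A side note on those embeddings: textual-inversion vectors are tied to the text encoder they were trained with, so a v1-era embedding (768-wide vectors) cannot register against a v2 (1024-wide) model even when the tokenizer call succeeds. A rough way to check what a .pt embedding expects, assuming the usual webui "string_to_param" layout (other embedding formats differ):

import torch

emb = torch.load("KM_Alpha.pt", map_location="cpu")
# webui-style embeddings keep their tensors under "string_to_param";
# the last dimension is the text-encoder width the embedding was trained for.
vec = next(iter(emb["string_to_param"].values()))
print(vec.shape[-1])  # 768 -> v1-era embedding, 1024 -> v2-era embedding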

Loaded a total of 2 textual inversion embeddings.
Embeddings: KM_Alpha, KM_Alpha2
Running on local URL: https://clean-pets-kneel-34-125-29-84.loca.lt:443
Connected
Error completing request
Arguments: ('TOm Hardy', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 0, 0, 0, False, False, False, '', 1, '', 0, '', True, True, False) {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 57, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 520, in process_images_inner
    uc = prompt_parser.get_learned_conditioning(shared.sd_model, negative_prompts, p.steps)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/prompt_parser.py", line 138, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack.py", line 307, in forward
    batch_multipliers, remade_batch_tokens, used_custom_terms, hijack_comments, hijack_fixes, token_count = self.process_text_old(text)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack.py", line 238, in process_text_old
    id_start = self.wrapped.tokenizer.bos_token_id
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1208, in __getattr__
    type(self).__name__, name))
AttributeError: 'FrozenOpenCLIPEmbedder' object has no attribute 'tokenizer'
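Both tracebacks come from the webui's v1-era code paths assuming a Hugging Face CLIPTokenizer, which is callable and exposes special-token ids; the OpenCLIP SimpleTokenizer behind a v2 model's FrozenOpenCLIPEmbedder does neither, at least in the open_clip version the traceback shows. A small illustration of the API difference (not code from the repo):

from transformers import CLIPTokenizer
import open_clip

# SD v1.x path: a callable tokenizer with token-id attributes.
hf_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
ids = hf_tok(["a photo"], add_special_tokens=False)["input_ids"]  # works
bos = hf_tok.bos_token_id                                         # works

# SD v2.x path: tokenization goes through the open_clip.tokenize() function;
# the SimpleTokenizer instance itself is not callable in this version,
# matching "TypeError: 'SimpleTokenizer' object is not callable" above.
oc_tok = open_clip.SimpleTokenizer()
# oc_tok(["a photo"])  # -> TypeError in the open_clip version shown above
tokens = open_clip.tokenize(["a photo"])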

Moffittk commented 1 year ago

I am not sure about the V2 checkpoint hash value, but this seems to be the V2 model, unless the path is wrong on my end.
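The bracketed values like [8015a14c] and [a2a802b2] in the log above are the webui's old short checkpoint hashes, so the "not found; loading fallback" line just means the saved config referenced a checkpoint whose hash differs from the model.ckpt on disk. To verify which file you actually have, this is the short-hash computation as the webui of that era implemented it, reproduced from memory (treat as a sketch):

import hashlib

def model_hash(filename: str) -> str:
    # sha256 over a 64 KiB slice starting at offset 0x100000,
    # truncated to the first 8 hex characters.
    with open(filename, "rb") as f:
        m = hashlib.sha256()
        f.seek(0x100000)
        m.update(f.read(0x10000))
        return m.hexdigest()[:8]

print(model_hash("/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.ckpt"))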