lllyasviel / sd-forge-layerdiffuse

[WIP] Layer Diffusion for WebUI (via Forge)

"Additional Prompt" gives TypeError: 'NoneType' object is not iterable #61

Open bews opened 8 months ago

bews commented 8 months ago

Filling "Foreground Additional Prompt" or "Blended Additional Prompt" always gives TypeError: 'NoneType' object is not iterable. It works only when leaving them empty. 2024-03-09_12-27-22

Edit: It looks like the problem only occurs with longer prompts; basic 2-3 word prompts (positive and negative) work. Edit 2: prompts over 150 characters don't work.

mattmin45 commented 8 months ago

I've found a workaround in the comments of this video. Not sure, but it may not be related to the number of words in your prompt.

The image dimensions you generate have to be divisible by 64.
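In code terms, "divisible by 64" just means snapping the width and height down before generating. A minimal sketch (the `snap64` helper is illustrative, not part of the extension):

```python
def snap64(value: int) -> int:
    """Round a dimension down to the nearest multiple of 64."""
    return max(64, (value // 64) * 64)

# e.g. a 1000x600 target becomes 960x576
print(snap64(1000), snap64(600))  # 960 576
```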

bews commented 8 months ago

> I've found a workaround in the comments of this video. Not sure, but it may not be related to the number of words in your prompt.
>
> The image dimensions you generate have to be divisible by 64.

This is a different problem from the image dimensions one. I was even using the demo image from the sanity check, and it works as long as the prompts (both positive and negative) are short.

yingw commented 8 months ago

> I've found a workaround in the comments of this video. Not sure, but it may not be related to the number of words in your prompt.
>
> The image dimensions you generate have to be divisible by 64.

Same here, and using dimensions divisible by 64 works. Thank you. (SD 1.5, prompt just: 1girl)

yincangshiwei commented 8 months ago

> I've found a workaround in the comments of this video. Not sure, but it may not be related to the number of words in your prompt.
>
> The image dimensions you generate have to be divisible by 64.

(screenshot attached)

wangwenqiao666 commented 7 months ago

```
Error parsing "layerdiffusion_fg_additional_prompt: "
Error parsing "layerdiffusion_bg_additional_prompt: "
Error parsing "layerdiffusion_blend_additional_prompt: "
[Layer Diffusion] LayerMethod.FG_ONLY_ATTN_SD15
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 15789.16015625
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 13125.746391296387
Moving model(s) has taken 0.18 seconds

  0%|          | 0/20 [00:00<?, ?it/s]
  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/data/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 37, in loop
    task.work()
  File "/data/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/data/stable-diffusion-webui-forge/modules/txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "/data/stable-diffusion-webui-forge/modules/processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "/data/stable-diffusion-webui-forge/modules/processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/data/stable-diffusion-webui-forge/modules/processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/data/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/data/stable-diffusion-webui-forge/modules/sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "/data/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/repositories/k-diffusion/k_diffusion/sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/modules/sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "/data/stable-diffusion-webui-forge/modules_forge/forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "/data/stable-diffusion-webui-forge/ldm_patched/modules/samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "/data/stable-diffusion-webui-forge/ldm_patched/modules/samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "/data/stable-diffusion-webui-forge/ldm_patched/modules/model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/util.py", line 194, in checkpoint
    return func(*inputs)
  File "/data/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 507, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/stable-diffusion-webui-forge/extensions/sd-forge-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 92, in forward
    framed_cond_mark = einops.rearrange(transformer_options['cond_mark'], '(b f) -> f b', f=self.frames).to(modified_hidden_states)
KeyError: 'cond_mark'
'cond_mark'
*** Error completing request
*** Arguments: ('task(shs7sfb6ljjcptu)', <gradio.routes.Request object at 0x7fdf220578d0>, '1dog,high quality,', 'nsfw,bad,', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 3168306763, False, -1, 0, 0, 0, False, False, {'ad_model': 'mediapipe_face_full', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, '(SD1.5) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/data/stable-diffusion-webui-forge/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
```

cha0sbuster commented 7 months ago

Unfortunately, `'NoneType' object is not iterable` is the least helpful error stable-diffusion-webui and its offspring can give; to find out what your specific problem is, you have to scroll further up in the logs.

Your issue is similar to the one I was having, though, so I actually can weigh in. I think it ultimately comes down to stable-diffusion-webui's prompt parsing, or an interaction with it.

When your prompt is longer than 75 tokens, webui splits it in two and generates two conditionings that it ultimately uses some combination of. When generating each image, Layer Diffusion staples that image's additional prompt onto the base prompt. When the base prompt plus the additional prompt comes out longer than 75 tokens, something goes pear-shaped: the conditionings end up with different sizes, so doing math on them no longer makes sense.
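A rough sketch of that failure mode, assuming CLIP-style 75-token chunks padded to 77 (75 tokens plus BOS/EOS); `cond_length` is illustrative, not webui's actual code:

```python
import math

def cond_length(num_tokens: int, chunk: int = 75) -> int:
    """Sequence length of the conditioning for a prompt of num_tokens tokens."""
    chunks = max(1, math.ceil(num_tokens / chunk))
    return chunks * 77  # each 75-token chunk is padded to 77 with BOS/EOS

base_tokens = 60   # base prompt fits in one chunk
extra_tokens = 30  # additional prompt stapled on pushes it over 75

print(cond_length(base_tokens))                 # 77
print(cond_length(base_tokens + extra_tokens))  # 154
# A (77, dim) and a (154, dim) conditioning no longer line up, and the
# mismatch surfaces much later as the unhelpful NoneType error.
```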

The obvious workaround for the moment is to put the additional prompt into the base prompt and make sure each works out to fewer than 75 tokens. Whether this also works when all the prompts fall within the same multiple of 75, I would have to go and find out myself. The sd-webui setting for prompt padding may also help.
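If you want to check where a prompt lands, here is a minimal token-counting sketch. It assumes the Hugging Face `transformers` package and the standard CLIP tokenizer; webui itself doesn't expose this helper:

```python
from transformers import CLIPTokenizer

# The tokenizer used by SD 1.5's text encoder
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def count_tokens(prompt: str) -> int:
    # input_ids include BOS/EOS, so subtract 2 to match the 75-token budget
    return len(tokenizer(prompt)["input_ids"]) - 2

base = "a dog sitting on the grass, high quality, detailed"
additional = "warm sunset lighting"

# Keep the combined prompt under 75 tokens to dodge the size mismatch
print(count_tokens(base + ", " + additional))
```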