pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimization, licensed under CC BY-NC-SA 4.0

ValueError #385

Closed DenSckriva closed 2 months ago

DenSckriva commented 2 months ago

Hello!

Maybe you can help me with my error. Since updating automatic1111, I have been unable to generate an image with this tool.

"ValueError: Incompatible shapes for attention inputs: query.shape: torch.Size([2, 12288, 8, 40]) key.shape : torch.Size([1, 77, 8, 40]) value.shape: torch.Size([1, 77, 8, 40]) HINT: We don't support broadcasting, please use expand yourself before calling memory_efficient_attention if you need to"

I have tried with and without xformers; same problem. :(

The complete console message:

[Tiled Diffusion] ControlNet found, support is enabled.
MultiDiffusion hooked into 'Euler a' sampler, Tile size: NonexNone, Tile count: None, Batch size: None, Tile batches: 0 (ext: RegionCtrl, ContrlNet)
  0%|          | 0/40 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(lp6tbkajeuf29yj)', <gradio.routes.Request object at 0x000002111AA88580>, 'cartoon, score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, masterpiece, best quality, 1female (toad lifeform:1.4), (detailed green moist skin), pink hair, sexiest poses', 'rt by bad-artist, lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad anatomy, bad hands, ugly, morbid, extra fingers, missing fingers, extra digits, poorly drawn hands, mutation, extra limbs, gross proportions, missing arms, mutated hands, long neck, mutilated, mutilated hands, poorly drawn face, deformed, bad anatomy, malformed limbs, missing legs, fewer digits, extra tails, extra limbs, disembodied tail,  EasyNegativeV2', [], 40, 'Euler a', 1, 1, 4.5, 768, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 
'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, True, False, False, True, 0, 0, 0.9999999999999999, 1, '(simple black background:1.1)', '', 'Background', 0.2, -1.0, True, 0, 0, 0.4500000000000002, 1, 'cartoon, score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, masterpiece, best quality, 1female (toad lifeform:1.4), (detailed green moist skin), pink hair, sexiest poses', '', 'Foreground', 0.2, -1.0, True, 0.5562499999999998, 0, 0.44375, 1, 'cartoon, score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, masterpiece, best quality, 1female (bunny lifeform:1.4), (detailed red fur), pink hair, sexiest poses', '', 'Foreground', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, 
False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.5, 4, False, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', 
False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '', 'Positive', 0, ', ', 'Generate and always save', 32) {}
    Traceback (most recent call last):
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 252, in wrapper
        return fn(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 70, in kdiff_forward
        return self.sample_one_step(x_in, org_func, repeat_func, custom_func)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 187, in sample_one_step
        x_tile_out = custom_func(x_tile, bbox_id, bbox)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 68, in custom_func
        return self.kdiff_custom_forward(x, sigma_in, cond, bbox_id, bbox, self.sampler_forward)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 252, in wrapper
        return fn(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 414, in kdiff_custom_forward
        return forward_func(
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\autograd\function.py", line 539, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 273, in _forward
        x = self.attn2(self.norm2(x), context=context) + x
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
        return _memory_efficient_attention(
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 334, in _memory_efficient_attention_forward
        inp.validate_inputs()
      File "E:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\xformers\ops\fmha\common.py", line 197, in validate_inputs
        raise ValueError(
    ValueError: Incompatible shapes for attention inputs:
      query.shape: torch.Size([2, 12288, 8, 40])
      key.shape  : torch.Size([1, 77, 8, 40])
      value.shape: torch.Size([1, 77, 8, 40])
    HINT: We don't support broadcasting, please use `expand` yourself before calling `memory_efficient_attention` if you need to
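The hint at the bottom points at the root cause: xformers refuses to broadcast, so every dimension except the sequence length must match between query and key/value. Below is a minimal pure-Python sketch of that shape check, using the exact shapes from the traceback; `validate_attention_shapes` is a hypothetical stand-in for xformers' internal `validate_inputs`, not the real function.

```python
# Hypothetical stand-in for the shape check xformers performs before
# memory_efficient_attention. Shapes are [batch, seq_len, num_heads, head_dim];
# xformers does not broadcast, so batch, heads, and head_dim must match exactly.
def validate_attention_shapes(q_shape, k_shape, v_shape):
    if k_shape != v_shape:
        raise ValueError("key and value shapes must match")
    batch_q, _, heads_q, dim_q = q_shape
    batch_k, _, heads_k, dim_k = k_shape
    if (batch_q, heads_q, dim_q) != (batch_k, heads_k, dim_k):
        raise ValueError(
            f"Incompatible shapes for attention inputs: "
            f"query.shape: {q_shape} key.shape: {k_shape}"
        )

# The failing call from the traceback: the cond+uncond query is batched
# (batch 2), but key/value only carry a single batch entry.
q = (2, 12288, 8, 40)
k = v = (1, 77, 8, 40)
try:
    validate_attention_shapes(q, k, v)
except ValueError as e:
    print("rejected:", e)

# The hint's fix: expand key/value along the batch dimension first
# (in torch this would be k = k.expand(q.shape[0], -1, -1, -1)).
k_expanded = (q[0],) + k[1:]
validate_attention_shapes(q, k_expanded, k_expanded)  # passes now
```

This only illustrates why the call is rejected; the actual mismatch is produced upstream by how the WebUI batches conditioning, as the follow-up comments narrow down.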

Thanks for your help

DenSckriva commented 2 months ago

A little update. With the prompt "score_9, score_8_up, score_7_up, score_6_up, masterpiece" and the negative prompt "bad-artist, boring_e621, lowres, bad anatomy, text, error, cropped, (low quality, worst quality:1.4), normal quality, jpeg artifacts, signature, watermark, username, blurry, monochrome, human,"

the extension works perfectly.

With the same prompt but in negative : "rt by bad-artist, lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad anatomy, bad hands, ugly, morbid, extra fingers, missing fingers, extra digits, poorly drawn hands, mutation, extra limbs, gross proportions, missing arms, mutated hands, long neck, mutilated, mutilated hands, poorly drawn face, deformed, bad anatomy, malformed limbs, missing legs, fewer digits, extra tails, extra limbs, disembodied tail, EasyNegativeV2"

It does not work: ValueError: Incompatible shapes for attention inputs, as above.

A quick test shows a 75-token limit on the prompt (and negative prompt): at 75 tokens or fewer it works; above 75 it fails. I don't know why at the moment.
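The 75-token boundary matches how CLIP-based WebUIs chunk prompts: each chunk holds 75 content tokens plus a start and end token (77 total), and a longer prompt spills into a second chunk, doubling the conditioning length. A hedged pure-Python sketch of that arithmetic (the chunking behaviour here is an assumption inferred from the observed boundary, not taken from the extension's code):

```python
import math

# 75 content tokens per CLIP chunk; each chunk is wrapped
# with BOS and EOS tokens, giving 77 positions per chunk.
CHUNK = 75

def cond_length(num_tokens: int) -> int:
    """Sequence length of the text conditioning for a prompt of num_tokens."""
    chunks = max(1, math.ceil(num_tokens / CHUNK))
    return chunks * (CHUNK + 2)

print(cond_length(40))  # short prompt: one chunk -> 77
print(cond_length(90))  # long prompt: two chunks -> 154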

yamosin commented 2 months ago

https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/issues/320#issuecomment-1866021623

I had the same issue as you. Following that comment, I disabled "Pad prompt/negative prompt to be same length" and now it works. Not sure if this works the same for you.

DavideAlidosi commented 6 days ago

I've found a definitive solution to this issue: open config.json and set "batch_cond_uncond" to true.
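For reference, the relevant fragment of the WebUI's config.json (typically in the WebUI's installation directory; any other keys in the file stay as they are):

```json
{
    "batch_cond_uncond": true
}
```

With this enabled, conditional and unconditional passes are batched together, which avoids the batch-size mismatch between query and key/value shown in the error.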