pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimization, licensed under CC BY-NC-SA 4.0

Tiled Diffusion + ControlNet Tile error - RecursionError: maximum recursion depth exceeded while calling a Python object #342

Open · Vigilence opened this issue 6 months ago

Vigilence commented 6 months ago

I receive the following error when using img2img combined with Tiled Diffusion and ControlNet Tile. I noticed that if I don't select an upscaler in the Tiled Diffusion section, I can proceed with enlarging the image; if one is enabled, I receive RecursionError: maximum recursion depth exceeded while calling a Python object. The upscaler being used is 4x-UltraSharp, but it happens with other upscalers as well.

The error leaves automatic1111 unresponsive, requiring me to manually close the cmd window and relaunch. I also want to mention that I disabled all extensions and restarted automatic1111, and the issue persists, so it's not an extension conflict causing this error.
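As a quick diagnostic (not a fix): CPython's default recursion limit is 1000, so if the nesting were deep but finite, raising the limit before launch would let the encode finish. The repeated frames in the traceback below suggest the recursion is actually unbounded, in which case this only postpones the error. A minimal sketch, assuming you can add it somewhere that runs before webui starts (e.g., the top of webui.py):

```python
import sys

# Diagnostic only: with genuinely unbounded recursion (a wrapper that
# re-enters itself), a higher limit just delays the RecursionError;
# with deep-but-finite nesting, the operation would complete.
print(sys.getrecursionlimit())  # CPython default: 1000
sys.setrecursionlimit(10_000)
```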

Here is what I am using:

- GPU: 4090
- Automatic1111 version: v1.7.0-329-g85bf2eb4
- python: 3.10.6
- torch: 2.1.2+cu121
- xformers: 0.0.23.post1
- gradio: 3.41.2
- checkpoint: 463d6a9fe8

[Screenshots of the settings attached: Screenshot 2024-01-10 095412, 095430, 095457, 095508]

2024-01-10 09:49:44,186 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-10 09:49:44,187 - ControlNet - INFO - Loading model from cache: control_v11f1e_sd15_tile [a371b31b]
2024-01-10 09:49:44,299 - ControlNet - INFO - Loading preprocessor: tile_resample
2024-01-10 09:49:44,299 - ControlNet - INFO - preprocessor resolution = -1
2024-01-10 09:49:44,340 - ControlNet - INFO - ControlNet Hooked - Time = 0.15726709365844727
[Tiled VAE]: the input size is tiny and unnecessary to tile.
*** Error completing request
*** Arguments: ('task(re74g1eu1x6ip51)', 0, '(OIL PAINTING),(IMPRESSIONISM), masterpiece, <lora:Oil painting(oil brush stroke)v1-Bichu:1.4>, bichu, ocean, sand, daytime, waves, ultra quality, intricate details, 12k, 8k,', 'FastNegativeV2,(bad-artist:1.0), (loli:1.2), (worst quality, low quality:1.4), (bad_prompt_version2:0.8), bad-hands-5,lowres, bad anatomy, bad hands, ((text)), (watermark), error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, ((username)), blurry, (extra limbs), bad-artist-anime, badhandv4, EasyNegative, ng_deepnegative_v1_75t, verybadimagenegative_v1.3, BadDream,(three hands:1.1),(three legs:1.1),(more than two hands:1.4),(more than two legs,:1.2),signature, rocks, boats, people, man, boy, girl, woman, rock, pebble, hill, boat, ship, rowboat, person, men, person, kayak', [], <PIL.Image.Image image mode=RGBA size=2560x1696 at 0x22F4A399A80>, None, None, None, None, None, None, 150, 'DPM++ 2M SDE Exponential', 4, 0, 1, 1, 1, 10, 1.5, 0.15, 1, 1696, 2560, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000022F5EBC8FA0>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 8, '4x-UltraSharp', 2, True, 10, 1, 0.15, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 3072, 192, True, True, True, False, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', UiControlNetUnit(enabled=True, module='tile_resample', model='control_v11f1e_sd15_tile [a371b31b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=True, processor_res=-1, threshold_a=1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), '', '', '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p 
style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "I:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "I:\stable-diffusion-webui\modules\img2img.py", line 235, in img2img
        processed = process_images(p)
      File "I:\stable-diffusion-webui\modules\processing.py", line 782, in process_images
        res = process_images_inner(p)
      File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "I:\stable-diffusion-webui\modules\processing.py", line 852, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "I:\stable-diffusion-webui\modules\processing.py", line 1639, in init
        self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
      File "I:\stable-diffusion-webui\modules\sd_samplers_common.py", line 110, in images_tensor_to_samples
        x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
      File "I:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "I:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
        return self.first_stage_model.encode(x)
      File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
        h = self.encoder(x)
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilevae.py", line 379, in __call__
        return self.net.original_forward(x)
      File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 523, in forward
        hs = [self.conv_in(x)]
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\stable-diffusion-webui\modules\devices.py", line 147, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      File "I:\stable-diffusion-webui\modules\devices.py", line 147, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      File "I:\stable-diffusion-webui\modules\devices.py", line 147, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      [Previous line repeated 962 more times]
      File "I:\stable-diffusion-webui\modules\devices.py", line 144, in forward_wrapper
        org_dtype = torch_utils.get_param(self).dtype
      File "I:\stable-diffusion-webui\modules\torch_utils.py", line 14, in get_param
        for param in model.parameters():
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2192, in parameters
        for name, param in self.named_parameters(recurse=recurse):
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2226, in named_parameters
        yield from gen
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2166, in _named_members
        memo.add(v)
      File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\_tensor.py", line 1007, in __hash__
        return id(self)
    RecursionError: maximum recursion depth exceeded while calling a Python object
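
For what it's worth, the telling part is the frame at modules/devices.py line 147 repeated 962 more times: forward_wrapper calls self.org_forward, and self.org_forward apparently resolves back to forward_wrapper itself, so the call never reaches the real forward. That is the classic symptom of a save-and-replace forward hook being applied a second time before the first one is undone. A minimal, purely illustrative sketch of that failure mode (the patch/forward_wrapper helpers below are made up for the example, not the actual webui or extension code):

```python
import torch
import torch.nn as nn

# Illustrative reproduction of the pattern suggested by the traceback:
# a hook saves the *current* forward into org_forward and substitutes a
# wrapper that calls it back.

def forward_wrapper(self, *args, **kwargs):
    # org_forward is looked up at call time; after a double patch it is
    # forward_wrapper itself, so every call re-enters this function.
    return self.org_forward(*args, **kwargs)

def patch(module_type):
    module_type.org_forward = module_type.forward
    module_type.forward = forward_wrapper

patch(nn.Conv2d)
patch(nn.Conv2d)  # applied again before the first patch was removed

conv = nn.Conv2d(3, 8, 3)
try:
    conv(torch.randn(1, 3, 16, 16))
except RecursionError as err:
    print(err)  # maximum recursion depth exceeded while calling a Python object
```

If something like this is happening when Tiled VAE hands the pre-upscaled image to the VAE encoder, making the hook idempotent (skip patching when forward is already the wrapper) or reference-counting it would break the loop; raising the recursion limit would not, since the recursion is unbounded.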

---
n0kovo commented 5 months ago

+1