pkuliyi2015 / multidiffusion-upscaler-for-automatic1111

Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0

Tiled VAE not working anymore since last update #195

Closed kriimakt closed 1 year ago

kriimakt commented 1 year ago

Getting this message when Tiled VAE is enabled:

```
[Tiled Diffusion] upscaling image with 4x-UltraSharp...
[Tiled Diffusion] ControlNet found, support is enabled.
MixtureOfDiffusers Sampling: 0%| | 0/105 [00:00<?, ?it/s]
Mixture of Diffusers hooked into 'DPM++ 2M Karras' sampler, Tile size: 128x128, Tile batches: 15, Batch size: 1. (ext: ContrlNet)
[Tiled VAE]: input_size: torch.Size([1, 3, 2160, 3840]), tile_size: 1024, padding: 32
[Tiled VAE]: split to 3x4 = 12 tiles. Optimal tile size 960x704, original tile size 1024x1024
Error completing request
Encoder Task Queue: 82%|████████████████████████████▋ | 895/1092 [00:12<00:00, 488.82it/s]
Arguments: ('task(ydqz1p0ow6c69if)', 0, '(best quality, masterpiece:1.2), (full body), (dynamic pose), 3D,an image of a cyberpunk woman human robot, long white hair and cape worn and torn floating in the wind,IvoryGoldAI,HDR (High Dynamic Range),Ray Tracing,NVIDIA RTX,Super-Resolution,Unreal 5,Subsurface scattering,PBR Texturing,Post-processing,Anisotropic Filtering,Depth-of-field,Maximum clarity and sharpness,Multi-layered textures,Albedo and Specular maps,Surface shading,Accurate simulation of light-material interaction,Perfect proportions,Octane Render,Two-tone lighting,Wide aperture,Low ISO,White balance,8K RAW, ', '(worst quality:2), (low quality:2), (normal quality:2), (lowres:2), bad anatomy, fat, ugly, cartoon, comic, sketch, anime, monochrome, logo, signature, watermark, deformed, mutation, text, cheerful, happy, snow, close up, portrait, zoom in, crop in, shallow depth of field, medium depth of field, subject in center, subject close to camera, imperfect circle, imperfect bolts, imperfect screws, open helmet, half helmet, large moon, 3d, realistic, mask, bad design, low detail, low texture quality, repetition, old, rusty, 1k, 2k, advntr, bad-picture-chill-75v, bad_prompt_version2-neg, badhandsv5-neg ,By bad artist', [], <PIL.Image.Image image mode=RGBA size=1280x720 at 0x20C9FCDD960>, None, None, None, None, None, None, 20, 15, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.3, -1.0, -1.0, 0, 0, 0, False, 1, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'Mixture of Diffusers', False, True, 1024, 1024, 128, 128, 16, 1, '4x-UltraSharp', 3, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 1024, 96, True, False, False, False, <controlnet.py.UiControlNetUnit object at 0x0000020CA1467C70>, <controlnet.py.UiControlNetUnit object at 0x0000020CA1466E90>, False, '', 0.5, True, False, '', 'Lerp', False, '\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, None, False, 50, 'Will upscale the image depending on the selected target size type', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\img2img.py", line 182, in img2img
    processed = process_images(p)
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\processing.py", line 615, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\processing.py", line 1104, in init
    self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\UNSTABLE2\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "D:\UNSTABLE2\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 381, in __call__
    return self.vae_tile_forward(x)
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 268, in wrapper
    ret = fn(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 612, in vae_tile_forward
    tile = task[1](tile)
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 106, in <lambda>
    task_queue.append(('attn', lambda x, net=net: attn_forward(net, x)))
  File "D:\UNSTABLE2\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\attn.py", line 87, in xformers_attnblock_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, op=get_xformers_flash_attention_op(q, k, v))
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 196, in memory_efficient_attention
    return _memory_efficient_attention(
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 294, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 307, in _memory_efficient_attention_forward
    inp.validate_inputs()
  File "D:\UNSTABLE2\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\common.py", line 73, in validate_inputs
    raise ValueError(
ValueError: Query/Key/Value should all have the same dtype
  query.dtype: torch.float32
  key.dtype  : torch.float32
  value.dtype: torch.float16

[Tiled VAE]: Executing Encoder Task Queue: 85%|██████████████████████████████▍ | 925/1092 [00:12<00:02, 76.47it/s]
```
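For context: xformers' `memory_efficient_attention` validates that query, key, and value share one dtype before dispatching, and the Tiled VAE encode path is handing it an fp16 value tensor against fp32 query/key. A simplified pure-Python sketch of that check (the real validation lives in `xformers.ops.fmha.common` and also checks shapes and devices):

```python
# Simplified stand-in for the dtype validation that raises in the
# traceback above; the real check inspects torch.Tensor dtypes.
def validate_qkv_dtypes(query_dtype: str, key_dtype: str, value_dtype: str) -> None:
    """Raise ValueError unless query/key/value dtypes all match."""
    if not (query_dtype == key_dtype == value_dtype):
        raise ValueError(
            "Query/Key/Value should all have the same dtype\n"
            f"  query.dtype: {query_dtype}\n"
            f"  key.dtype  : {key_dtype}\n"
            f"  value.dtype: {value_dtype}"
        )

# The mix reported in the traceback fails:
try:
    validate_qkv_dtypes("torch.float32", "torch.float32", "torch.float16")
except ValueError as e:
    print(e)
```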

kriimakt commented 1 year ago

Addendum: it works if I add `--no-half`.

kriimakt commented 1 year ago

Still no news? Using `--no-half` really slows down generation...
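For anyone hitting this before a patch lands: instead of forcing the whole model to fp32 with `--no-half`, the narrower fix is to cast key/value to the query's dtype right before the xformers call. This is only a hedged sketch of that idea, not the extension's actual code; `FakeTensor` is a hypothetical stand-in so the example runs without torch:

```python
class FakeTensor:
    """Hypothetical stand-in for torch.Tensor (dtype attribute + to())."""
    def __init__(self, dtype):
        self.dtype = dtype

    def to(self, dtype):
        return FakeTensor(dtype)


def unify_qkv_dtypes(q, k, v):
    # xformers' memory_efficient_attention requires q/k/v to share a dtype,
    # so cast key/value to the query's dtype before the attention call.
    if k.dtype != q.dtype:
        k = k.to(q.dtype)
    if v.dtype != q.dtype:
        v = v.to(q.dtype)
    return q, k, v
```

With the mix from the traceback (fp32/fp32/fp16), all three come out fp32, so only the mismatched tensor pays a cast rather than the whole pipeline running in full precision.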

felix-ky commented 1 year ago

same problem