kijai / ComfyUI-SUPIR

SUPIR upscaling wrapper for ComfyUI

'MemoryEfficientAttnBlock' object has no attribute 'group_norm' #4

Closed · tigerminx closed this issue 5 months ago

tigerminx commented 5 months ago

Any suggestions on the error below?

[Tiled VAE]: Executing Encoder Task Queue: 84%|███████████████████████████▋ | 1526/1820 [00:07<00:00, 1131.84it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\nodes.py", line 126, in process
    samples = self.model.batchify_sample(resized_image[i].unsqueeze(0), captions_list, num_steps=steps, restoration_scale= restoration_scale, s_churn=s_churn,
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 118, in batchify_sample
    _z = self.encode_first_stage_with_denoise(x, use_sample=False)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 56, in encode_first_stage_with_denoise
    h = self.first_stage_model.denoise_encoder(x)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 703, in __call__
    return self.vae_tile_forward(x)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 586, in wrapper
    ret = fn(*args, **kwargs)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 938, in vae_tile_forward
    tile = task[1](tile)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 374, in <lambda>
    task_queue.append(('attn', lambda x, net=net: attn_forward_new_pt2_0(net, x)))
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 197, in attn_forward_new_pt2_0
    if self.group_norm is not None:
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'MemoryEfficientAttnBlock' object has no attribute 'group_norm'

Prompt executed in 8.43 seconds [Tiled VAE]: Executing Encoder Task Queue: 85%|████████████████████████████▊ | 1541/1820 [00:07<00:01, 202.36it/s]
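
For reference, the frame at tilevae.py line 197 shows attn_forward_new_pt2_0 checking `self.group_norm`, a diffusers-style attribute that sgm's `MemoryEfficientAttnBlock` never defines, which is what `nn.Module.__getattr__` then raises on. A minimal sketch of the kind of defensive lookup that would sidestep the AttributeError (the helper name and the `norm` fallback are assumptions, not the repo's actual fix):

```python
import torch

def find_group_norm(net: torch.nn.Module):
    """Hypothetical guard for attn_forward_new_pt2_0-style code.

    `if self.group_norm is not None:` raises AttributeError on modules
    that never define group_norm; getattr with a default returns None
    instead, and sgm-style blocks can fall back to their `norm` module.
    """
    gn = getattr(net, "group_norm", None)  # None instead of AttributeError
    if gn is None:
        gn = getattr(net, "norm", None)  # assumed sgm attribute name
    return gn
```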

kijai commented 5 months ago

I have not run into this one yet. Does it work without tiled_vae, or if you change the tile size?

padphone commented 5 months ago

> I have not run into this one yet. Does it work without tiled_vae, or if you change the tile size?

I also have the same problem, whether tile_vae is set to false or true, and regardless of the tile size.

tigerminx commented 5 months ago

It fails with tile_vae set to false as well. I have not tried changing the tile size. The default seemed low; if anything I would raise the tile size, as it's using SDXL resolutions, correct?

kijai commented 5 months ago

> It fails with tile_vae set to false as well. I have not tried changing the tile size. The default seemed low; if anything I would raise the tile size, as it's using SDXL resolutions, correct?

How big tiles you can use depends on how much VRAM you have. Just as a sanity check: are you able to run a 512x512 image through with scale_by set to 1.0?

tigerminx commented 5 months ago

I have 32GB of system RAM and a 3090 with 24GB of VRAM. Let me try as you suggest above with the 512x512 image at a scale of x1. The other thing I should have mentioned is that I am still using PyTorch cross attention. Is xformers mandatory for this to work?

tigerminx commented 5 months ago

I tried with a 512x512 image at a scale of x1:

[Tiled VAE]: the input size is tiny and unnecessary to tile.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\SD\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\nodes.py", line 165, in process
    samples = self.model.batchify_sample(resized_image[i].unsqueeze(0), captions_list, num_steps=steps, restoration_scale= restoration_scale, s_churn=s_churn,
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 118, in batchify_sample
    _z = self.encode_first_stage_with_denoise(x, use_sample=False)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 56, in encode_first_stage_with_denoise
    h = self.first_stage_model.denoise_encoder(x)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 701, in __call__
    return self.net.original_forward(x)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\sgm\modules\diffusionmodules\model.py", line 589, in forward
    h = self.mid.attn_1(h)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\SD\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\sgm\modules\diffusionmodules\model.py", line 260, in forward
    h_ = self.attention(h_)
  File "D:\SD\ComfyUI\ComfyUI\custom_nodes\ComfyUI-SUPIR\sgm\modules\diffusionmodules\model.py", line 246, in attention
    out = xformers.ops.memory_efficient_attention(
NameError: name 'xformers' is not defined

Prompt executed in 176.91 seconds
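
For reference, with tiling skipped the encoder runs sgm's original forward, where MemoryEfficientAttnBlock.attention calls xformers.ops.memory_efficient_attention unconditionally, hence the NameError when xformers isn't installed. A sketch of a guarded fallback onto PyTorch 2.x scaled_dot_product_attention (illustrative only, not the repo's code):

```python
import torch
import torch.nn.functional as F

try:
    import xformers.ops  # optional dependency
    XFORMERS_AVAILABLE = True
except ImportError:
    XFORMERS_AVAILABLE = False

def attention_fallback(q, k, v):
    # q, k, v: (batch, seq_len, heads, head_dim), the layout
    # xformers.ops.memory_efficient_attention expects.
    if XFORMERS_AVAILABLE:
        return xformers.ops.memory_efficient_attention(q, k, v)
    # PyTorch 2.x SDPA wants (batch, heads, seq_len, head_dim),
    # so transpose in and back out.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2)
```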

kijai commented 5 months ago

Seems like it does require xformers then, which is annoying. If you don't have experience installing it, you could try:

python_embeded\python.exe -m pip install -U xformers --no-dependencies

Doing it without "--no-dependencies" tends to also reinstall your torch, often (on Windows) without GPU support, which is why I don't really want to add it to the requirements.
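
To sanity check afterwards that torch kept its CUDA build and that xformers imports cleanly, a one-liner like this should do it (same portable layout assumed):

python_embeded\python.exe -c "import torch, xformers; print(torch.__version__, torch.cuda.is_available(), xformers.__version__)"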

tigerminx commented 5 months ago

Thanks for the prompt responses. I have some fairly basic knowledge of installing and uninstalling torch. I am on Windows, and what you mentioned is exactly what's happening, I think.

I tried installing xformers earlier; however, it seems to want to downgrade my current torch version (2.2.1+cu121), and then I get an error when trying to launch ComfyUI: "torch not compiled with CUDA". I don't know how to install the required version with CUDA support.
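
For what it's worth, when pip swaps in a CPU-only torch like that, reinstalling the CUDA 12.1 build from PyTorch's own wheel index normally restores it; something along these lines (adjust versions to what xformers expects):

python_embeded\python.exe -m pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu121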

tigerminx commented 5 months ago

So it definitely wants xformers. I had an old build of Comfy from before they dropped xformers, and it's working now. Thanks for pointing me in the right direction. Is there any way to limit the excessive system RAM usage, and the paging to disk once system RAM is depleted?

kijai commented 5 months ago

> So it definitely wants xformers. I had an old build of Comfy from before they dropped xformers, and it's working now. Thanks for pointing me in the right direction. Is there any way to limit the excessive system RAM usage, and the paging to disk once system RAM is depleted?

I don't know about the RAM requirement as of yet; this isn't very optimized overall. The original repo states 60GB as the RAM "requirement", which obviously can't be correct, but it does tell you it's going to be pretty high anyway.