kijai / ComfyUI-SUPIR

SUPIR upscaling wrapper for ComfyUI

Error occurred when executing SUPIR_Upscale: Input type (struct c10::Half) and bias type (float) should be the same #29

Closed. administlx closed this issue 3 months ago.

administlx commented 4 months ago

Error occurred when executing SUPIR_Upscale:

Input type (struct c10::Half) and bias type (float) should be the same

I ran into this problem. How can I solve it?

kijai commented 4 months ago

That image is not loading for me. Which GPU do you have?

lalalabush commented 4 months ago

That image is not loading for me. Which GPU do you have?

I tested on T4 and V100: with fp32 I get the error, and bf16 is not supported. On an A100, bf16 works great.

kijai commented 4 months ago

Ok, found an error, using fp32 VAE should work again.

administlx commented 4 months ago

That image is not loading for me. Which GPU do you have?

2080 Ti, 22 GB. Excuse me, where should I change it to the FP32 VAE? I'm a newcomer and don't know much about this. Thank you for your answer.

kijai commented 4 months ago

That image is not loading for me. Which GPU do you have?

2080 Ti, 22 GB. Excuse me, where should I change it to the FP32 VAE? I'm a newcomer and don't know much about this. Thank you for your answer.

It's the "encoder_dtype" setting. On "auto" the right dtype should also be detected now with the latest version; it probably wasn't in your case, as you got the error. Does it still error out with the latest version?
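The "auto" behavior described above can be pictured with a small sketch. This is a hypothetical illustration, not the node's actual code: pre-Ampere GPUs such as the T4, V100, and 2080 Ti lack bfloat16 support (in PyTorch the check would be `torch.cuda.is_bf16_supported()`), so an "auto" setting has to fall back to fp32 for the encoder on those cards.

```python
def pick_encoder_dtype(bf16_supported: bool) -> str:
    """Hypothetical sketch of an "auto" encoder_dtype choice.

    The real capability check would come from the GPU (e.g.
    torch.cuda.is_bf16_supported() in PyTorch); it is passed in as a
    plain flag here so the logic stands alone.
    """
    # bf16 saves VRAM without fp16's narrow range; when the card lacks
    # it, fp32 is the safe encoder dtype (fp16 triggers the mismatch
    # error from this issue on the fp32 VAE weights).
    return "bf16" if bf16_supported else "fp32"


print(pick_encoder_dtype(True))    # A100-class card -> "bf16"
print(pick_encoder_dtype(False))   # T4 / V100 / 2080 Ti -> "fp32"
```

This matches the reports earlier in the thread: bf16 "works great" on an A100 but is "not supported" on T4/V100-class hardware.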

discordinated commented 4 months ago

I get the same or similar error on MPS:

Error occurred when executing SUPIR_Upscale:

Input type (c10::Half) and bias type (float) should be the same

File "/Users/s/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/custom_nodes/ComfyUI-SUPIR/nodes.py", line 242, in process
samples = self.model.batchify_sample(imgs, caps, num_steps=steps,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/custom_nodes/ComfyUI-SUPIR/SUPIR/models/SUPIR_model.py", line 127, in batchify_sample
_z = self.encode_first_stage_with_denoise(x, use_sample=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/custom_nodes/ComfyUI-SUPIR/SUPIR/models/SUPIR_model.py", line 62, in encode_first_stage_with_denoise
h = self.first_stage_model.denoise_encoder(x)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/ComfyUI/custom_nodes/ComfyUI-SUPIR/sgm/modules/diffusionmodules/model.py", line 593, in forward
hs = [self.conv_in(x)]
^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/s/miniconda3/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
discordinated commented 4 months ago

Fixed by adding --force-fp32 to the launch command:

python main.py --listen --force-fp32

administlx commented 4 months ago

That image is not loading for me. Which GPU do you have?

2080 Ti, 22 GB. Where should I change it to the FP32 VAE? I'm a newcomer and don't know much about this. Thank you for your answer.

It's the "encoder_dtype" setting. On "auto" the right dtype should also be detected now with the latest version. Does it still error out with the latest version?

Yes, I have upgraded to the latest ComfyUI core, and I still get this error. I don't know what to do.

administlx commented 4 months ago

Fixed by adding --force-fp32 to launch

python main.py --listen --force-fp32

I don't know where specifically to apply your fix.

kijai commented 3 months ago

Is this still an issue with latest commit?

administlx commented 3 months ago

Is this still an issue with the latest commit?

After updating to the new version of the plugin, it has been solved. Thank you very much.

kijai commented 3 months ago

Glad to hear!

vkleinmp commented 2 months ago

No, it hasn't been solved. Many of us can't just use --force-fp32, because of the VRAM implications for all our other workflows. Could you please make your

kijai commented 2 months ago

No, it hasn't been solved. Many of us can't just use --force-fp32, because of the VRAM implications for all our other workflows. Could you please make your

That's why there are manual dtype selections on the nodes. Only the "auto" option should obey the ComfyUI launch arguments.
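That distinction can be sketched as follows. This is a hypothetical helper, not the node's real code: a dtype picked manually on the node wins outright, and only "auto" consults the ComfyUI launch flags (such as --force-fp32) and the GPU's capabilities, so the rest of the workflow is unaffected.

```python
def resolve_dtype(node_selection: str,
                  comfy_force_fp32: bool,
                  bf16_ok: bool) -> str:
    """Hypothetical sketch of per-node dtype resolution.

    A manual selection on the node is honored as-is; only "auto"
    obeys the ComfyUI launch arguments and the hardware check.
    """
    if node_selection != "auto":
        return node_selection          # manual choice wins outright
    if comfy_force_fp32:
        return "fp32"                  # --force-fp32 only reaches "auto"
    return "bf16" if bf16_ok else "fp32"


print(resolve_dtype("fp16", comfy_force_fp32=True, bf16_ok=False))  # "fp16"
print(resolve_dtype("auto", comfy_force_fp32=True, bf16_ok=True))   # "fp32"
```

Under this scheme a user who cannot launch with --force-fp32 globally can still pin just the SUPIR encoder to fp32 on the node, leaving every other workflow's precision untouched.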