kijai / ComfyUI-Marigold

Marigold depth estimation in ComfyUI
GNU General Public License v3.0

Error occurred when executing MarigoldDepthEstimation: "slow_conv2d_cpu" not implemented for 'Half' #12

Open lord-lethris opened 6 months ago

lord-lethris commented 6 months ago

Got the following error "out of the box"

Error occurred when executing MarigoldDepthEstimation:

"slow_conv2d_cpu" not implemented for 'Half'

File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Marigold\nodes.py", line 123, in process
depth_maps_sub_batch = self.marigold_pipeline(sub_batch, num_inference_steps=denoise_steps, show_pbar=False)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Marigold\marigold\model\marigold_pipeline.py", line 211, in forward
rgb_latent = self.encode_rgb(rgb_in)
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Marigold\marigold\model\marigold_pipeline.py", line 287, in encode_rgb
rgb_latent = self.rgb_encoder(rgb_in) # [B, 4, h, w]
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Marigold\marigold\model\rgb_encoder.py", line 30, in forward
return self.encode(rgb_in)
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Marigold\marigold\model\rgb_encoder.py", line 33, in encode
moments = self.rgb_encoder(rgb_in) # [B, 8, H/8, W/8]
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\Python\Python310\lib\site-packages\diffusers\models\autoencoders\vae.py", line 143, in forward
sample = self.conv_in(sample)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\apps\Python\Python310\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,

Nothing else in the Workflow:

[screenshot of the workflow]
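For context: "slow_conv2d_cpu" not implemented for 'Half' is PyTorch's way of saying half-precision (fp16) convolutions are being run on the CPU, where no fp16 conv kernel exists. Below is a minimal standalone sketch (plain PyTorch, not the node's code) that reproduces the same message and shows the float32 workaround:

```python
import torch
import torch.nn as nn

# fp16 weights and input on the CPU: there is no half-precision conv kernel there.
conv = nn.Conv2d(3, 4, kernel_size=3).half()
x = torch.randn(1, 3, 64, 64, dtype=torch.float16)

try:
    conv(x)
except RuntimeError as e:
    print(e)  # "slow_conv2d_cpu" not implemented for 'Half'

# Casting both to float32 (or moving both to a CUDA device) avoids the error.
out = conv.float()(x.float())
print(out.shape)  # torch.Size([1, 4, 62, 62])
```

So in practice the pipeline either needs to stay on a CUDA device to use fp16, or run in fp32 when it falls back to CPU.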

lord-lethris commented 6 months ago

Switched "fp16" off - now it seems to hang.

yossel777 commented 1 week ago

> Switched "fp16" off - now it seems to hang.

Where is the place where we need to switch fp16 off?
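(The thread doesn't confirm where the toggle sits in the node's inputs, but "switching fp16 off" in PyTorch terms just means casting the pipeline to float32 when it runs on the CPU. A hypothetical sketch with a stand-in module, not the node's actual code:)

```python
import torch
import torch.nn as nn

# Stand-in for the depth pipeline; any nn.Module behaves the same way here.
model = nn.Conv2d(3, 4, kernel_size=3)

# Hypothetical device/dtype selection: use fp16 only where CUDA conv kernels exist.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = model.to(device=device, dtype=dtype)
x = torch.randn(1, 3, 64, 64, device=device, dtype=dtype)
print(model(x).dtype)  # float16 on GPU, float32 on CPU
```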