ltdrdata / ComfyUI-Impact-Pack

Custom node pack for ComfyUI. These custom nodes help to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more.
GNU General Public License v3.0

Issue with error message: 2D or 3D (batch mode) at::Tensor expected for input #640

Open lntlky7 opened 3 months ago

lntlky7 commented 3 months ago

I am using an AMD CPU, and an error occurred during execution. Please help me check it.

Error occurred when executing FaceDetailer:

2D or 3D (batch mode) at::Tensor expected for input

```
File "D:\AITools\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AITools\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AITools\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AITools\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 553, in doit
  enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
File "D:\AITools\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 492, in enhance_face
  sam_mask = core.make_sam_mask(sam_model_opt, segs, image, sam_detection_hint, sam_dilation,
File "D:\AITools\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 690, in make_sam_mask
  detected_masks = sam_obj.predict(image, points, plabs, dilated_bbox, threshold)
File "D:\AITools\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 564, in predict
  predictor.set_image(image, "RGB")
File "D:\Program Files\Python310\lib\site-packages\segment_anything\predictor.py", line 60, in set_image
  self.set_torch_image(input_image_torch, image.shape[:2])
File "D:\Program Files\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\segment_anything\predictor.py", line 89, in set_torch_image
  self.features = self.model.image_encoder(input_image)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\segment_anything\modeling\image_encoder.py", line 112, in forward
  x = blk(x)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\segment_anything\modeling\image_encoder.py", line 174, in forward
  x = self.attn(x)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\segment_anything\modeling\image_encoder.py", line 227, in forward
  qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\Program Files\Python310\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
  return F.linear(input, self.weight, self.bias)
```
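For context: the failure happens inside `SamPredictor.set_image`, which expects an H x W x 3 uint8 image array; a dimension error deep inside the SAM image encoder often means the image handed to the predictor did not have that layout (for example, an empty or degenerate crop). The following is a minimal sketch of a pre-flight check one could run before `set_image`; the helper name `validate_sam_image` is hypothetical and not part of Impact Pack or segment_anything, and it only uses NumPy so the shape logic can be illustrated without a SAM model loaded.

```python
import numpy as np

def validate_sam_image(image: np.ndarray) -> np.ndarray:
    """Check that an image matches the H x W x 3 layout that
    segment_anything's SamPredictor.set_image expects, and coerce
    a float image in [0, 1] to uint8. Hypothetical helper for
    illustration only."""
    if image.ndim != 3 or image.shape[2] != 3:
        raise ValueError(f"expected an HxWx3 image, got shape {image.shape}")
    if image.shape[0] == 0 or image.shape[1] == 0:
        # An empty crop produces degenerate tensors downstream,
        # which can surface as dimension errors inside the encoder.
        raise ValueError("empty crop: image has zero height or width")
    if image.dtype != np.uint8:
        image = (np.clip(image, 0.0, 1.0) * 255).astype(np.uint8)
    return image
```

A check like this does not fix the root cause, but it turns an opaque encoder-level error into a message that points at the offending crop.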
ltdrdata commented 3 months ago

What is your workflow?