ZHO-ZHO-ZHO / ComfyUI-YoloWorld-EfficientSAM

Unofficial implementation of YOLO-World + EfficientSAM for ComfyUI
GNU General Public License v3.0

Error occurred when executing Yoloworld_ESAM_Zho: The following operation failed in the TorchScript interpreter. #74

Open 0002kgHg opened 1 month ago

0002kgHg commented 1 month ago

Error occurred when executing Yoloworld_ESAM_Zho:

The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 3)

File "/comfyui/execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/comfyui/execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/comfyui/execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/comfyui/custom_nodes/ComfyUI-YoloWorld-EfficientSAM/YOLO_WORLD_EfficientSAM.py", line 149, in yoloworld_esam_image
  detections.mask = inference_with_boxes(
File "/comfyui/custom_nodes/ComfyUI-YoloWorld-EfficientSAM/utils/efficient_sam.py", line 59, in inference_with_boxes
  mask = inference_with_box(image, box, model, device)
File "/comfyui/custom_nodes/ComfyUI-YoloWorld-EfficientSAM/utils/efficient_sam.py", line 28, in inference_with_box
  predicted_logits, predicted_iou = model(
File "/root/anaconda3/envs/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/ComfyUI/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
  return forward_call(*args, **kwargs)

It stopped working after the update.
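A side note on the error itself: 18446744073709551615 is exactly the bit pattern of -1 read back as an unsigned 64-bit `size_t`, so the failing TorchScript vector lookup was effectively handed the index -1 (likely from an empty detection list or a failed lookup). A quick check:

```python
# The huge number in the error is not a real index: it is -1
# wrapped around to an unsigned 64-bit size_t by two's complement.
huge = 18446744073709551615
assert huge == 2**64 - 1      # max value of an unsigned 64-bit int
assert huge == (-1) % 2**64   # i.e. -1 after unsigned wraparound
```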

Alex-Ning commented 1 month ago

Try disabling ComfyUI-Easy-Use.

xueqing0622 commented 1 month ago

ComfyUI-Easy-Use is one of the packs I can't uninstall. Is there any other way?

0002kgHg commented 1 month ago

I don't know what happened, but it's usable again.

xueqing0622 commented 1 month ago

I don't know what happened, but it's usable again.

How did you fix it?

0002kgHg commented 1 month ago

I don't know what happened, but it's usable again.

How did you fix it?

I spent a long time on it without fixing it, and in the end I gave up on this workflow.

LeoMusk commented 1 month ago

Same problem here. Is there a fix?

chaohuei2020 commented 3 weeks ago

Brother Zhou, how do I solve this problem?

Error occurred when executing Yoloworld_ESAM_Zho:

The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: invalid vector subscript

File "D:\comfyui-aik\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\comfyui-aik\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\comfyui-aik\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\comfyui-aik\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 149, in yoloworld_esam_image
  detections.mask = inference_with_boxes(
File "D:\comfyui-aik\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 59, in inference_with_boxes
  mask = inference_with_box(image, box, model, device)
File "D:\comfyui-aik\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 28, in inference_with_box
  predicted_logits, predicted_iou = model(
File "D:\comfyui-aik\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\comfyui-aik\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
  return forward_call(*args, **kwargs)

18458041181 commented 2 weeks ago

Error occurred when executing Yoloworld_ESAM_Zho:

The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 34, in forward
      image_embeddings = image_embeddings0
    else:
      image_embeddings1 = (self).get_image_embeddings(batched_images, )
      image_embeddings = image_embeddings1
    H = self.H
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 127, in get_image_embeddings
  def get_image_embeddings(self: __torch__.d2go.projects.sam.model.sam.Sam,
      batched_images: Tensor) -> List[Tensor]:
    batched_images1 = (self).preprocess(batched_images, )
    ~~~~~~~~~~~~~~~~ <--- HERE
    image_encoder = self.image_encoder
    _59 = (image_encoder).forward(batched_images1, )
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 241, in preprocess
    x0 = x
    pixel_mean = self.pixel_mean
    _94 = torch.sub(x0, pixel_mean)
          ~~~~~~~~~ <--- HERE
    pixel_std = self.pixel_std
    return torch.div(_94, pixel_std)

Traceback of TorchScript, original code (most recent call last):
File "/mnt/xarfuse/uid-462794/d7062d46-seed-7d5fdcd1-471c-4e32-b226-19878c868d15-ns-4026535078/d2go/projects/sam/model/sam.py", line 646, in forward
)
else:
image_embeddings = self.get_image_embeddings(batched_images)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

return self.predict_masks(
File "/mnt/xarfuse/uid-462794/d7062d46-seed-7d5fdcd1-471c-4e32-b226-19878c868d15-ns-4026535078/d2go/projects/sam/model/sam.py", line 592, in get_image_embeddings
The last embedding corresponds to the final layer.
"""
batched_images = self.preprocess(batched_images)
~~~~~~~~~~~~~~~ <--- HERE
return self.image_encoder(batched_images)
File "/mnt/xarfuse/uid-462794/d7062d46-seed-7d5fdcd1-471c-4e32-b226-19878c868d15-ns-4026535078/d2go/projects/sam/model/sam.py", line 672, in preprocess
mode="bilinear",
)
return (x - self.pixel_mean) / self.pixel_std
~~~~~~~~~~~~~~~~~~~ <--- HERE
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 149, in yoloworld_esam_image
detections.mask = inference_with_boxes(
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 59, in inference_with_boxes
mask = inference_with_box(image, box, model, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 28, in inference_with_box
predicted_logits, predicted_iou = model(
^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
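The RuntimeError above ("Expected all tensors to be on the same device... cuda:0 and cpu") means the scripted EfficientSAM model's weights and at least one of its input tensors live on different devices. A minimal sketch of the usual workaround, with `to_model_device` being a hypothetical helper (not part of this repo), is to move every input onto the model's own device before calling it:

```python
import torch

def to_model_device(model: torch.nn.Module, *tensors: torch.Tensor):
    """Move all input tensors to the device the model's weights are on."""
    # next(model.parameters()) yields one weight tensor; its .device
    # tells us where the whole model lives (cpu or cuda:N).
    device = next(model.parameters()).device
    return tuple(t.to(device) for t in tensors)

# Hypothetical usage, mirroring the call site in inference_with_box:
#   image_t, box_t = to_model_device(model, image_t, box_t)
#   predicted_logits, predicted_iou = model(image_t, box_t)
```

Alternatively, force the whole model onto one device up front (e.g. `model.to("cpu")`) so CPU inputs can never collide with CUDA weights.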

zhoumengchao commented 3 days ago

Error occurred when executing Yoloworld_ESAM_Zho:

The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 3)

How do I solve this problem?

FruitPigFoot commented 3 days ago

@zhoumengchao I have the exact same error. Have you found any solution?