goodBOY-05 opened 4 months ago
I'm getting this error too.
Same error too.
So. I've been getting errors for a couple of weeks, and I've finally managed to get Yoloworld to work again. The workaround sucks, but seeing as very few issues are actually responded to on here, I suggest you follow my advice if you actually want to get this working.
When I first got these errors, I realised they were CUDA-related, so naturally I switched the ESAM loader device to "CPU", but that threw up errors too. After a bit of digging, I found this pull request that fixed the issue:
https://github.com/ZHO-ZHO-ZHO/ComfyUI-YoloWorld-EfficientSAM/pull/32
That fix was never merged into this repo, so ltdrdata made his own fork that actually works.
Enough preamble. This is how you fix this error.
Delete the Yoloworld custom node folder. Yes, you heard me. We are going to reinstall from scratch. Open a command prompt in your custom nodes folder.
git clone https://github.com/ltdrdata/ComfyUI-YoloWorld-EfficientSAM
(This is the fork that works)
cd ComfyUI-YoloWorld-EfficientSAM
pip install -r requirements.txt
Now, download these files into your YoloWorld-EfficientSAM custom node directory:
https://huggingface.co/camenduru/YoloWorld-EfficientSAM/blob/main/efficient_sam_s_cpu.jit
https://huggingface.co/camenduru/YoloWorld-EfficientSAM/blob/main/efficient_sam_s_gpu.jit
Restart ComfyUI. Load your YoloWorld node as normal. Change the device from GPU to CPU, and it will work. GPU will probably still be busted, and I have no faith that it's going to be patched anytime soon, but this way you actually have something that works. Hope that helped.
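For what it's worth, those two file links are Hugging Face /blob/ page URLs; fetching them programmatically needs the /resolve/ form of the same path. A minimal Python sketch of that download step (the destination folder name is an assumption — use whatever your custom node directory is actually called):

```python
import urllib.request
from pathlib import Path

# The two .jit model files from the camenduru Hugging Face repo.
FILES = [
    "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/blob/main/efficient_sam_s_cpu.jit",
    "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/blob/main/efficient_sam_s_gpu.jit",
]

def to_download_url(page_url: str) -> str:
    """Turn a Hugging Face '/blob/' page link into a direct '/resolve/' download link."""
    return page_url.replace("/blob/", "/resolve/")

def fetch_all(dest: Path) -> None:
    """Download both model files into the custom node directory."""
    dest.mkdir(parents=True, exist_ok=True)
    for url in FILES:
        target = dest / url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(to_download_url(url), target)

# Example (path is hypothetical — adjust to your install):
# fetch_all(Path("custom_nodes/ComfyUI-YoloWorld-EfficientSAM"))
```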
THAT fixed it... though now my wdtagger v3 broke, lol. It's because of the onnxruntime change... but this did fix the issue. THANKS!
I reinstalled the node and model files following the process above, but the error still persists.
Error occurred when executing Yoloworld_ESAM_Zho:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 31, in
    _1 = torch.eq(fusion_type0, "hybrid")
    if _1:
      image_embeddings0 = (self).get_image_embeddings_with_early_fusion(batched_images, batched_points, batched_point_labels, )
      image_embeddings = image_embeddings0
    else:
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 135, in get_image_embeddings_with_early_fusion
    batched_points: Tensor,
    batched_point_labels: Tensor) -> List[Tensor]:
    batched_images2 = (self).preprocess(batched_images, )
                       ~~~~~~~~~~~~~~~~ <--- HERE
    batch_size, _60, H, W, = torch.size(batched_images2)
    max_num_queries = (torch.size(batched_points))[1]
  File "code/__torch__/d2go/projects/sam/model/sam.py", line 241, in preprocess
    x0 = x
    pixel_mean = self.pixel_mean
    _94 = torch.sub(x0, pixel_mean)
          ~~~~~~~~~ <--- HERE
    pixel_std = self.pixel_std
    return torch.div(_94, pixel_std)
Traceback of TorchScript, original code (most recent call last):
  File "/mnt/xarfuse/uid-462794/44f9c4d0-seed-bf15d98d-c01d-4b13-b2d3-09941a262007-ns-4026536022/d2go/projects/sam/model/sam.py", line 642, in
    batch_size, _, _, _ = batched_images.shape
    if self.fusion_type == "early" or self.fusion_type == "hybrid":
        image_embeddings = self.get_image_embeddings_with_early_fusion(
                           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            batched_images, batched_points, batched_point_labels
        )
  File "/mnt/xarfuse/uid-462794/44f9c4d0-seed-bf15d98d-c01d-4b13-b2d3-09941a262007-ns-4026536022/d2go/projects/sam/model/sam.py", line 608, in get_image_embeddings_with_early_fusion
    The last embedding corresponds to the final layer.
    """
    batched_images = self.preprocess(batched_images)
                     ~~~~~~~~~~~~~~~ <--- HERE
    batch_size, _, H, W = batched_images.shape
    max_num_queries = batched_points.shape[1]
  File "/mnt/xarfuse/uid-462794/44f9c4d0-seed-bf15d98d-c01d-4b13-b2d3-09941a262007-ns-4026536022/d2go/projects/sam/model/sam.py", line 672, in preprocess
    mode="bilinear",
    )
    return (x - self.pixel_mean) / self.pixel_std
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 149, in yoloworld_esam_image
    detections.mask = inference_with_boxes(
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 59, in inference_with_boxes
    mask = inference_with_box(image, box, model, device)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 28, in inference_with_box
    predicted_logits, predicted_iou = model(
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
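The final RuntimeError in the traceback above is the key: the TorchScript module's pixel_mean/pixel_std buffers sit on one device while the input image tensor sits on the other, so (x - self.pixel_mean) fails. This is not the node's actual code, just a minimal sketch of the general PyTorch remedy — move the input to the same device as the model's buffers before the forward call (the normalization values are illustrative stand-ins):

```python
import torch

# The .jit module's buffers live on whatever device the model was loaded to;
# an input created on the CPU must be moved there first, or preprocess()
# ends up subtracting a cuda:0 tensor from a cpu one.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Illustrative stand-ins for the model's buffers and an incoming image batch.
pixel_mean = torch.tensor([123.675, 116.28, 103.53]).view(1, 3, 1, 1).to(device)
pixel_std = torch.tensor([58.395, 57.12, 57.375]).view(1, 3, 1, 1).to(device)
image = torch.rand(1, 3, 64, 64)   # e.g. decoded on the CPU from numpy

image = image.to(device)           # align devices before the arithmetic
normalized = (image - pixel_mean) / pixel_std   # no cross-device op now
```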
Turning off this option makes it work normally.
But the mask still can't be output. Does yours work?
After turning it off, I still get the same error.
Try disabling ComfyUI-Easy-Use.
I also faced this problem. I disabled all unused nodes, and after rebooting Comfy my workflow worked!
It requires inference-gpu[yolo-world]==0.9.13, but the latest version is already 0.20.1. How should I handle this so the export keeps working normally? Thanks.
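On the version question: I don't know of an official answer, but you can at least detect the mismatch before running a workflow. A hypothetical diagnostic sketch (the 0.9.13 pin is taken from the question above; check_pin is my own helper, not part of the node):

```python
from importlib.metadata import PackageNotFoundError, version

REQUIRED = "0.9.13"  # the version the node's requirements pin (from the question above)

def check_pin(package: str, required: str) -> str:
    """Return a short status string comparing installed vs. required version."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return f'{package} is not installed; run: pip install "{package}=={required}"'
    if installed == required:
        return f"{package}=={installed} matches the pin"
    return (f"{package}=={installed} is installed but {required} is pinned; "
            f'downgrade with: pip install "{package}=={required}"')

print(check_pin("inference-gpu", REQUIRED))
```

Running a pinned old version alongside newer packages is easiest in a dedicated virtual environment, so upgrading other nodes doesn't clobber the pin.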
got prompt
[rgthree] Using rgthree's optimized recursive execution.
!!! Exception during processing!!! The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: invalid vector subscript
Traceback (most recent call last):
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 149, in yoloworld_esam_image
    detections.mask = inference_with_boxes(
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 59, in inference_with_boxes
    mask = inference_with_box(image, box, model, device)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\utils\efficient_sam.py", line 28, in inference_with_box
    predicted_logits, predicted_iou = model(
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_Mie_V2.0\ComfyUI_Mie_V2.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: invalid vector subscript