cubiq / ComfyUI_IPAdapter_plus

GNU General Public License v3.0

Could not run 'aten::_upsample_bicubic2d_aa.out' with arguments from the 'XPU' backend. #604

Closed Whackjob closed 1 week ago

Whackjob commented 1 week ago

Error occurred when executing IPAdapter:

Could not run 'aten::_upsample_bicubic2d_aa.out' with arguments from the 'XPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_upsample_bicubic2d_aa.out' is only available for these backends: [CPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at /build/pytorch/build/aten/src/ATen/RegisterCPU.cpp:31188 [kernel]
Meta: registered at /build/pytorch/build/aten/src/ATen/RegisterMeta.cpp:26829 [kernel]
BackendSelect: fallthrough registered at /build/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /build/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /build/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /build/pytorch/build/aten/src/ATen/RegisterFunctionalization_0.cpp:21905 [kernel]
Named: registered at /build/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /build/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /build/pytorch/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at /build/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: registered at /build/pytorch/torch/csrc/autograd/generated/ADInplaceOrViewType_0.cpp:4733 [kernel]
AutogradOther: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradCPU: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradCUDA: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradHIP: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradXLA: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradMPS: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradIPU: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradXPU: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradHPU: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradVE: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradLazy: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradMTIA: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradPrivateUse1: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradPrivateUse2: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradPrivateUse3: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradMeta: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
AutogradNestedTensor: registered at /build/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:18610 [autograd kernel]
Tracer: registered at /build/pytorch/torch/csrc/autograd/generated/TraceType_0.cpp:16725 [kernel]
AutocastCPU: fallthrough registered at /build/pytorch/aten/src/ATen/autocast_mode.cpp:382 [backend fallback]
AutocastXPU: fallthrough registered at /build/intel-pytorch-extension/csrc/gpu/aten/amp/autocast_mode.cpp:45 [backend fallback]
AutocastCUDA: fallthrough registered at /build/pytorch/aten/src/ATen/autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at /build/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /build/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /build/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /build/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /build/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /build/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /build/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /build/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at /build/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:157 [backend fallback]

File "/media/whackjob/16Tons/AI/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/media/whackjob/16Tons/AI/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/media/whackjob/16Tons/AI/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 667, in apply_ipadapter
    return ipadapter_execute(model.clone(), ipadapter['ipadapter']['model'], ipadapter['clipvision']['model'], **ipa_args)
File "/media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 309, in ipadapter_execute
    img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size)
File "/media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/utils.py", line 171, in encode_image_masked
    pixel_values = clip_preprocess(img.to(clip_vision.load_device)).float()
File "/media/whackjob/16Tons/AI/ComfyUI/comfy/clip_vision.py", line 25, in clip_preprocess
    image = torch.nn.functional.interpolate(image, size=(round(scale * image.shape[2]), round(scale * image.shape[3])), mode="bicubic", antialias=True)
File "/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 4027, in interpolate
    return torch._C._nn._upsample_bicubic2d_aa(input, output_size, align_corners, scale_factors)
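For context, the failing frame in clip_vision.py resizes the image so its short side matches the CLIP vision input size before encoding. A minimal sketch of that scaling arithmetic (the helper name and the 224 px target are illustrative, inferred from the traceback):

```python
# Sketch of the resize-target computation seen in the clip_preprocess frame
# of the traceback above. Helper name is invented for illustration.
def clip_resize_shape(h: int, w: int, size: int = 224) -> tuple[int, int]:
    # Scale so the shorter image side becomes `size`, keeping aspect ratio.
    scale = size / min(h, w)
    return round(scale * h), round(scale * w)

# A 512x768 input is scaled so its short side hits 224.
print(clip_resize_shape(512, 768))  # (224, 336)
```

The resulting size tuple is what gets passed to `torch.nn.functional.interpolate` with `mode="bicubic"` and `antialias=True`, which is the call that has no XPU kernel.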

Whackjob commented 1 week ago

Ah, let's see. Using Linux Mint, Intel Arc A770 card with 16GB VRAM. On a previous install, I was able to run it fine.

cubiq commented 1 week ago

this is not a problem with ipadapter but with comfyui. XPU doesn't support bicubic sampling that is used in the clip vision encoder
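One way to see what the comment above describes is to probe whether the antialiased bicubic kernel is registered for a given device. A small hedged sketch (the helper name is invented; on a standard PyTorch build the CPU backend has the kernel, while the error above shows the XPU backend does not):

```python
import torch
import torch.nn.functional as F

def antialiased_bicubic_supported(device: str) -> bool:
    """Probe whether aten::_upsample_bicubic2d_aa runs on `device`.

    The dispatcher raises if no kernel is registered for the backend,
    which is exactly the failure mode reported in this issue on XPU.
    """
    x = torch.rand(1, 3, 8, 8, device=device)
    try:
        F.interpolate(x, size=(4, 4), mode="bicubic", antialias=True)
        return True
    except (RuntimeError, NotImplementedError):
        return False

print(antialiased_bicubic_supported("cpu"))  # True on a standard build
```

Running the same probe with `"xpu"` on the reporter's setup would return False, matching the error.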

Whackjob commented 1 week ago

I do not get this error with anything else. I can generate fine without the ipadapter nodes. What from ComfyUI could be the issue?

cubiq commented 1 week ago

ipadapter is one of the few nodes that actually uses the clipvision, that's probably why you haven't encountered this issue before

the issue is that your arc doesn't support bicubic. if you replace "bicubic" with "linear" on line 25 of the /media/whackjob/16Tons/AI/ComfyUI/comfy/clip_vision.py file it might work
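A hedged sketch of what that edit amounts to, using dummy tensors (shapes and the 224 px target are illustrative). One caveat worth noting: for the 4D NCHW tensors used here, `torch.nn.functional.interpolate` expects `mode="bilinear"`; plain `"linear"` only accepts 3D input, so "bilinear" is the mode that actually applies at that call site:

```python
# Sketch of the suggested one-line change in comfy/clip_vision.py's
# clip_preprocess (per the traceback). Input tensor is a dummy stand-in.
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 512, 768)  # dummy NCHW image batch
scale = 224 / min(image.shape[2], image.shape[3])
size = (round(scale * image.shape[2]), round(scale * image.shape[3]))

# original (dispatches to aten::_upsample_bicubic2d_aa, missing on XPU):
# image = F.interpolate(image, size=size, mode="bicubic", antialias=True)

# workaround: bilinear resampling is slightly softer but more widely supported
image = F.interpolate(image, size=size, mode="bilinear", antialias=True)
print(tuple(image.shape))  # (1, 3, 224, 336)
```

Since this edits a ComfyUI core file rather than the extension, the change would be overwritten by a ComfyUI update and would need to be reapplied.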

Whackjob commented 1 week ago

Cheers, friend. When I de-office, I'll give it a shot.
