comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

undefined symbol: iJIT_NotifyEvent #3513

Closed · Whackjob closed this issue 4 months ago

Whackjob commented 4 months ago

It's probably something stupid and easy to fix, but I've had zero success googling and troubleshooting this error myself. I'm just at a loss. I've tried to redo this several times without luck.

```
(venv) whackjob@WhackjobONE:/media/whackjob/16Tons/AI/ComfyUI$ ipexrun main.py --use-pytorch-cross-attention --highvram
Traceback (most recent call last):
  File "/media/whackjob/16Tons/AI/ComfyUI/venv/bin/ipexrun", line 5, in <module>
    from intel_extension_for_pytorch.launcher import main
  File "/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/__init__.py", line 3, in <module>
    import torch
  File "/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/__init__.py", line 229, in <module>
    from torch._C import *  # noqa: F403
ImportError: /media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: iJIT_NotifyEvent
```
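For context: `iJIT_NotifyEvent` belongs to Intel's ITT/JIT profiling API (libittnotify), so the loader found `libtorch_cpu.so` but nothing on the library path supplies that symbol — typically a sign of mismatched PyTorch/oneAPI component versions or an unsourced oneAPI environment. One way to check whether a given shared library actually provides a symbol is a dlopen/dlsym probe via `ctypes`; this is just a sketch using the math library as a stand-in target:

```python
import ctypes
import ctypes.util

def exports_symbol(lib_path: str, symbol: str) -> bool:
    """dlopen the library and check whether `symbol` resolves in it."""
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return False
    # ctypes performs a dlsym() lookup on attribute access.
    return hasattr(lib, symbol)

# libm is only a stand-in; point this at libtorch_cpu.so / libittnotify
# to probe the real case.
libm = ctypes.util.find_library("m") or "libm.so.6"
print(exports_symbol(libm, "cos"))               # a symbol libm does export
print(exports_symbol(libm, "iJIT_NotifyEvent"))  # the missing ITT symbol
```

`nm -D <lib> | grep <symbol>` gives the same answer from the shell, and also shows whether a reference is undefined (`U`) rather than exported.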

KiwiHana commented 1 month ago

@Whackjob Hi, does this work?

```
source venv/bin/activate
source /opt/intel/oneapi/setvars.sh
python -c "import intel_extension_for_pytorch"
```
Whackjob commented 1 month ago

Regrettably, that import gives the error also. I've been googling, trying to find out how to fix that undefined symbol.

```
(venv) @.**:/media/whackjob/16Tons/AI/ComfyUI$ python -c "import intel_extension_for_pytorch"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/__init__.py", line 95, in <module>
    from .utils._proxy_module import *
  File "/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/utils/_proxy_module.py", line 2, in <module>
    import intel_extension_for_pytorch._C
ImportError: /opt/intel/oneapi/mkl/2024.2/lib/libmkl_sycl_data_fitting.so.4: undefined symbol: _ZN4sycl3_V17handler22setKernelIsCooperativeEb
```
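That mangled name can be decoded to see what the MKL library expects from the SYCL runtime. The following is a tiny, purpose-built decoder for this one flavor of Itanium-ABI name (not a general demangler — use `c++filt` for anything real):

```python
def demangle_nested(sym: str) -> str:
    """Decode a narrow subset of Itanium C++ name mangling:
    _ZN <len><name> ... E <param-codes>, with single-letter builtin
    parameter types. Enough for the symbol in the error above."""
    if not sym.startswith("_ZN"):
        raise ValueError("only _ZN... nested names are handled")
    i, parts = 3, []
    while sym[i] != "E":
        n = 0
        while sym[i].isdigit():  # length prefix of the next name component
            n = n * 10 + int(sym[i])
            i += 1
        parts.append(sym[i:i + n])
        i += n
    builtins = {"b": "bool", "i": "int", "f": "float", "d": "double", "v": "void"}
    params = ", ".join(builtins.get(c, "?") for c in sym[i + 1:])
    return "::".join(parts) + "(" + params + ")"

print(demangle_nested("_ZN4sycl3_V17handler22setKernelIsCooperativeEb"))
# -> sycl::_V1::handler::setKernelIsCooperative(bool)
```

`sycl::_V1::handler::setKernelIsCooperative(bool)` is a handler method from newer DPC++ runtimes, so the 2024.2 MKL is most likely resolving against an older SYCL runtime library — mixing oneAPI component versions is a common cause of exactly this error.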


KiwiHana commented 1 month ago

@Whackjob You need a level_zero device listed, like:

```
$ source /opt/intel/oneapi/mkl/2024.2/env/vars.sh
$ source /opt/intel/oneapi/compiler/2024.2/env/vars.sh
$ sycl-ls
[opencl:gpu:0] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A750 Graphics OpenCL 3.0 NEO  [23.22.26516.34]
[opencl:gpu:1] Intel(R) OpenCL Graphics, Intel(R) UHD Graphics 770 OpenCL 3.0 NEO  [23.22.26516.34]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A750 Graphics 1.3 [1.3.26516]
[ext_oneapi_level_zero:gpu:1] Intel(R) Level-Zero, Intel(R) UHD Graphics 770 1.3 [1.3.26516]
```

If you choose the first device, use `export ONEAPI_DEVICE_SELECTOR=level_zero:0`.
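ONEAPI_DEVICE_SELECTOR values are `backend:device` terms separated by semicolons. As a sketch of the shape (simple numeric indices only — the real syntax also allows `*`, index lists, and `!` negation, which this ignores):

```python
def parse_selector(value: str):
    """Parse a simplified ONEAPI_DEVICE_SELECTOR string such as
    'level_zero:0' or 'level_zero:0;opencl:1' into (backend, index) pairs."""
    pairs = []
    for term in value.split(";"):
        backend, _, index = term.partition(":")
        pairs.append((backend.strip(), int(index)))
    return pairs

print(parse_selector("level_zero:0"))  # -> [('level_zero', 0)]
```

The indices refer to the numbering shown by `sycl-ls` within each backend, which is why `level_zero:0` picks the Arc card in the listing above.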

Whackjob commented 1 month ago

It seems like some progress: I at least got into ComfyUI. But when I went to run something, it ended up failing at the VAE decode node.

```
(venv) @.:/media/whackjob/16Tons/AI/ComfyUI$ source /opt/intel/oneapi/mkl/2024.2/env/vars.sh
(venv) @.:/media/whackjob/16Tons/AI/ComfyUI$ source /opt/intel/oneapi/compiler/2024.2/env/vars.sh
(venv) @.:/media/whackjob/16Tons/AI/ComfyUI$ sycl-ls
[opencl:cpu][opencl:0] Intel(R) OpenCL, AMD Ryzen 5 3600 6-Core Processor OpenCL 3.0 (Build 0) [2024.18.6.0.02_160000]
[opencl:gpu][opencl:1] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO [24.22.29735.27]
[opencl:fpga][opencl:2] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2024.18.6.0.02_160000]
[opencl:fpga][opencl:3] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2024.17.5.0.08_160000.xmain-hotfix]
[opencl:cpu][opencl:4] Intel(R) OpenCL, AMD Ryzen 5 3600 6-Core Processor OpenCL 3.0 (Build 0) [2024.18.6.0.02_160000]
[level_zero:gpu][level_zero:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.29735]
(venv) @.:/media/whackjob/16Tons/AI/ComfyUI$ export ONEAPI_DEVICE_SELECTOR=level_zero:0
(venv) @.:/media/whackjob/16Tons/AI/ComfyUI$ ipexrun main.py --use-pytorch-cross-attention --highvram
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/launcher.py:102: UserWarning: Backend is not specified, it will automatically default to cpu.
  warnings.warn(
2024-07-23 02:00:26,940 - intel_extension_for_pytorch.cpu.launch.launch - WARNING - Neither of ['tcmalloc', 'jemalloc'] memory allocator is found in ['/media/whackjob/16Tons/AI/ComfyUI/venv/lib/', '/home/whackjob/.local/lib/', '/usr/local/lib/', '/usr/local/lib64/', '/usr/lib/', '/usr/lib64/', '/usr/lib/x86_64-linux-gnu/'].
2024-07-23 02:00:26,940 - intel_extension_for_pytorch.cpu.launch.launch - INFO - Use 'default' memory allocator. This may drop the performance.
2024-07-23 02:00:26,940 - intel_extension_for_pytorch.cpu.launch.launch - WARNING - 'intel' OpenMP runtime is not found in ['/media/whackjob/16Tons/AI/ComfyUI/venv/lib/', '/home/whackjob/.local/lib/', '/usr/local/lib/', '/usr/local/lib64/', '/usr/lib/', '/usr/lib64/', '/usr/lib/x86_64-linux-gnu/'].
2024-07-23 02:00:26,940 - intel_extension_for_pytorch.cpu.launch.launch - INFO - Use 'default' OpenMP runtime.
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - Use 'auto' => 'taskset' multi-task manager.
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - env: Untouched preset environment variables are not displayed.
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - env: OMP_SCHEDULE=STATIC
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - env: OMP_PROC_BIND=CLOSE
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - env: OMP_NUM_THREADS=6
2024-07-23 02:00:26,944 - intel_extension_for_pytorch.cpu.launch.launch - INFO - cmd: taskset -c 0-5 /media/whackjob/16Tons/AI/ComfyUI/venv/bin/python3 -u main.py --use-pytorch-cross-attention --highvram
[START] Security scan
[DONE] Security scan
```

ComfyUI-Manager: installing dependencies done.

```
ComfyUI startup time: 2024-07-23 02:00:27.364289
Platform: Linux
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Python executable: /media/whackjob/16Tons/AI/ComfyUI/venv/bin/python3
ComfyUI Path: /media/whackjob/16Tons/AI/ComfyUI
Log path: /media/whackjob/16Tons/AI/ComfyUI/comfyui.log
```

```
Prestartup times for custom nodes:
   0.4 seconds: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI-Manager
```

```
Total VRAM 15474 MB, total RAM 128726 MB
pytorch version: 2.1.0.post2+cxx11.abi
Set vram state to: HIGH_VRAM
Device: xpu
Using pytorch cross attention
[Prompt Server] web root: /media/whackjob/16Tons/AI/ComfyUI/web
Adding extra search path checkpoints /media/whackjob/16Tons/models/checkpoints
Adding extra search path configs /media/whackjob/16Tons/models/cheeckpoints
Adding extra search path vae /media/whackjob/16Tons/models/vae
Adding extra search path loras /media/whackjob/16Tons/models/loras
Adding extra search path loras /media/whackjob/16Tons/models/lycoris
Adding extra search path upscale_models /media/whackjob/16Tons/models/ESRGAN
Adding extra search path upscale_models /media/whackjob/16Tons/models/RealESRGAN
Adding extra search path upscale_models /media/whackjob/16Tons/models/SwinIR
Adding extra search path embeddings /media/whackjob/16Tons/embeddings
Adding extra search path hypernetworks /media/whackjob/16Tons/models/hypernetworks
Adding extra search path controlnet /media/whackjob/16Tons/models/controlnet
```
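The "Adding extra search path" lines come from ComfyUI's `extra_model_paths.yaml`. A hypothetical reconstruction of the file implied by this log (section key and layout follow ComfyUI's `extra_model_paths.yaml.example`; the paths are taken from the log, and note that `configs` points at `models/cheeckpoints`, which looks like a typo in the original file):

```yaml
a111:
    base_path: /media/whackjob/16Tons/
    checkpoints: models/checkpoints
    configs: models/cheeckpoints
    vae: models/vae
    loras: |
        models/loras
        models/lycoris
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    hypernetworks: models/hypernetworks
    controlnet: models/controlnet
```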

Loading: ComfyUI-Manager (V2.44.1)

ComfyUI Revision: 2375 [1cde6b2e] | Released on '2024-07-16'

```
Import times for custom nodes:
   0.0 seconds: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/websocket_image_save.py
   0.0 seconds: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI-IPAnimate
   0.1 seconds: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
   0.5 seconds: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI-Manager
```

Starting server

```
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: /media/whackjob/16Tons/AI/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
loaded straight to GPU
Requested to load BaseModel
Loading 1 new model
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/frontend.py:465: UserWarning: Conv BatchNorm folding failed during the optimize process.
  warnings.warn(
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/intel_extension_for_pytorch/frontend.py:472: UserWarning: Linear BatchNorm folding failed during the optimize process.
  warnings.warn(
Requested to load SD1ClipModel
Loading 1 new model
100%|███████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:09<00:00, 2.08it/s]
Requested to load AutoencoderKL
Loading 1 new model
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'
  torch.has_cuda,
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'
  torch.has_cudnn,
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'
  torch.has_mps,
/media/whackjob/16Tons/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'
  torch.has_mkldnn,
```
