Open Ak670676 opened 1 year ago
Have you activated the oneAPI MKL env?
Hi, I got the same error "ImportError: libmkl_sycl.so.3: cannot open shared object file: No such file or directory".
I am using Ubuntu with an Intel Iris Xe Graphics GPU.
I was able to reproduce this issue when patching torch inside the Docker container https://hub.docker.com/layers/intel/intel-extension-for-pytorch/gpu/images/sha256-4d4b06040e9ee8ca4e5055142514b91506cf23880a630f8a912bb58ef61d016e?context=explore
The issue no longer occurs in a newer image, https://hub.docker.com/layers/intel/intel-extension-for-pytorch/xpu-jupyter/images/sha256-fcf8e51efb2d0e62e01a90052eee8b45b72882be907953a82510b59b82b77cc6?context=explore. Inspecting the differences between the two should help you resolve it on your end (or update your source container).
Encountered this issue as well. I suspect it is an incompatibility with oneAPI Base Toolkit v2024, which ships no libmkl_sycl.so.3 (only the .so.4 variants):
```
$ ls /opt/intel/oneapi/mkl/2024.0/lib/
cmake                             libmkl_blacs_openmpi_ilp64.so    libmkl_core.so        libmkl_gnu_thread.so.2   libmkl_lapack95_lp64.a      libmkl_scalapack_lp64.so      libmkl_sycl_dft.so.4     libmkl_sycl_vm.so.4
intel64                           libmkl_blacs_openmpi_ilp64.so.2  libmkl_core.so.2      libmkl_intel_ilp64.a     libmkl_mc3.so.2             libmkl_scalapack_lp64.so.2    libmkl_sycl_lapack.so    libmkl_tbb_thread.a
libmkl_avx2.so.2                  libmkl_blacs_openmpi_lp64.a      libmkl_def.so.2       libmkl_intel_ilp64.so    libmkl_pgi_thread.a         libmkl_sequential.a           libmkl_sycl_lapack.so.4  libmkl_tbb_thread.so
libmkl_avx512.so.2                libmkl_blacs_openmpi_lp64.so     libmkl_gf_ilp64.a     libmkl_intel_ilp64.so.2  libmkl_pgi_thread.so        libmkl_sequential.so          libmkl_sycl_rng.so       libmkl_tbb_thread.so.2
libmkl_blacs_intelmpi_ilp64.a     libmkl_blacs_openmpi_lp64.so.2   libmkl_gf_ilp64.so    libmkl_intel_lp64.a      libmkl_pgi_thread.so.2      libmkl_sequential.so.2        libmkl_sycl_rng.so.4     libmkl_vml_avx2.so.2
libmkl_blacs_intelmpi_ilp64.so    libmkl_blas95_ilp64.a            libmkl_gf_ilp64.so.2  libmkl_intel_lp64.so     libmkl_rt.so                libmkl_sycl.a                 libmkl_sycl.so           libmkl_vml_avx512.so.2
libmkl_blacs_intelmpi_ilp64.so.2  libmkl_blas95_lp64.a             libmkl_gf_lp64.a      libmkl_intel_lp64.so.2   libmkl_rt.so.2              libmkl_sycl_blas.so           libmkl_sycl_sparse.so    libmkl_vml_cmpt.so.2
libmkl_blacs_intelmpi_lp64.a      libmkl_cdft_core.a               libmkl_gf_lp64.so     libmkl_intel_thread.a    libmkl_scalapack_ilp64.a    libmkl_sycl_blas.so.4         libmkl_sycl_sparse.so.4  libmkl_vml_def.so.2
libmkl_blacs_intelmpi_lp64.so     libmkl_cdft_core.so              libmkl_gf_lp64.so.2   libmkl_intel_thread.so   libmkl_scalapack_ilp64.so   libmkl_sycl_data_fitting.so   libmkl_sycl_stats.so     libmkl_vml_mc3.so.2
libmkl_blacs_intelmpi_lp64.so.2   libmkl_cdft_core.so.2            libmkl_gnu_thread.a   libmkl_intel_thread.so.2 libmkl_scalapack_ilp64.so.2 libmkl_sycl_data_fitting.so.4 libmkl_sycl_stats.so.4   locale
libmkl_blacs_openmpi_ilp64.a      libmkl_core.a                    libmkl_gnu_thread.so  libmkl_lapack95_ilp64.a  libmkl_scalapack_lp64.a     libmkl_sycl_dft.so            libmkl_sycl_vm.so        pkgconfig
```
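To check which libmkl_sycl soname a machine actually provides (the listing above only has the .so.4 variants), here is a quick diagnostic sketch. The default install root `/opt/intel/oneapi/mkl` and the helper names are my assumptions, not part of ipex:

```python
import ctypes
import glob
import os

def find_mkl_sycl(root="/opt/intel/oneapi/mkl"):
    """List libmkl_sycl* libraries under the (assumed) oneAPI MKL tree."""
    return sorted(glob.glob(os.path.join(root, "**", "libmkl_sycl*.so*"), recursive=True))

def can_load(soname="libmkl_sycl.so.3"):
    """True if the dynamic loader can resolve the given soname right now."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("found on disk:", find_mkl_sycl() or "nothing")
    print("libmkl_sycl.so.3 loadable:", can_load("libmkl_sycl.so.3"))
    print("libmkl_sycl.so.4 loadable:", can_load("libmkl_sycl.so.4"))
```

If `.so.3` is not loadable but `.so.4` is, you are on the 2024 toolkit and the ipex wheel you installed was built against 2023.x.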
You can currently try to install Base Kit 2023.2 from apt:

```
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | tee /etc/apt/sources.list.d/oneAPI.list
apt update
apt install intel-oneapi-runtime-openmp=2023.2.2-47 intel-oneapi-runtime-dpcpp-cpp=2023.2.2-47 intel-oneapi-runtime-mkl=2023.2.0-49495
```
To save time for people encountering similar issues in late 2023: after installing intel-basekit-2023.2.0, the environment variables can be set with

```
export LD_LIBRARY_PATH=/opt/intel/oneapi/mkl/2023.2.0/lib/intel64:/opt/intel/oneapi/compiler/2023.2.0/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/compiler/2023.2.0/linux/lib
```
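One caveat with an export like the one above: the dynamic loader reads LD_LIBRARY_PATH when a process starts, so it must be set before launching Python. A small sanity check for typos in the path (helper names are hypothetical, not part of any library):

```python
import os

def ld_library_dirs():
    """Split LD_LIBRARY_PATH into its component directories."""
    return [d for d in os.environ.get("LD_LIBRARY_PATH", "").split(":") if d]

def missing_dirs():
    """Entries of LD_LIBRARY_PATH that do not exist on disk (often typos
    or a version directory that doesn't match the installed toolkit)."""
    return [d for d in ld_library_dirs() if not os.path.isdir(d)]
```

If `missing_dirs()` is non-empty, the export probably hard-codes a version directory (e.g. 2023.2.0) that differs from the one actually installed.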
How does Intel break its own packages by updating another one? You'd think the ipex team would talk with the oneAPI team. This is definitely an incompatibility with the new oneAPI Base Kit v2024.
I have a fairly similar problem with libmkl_sycl_blas.so.4, which is kind of ironic because I am on Intel cloud.
Describe the issue
during `import intel_extension_for_pytorch as ipex`:
```
     7
  ❱  8 import intel_extension_for_pytorch as ipex
     9 model = model.to('xpu')
    10 data = data.to('xpu')
    11 model = ipex.optimize(mod)

  /opt/conda/lib/python3.10/site-packages/intel_extension_for_pytorch/__init__.py:93 in

    90
    91     kernel32.SetErrorMode(prev_error_mode)
    92
  ❱ 93 from .utils._proxy_module import *
    94 from . import cpu
    95 from . import xpu
    96 from . import quantization

  /opt/conda/lib/python3.10/site-packages/intel_extension_for_pytorch/utils/_proxy_module.py:2 in

     1 import torch
  ❱  2 import intel_extension_for_pytorch._C
```
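While debugging setups like this, it can help to catch the ImportError instead of letting the full traceback fly, so you can print a targeted hint. A minimal sketch; the helper name and the message matching are mine, not part of ipex:

```python
import importlib

def try_import(module="intel_extension_for_pytorch"):
    """Return None if the module imports cleanly, else the ImportError text."""
    try:
        importlib.import_module(module)
        return None
    except ImportError as exc:
        return str(exc)

if __name__ == "__main__":
    err = try_import()
    if err and "libmkl_sycl" in err:
        print("MKL SYCL runtime not found; check the oneAPI version and LD_LIBRARY_PATH")
    elif err:
        print("import failed:", err)
    else:
        print("ipex imported OK")
```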