intel / intel-extension-for-pytorch

A Python package for extending the official PyTorch that can easily obtain performance on Intel platforms
Apache License 2.0

ERROR: Could not find a version that satisfies the requirement torch==2.0.1a0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1) #412

Open sdn3rd opened 1 year ago

sdn3rd commented 1 year ago

Describe the bug

When entering my SD venv and trying to install the appropriate IPEX-supported modules, I get this error:

python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu

I am running this in my venv; I uninstalled all modules and purged the pip cache before trying.

Versions

Python 3.10.12

Output of pip list (Package / Version pairs):


absl-py 1.4.0 accelerate 0.20.3 addict 2.4.0 aenum 3.1.15 aiofiles 23.2.1 aiohttp 3.8.5 aiosignal 1.3.1 altair 5.0.1 annotated-types 0.5.0 antlr4-python3-runtime 4.9.3 anyio 3.7.1 appdirs 1.4.4 astunparse 1.6.3 async-timeout 4.0.3 attrs 23.1.0 basicsr 1.4.2 beautifulsoup4 4.12.2 blendmodes 2022 boltons 23.0.0 cachetools 5.3.1 certifi 2023.7.22 cffi 1.15.1 charset-normalizer 3.2.0 clean-fid 0.1.35 click 8.1.7 clip 1.0 clip-interrogator 0.6.0 cmake 3.27.2 colorama 0.4.6 coloredlogs 15.0.1 colorlog 6.7.0 compel 2.0.2 contourpy 1.1.0 convcolors 2.2.0 cssselect2 0.7.0 cycler 0.11.0 dadaptation 3.1 deprecation 2.1.0 diffusers 0.20.0 discord-webhook 1.1.0 easydev 0.12.1 einops 0.4.1 exceptiongroup 1.1.3 extcolors 1.0.0 facexlib 0.3.0 fastapi 0.94.1 fasteners 0.18 ffmpy 0.3.1 filelock 3.12.2 filetype 1.2.0 filterpy 1.4.5 flatbuffers 23.5.26 fonttools 4.42.1 frozenlist 1.4.0 fsspec 2023.6.0 ftfy 6.1.1 future 0.18.3 fvcore 0.1.5.post20221221 gast 0.4.0 gdown 4.7.1 gfpgan 1.3.8 gitdb 4.0.10 GitPython 3.1.32 google-auth 2.22.0 google-auth-oauthlib 1.0.0 google-pasta 0.2.0 gradio 3.41.2 gradio_client 0.5.0 greenlet 2.0.2 grpcio 1.57.0 h11 0.14.0 h5py 3.9.0 httpcore 0.17.3 httpx 0.24.1 huggingface-hub 0.16.4 humanfriendly 10.0 idna 3.4 imageio 2.31.2 importlib-metadata 6.8.0 importlib-resources 6.0.1 inflection 0.5.1 intel-extension-for-tensorflow 2.13.0.0 intel-extension-for-tensorflow-lib 2.13.0.0.1 invisible-watermark 0.1.5 iopath 0.1.9 Jinja2 3.1.2 joblib 1.3.2 jsonmerge 1.9.2 jsonschema 4.19.0 jsonschema-specifications 2023.7.1 keras 2.13.1 kiwisolver 1.4.5 kornia 0.7.0 lark 1.1.7 lazy_loader 0.3 libclang 16.0.6 lightning-utilities 0.9.0 lion-pytorch 0.1.2 lit 16.0.6 llvmlite 0.40.1 lmdb 1.4.1 lpips 0.1.4 lxml 4.9.3 Markdown 3.4.4 markdown-it-py 3.0.0 MarkupSafe 2.1.3 matplotlib 3.7.2 mdurl 0.1.2 mediapipe 0.10.3 mpmath 1.3.0 multidict 6.0.4 networkx 3.1 numba 0.57.1 numexpr 2.8.4 numpy 1.24.4 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 
nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 omegaconf 2.3.0 onnxruntime 1.15.1 open-clip-torch 2.20.0 opencv-contrib-python 4.8.0.76 opencv-contrib-python-headless 4.8.0.76 opencv-python 4.8.0.76 opencv-python-headless 4.7.0.72 opt-einsum 3.3.0 orjson 3.9.5 packaging 23.1 pandas 1.5.3 pexpect 4.8.0 pi-heif 0.13.0 piexif 1.1.3 Pillow 9.5.0 pip 22.0.2 platformdirs 3.10.0 pooch 1.7.0 portalocker 2.7.0 protobuf 3.20.3 psutil 5.9.5 ptyprocess 0.7.0 py-cpuinfo 9.0.0 pyasn1 0.5.0 pyasn1-modules 0.3.0 pycparser 2.21 pydantic 1.10.11 pydantic_core 2.6.3 pydub 0.25.1 Pygments 2.16.1 PyMatting 1.1.8 pyparsing 3.0.9 PySocks 1.7.1 python-dateutil 2.8.2 python-multipart 0.0.6 pytorch-lightning 1.9.4 pytz 2023.3 PyWavelets 1.4.1 PyYAML 6.0.1 realesrgan 0.3.0 referencing 0.30.2 regex 2023.8.8 rembg 2.0.38 reportlab 4.0.4 requests 2.31.0 requests-oauthlib 1.3.1 resize-right 0.0.2 rich 13.5.2 rpds-py 0.9.2 rsa 4.9 safetensors 0.3.3 scikit-image 0.21.0 scikit-learn 1.3.0 scipy 1.11.2 seaborn 0.12.2 segment-anything 1.0 semantic-version 2.10.0 Send2Trash 1.8.2 sentencepiece 0.1.99 setuptools 59.6.0 six 1.16.0 smmap 5.0.0 sniffio 1.3.0 sounddevice 0.4.6 soupsieve 2.4.1 SQLAlchemy 2.0.20 starlette 0.26.1 supervision 0.13.0 svglib 1.5.1 sympy 1.12 tabulate 0.9.0 tb-nightly 2.15.0a20230827 tensorboard 2.13.0 tensorboard-data-server 0.7.1 tensorflow 2.13.0 tensorflow-estimator 2.13.0 tensorflow-io-gcs-filesystem 0.33.0 termcolor 2.3.0 threadpoolctl 3.2.0 tifffile 2023.8.25 timm 0.6.13 tinycss2 1.2.1 tokenizers 0.13.3 tomesd 0.1.3 toml 0.10.2 tomli 2.0.1 toolz 0.12.0 torchdiffeq 0.2.3 torchmetrics 1.1.0 torchsde 0.2.5 tqdm 4.65.0 trampoline 0.1.2 transformers 4.31.0 triton 2.0.0 typing_extensions 4.7.1 tzdata 2023.3 ultralytics 8.0.163 urllib3 1.26.15 uvicorn 
0.23.2 voluptuous 0.13.1 wcwidth 0.2.6 webencodings 0.5.1 websockets 11.0.3 Werkzeug 2.3.7 wheel 0.41.2 wrapt 1.15.0 yacs 0.1.8 yapf 0.40.1 yarl 1.9.2 zipp 3.16.2

jingxu10 commented 1 year ago

Is your OS Windows or Linux/WSL2?

mihiris-here commented 1 year ago

I got the same error. Unlike the OP, however, I did not uninstall any libraries first. My OS is Windows 11.

martinmCGG commented 1 year ago

I also experienced this (~30 minutes ago) on Linux on the Intel Developer Cloud, but the same command works on my PC. Adding -v -v -v to the pip invocation may provide additional info for debugging; in my case the relevant part of the log says:

Looking in links: https://developer.intel.com/ipex-whl-stable-xpu
2 location(s) to search for versions of torch:
* https://developer.intel.com/ipex-whl-stable-xpu
* https://pypi.org/simple/torch/
Fetching project page and analyzing links: https://developer.intel.com/ipex-whl-stable-xpu
Getting page https://developer.intel.com/ipex-whl-stable-xpu
Looking up "https://developer.intel.com/ipex-whl-stable-xpu" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): developer.intel.com:443
https://developer.intel.com:443 "GET /ipex-whl-stable-xpu HTTP/1.1" 301 0
Updating cache with response from "https://developer.intel.com/ipex-whl-stable-xpu"
Caching permanent redirect
Looking up "https://corpredirect.intel.com/Redirector/404Redirector.aspx?404;https://developer.intel.com/ipex-whl-stable-xpu" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): corpredirect.intel.com:443
https://corpredirect.intel.com:443 "GET /Redirector/404Redirector.aspx?404;https://developer.intel.com/ipex-whl-stable-xpu HTTP/1.1" 403 314
Status code 403 not in (200, 203, 300, 301, 308)
Could not fetch URL https://developer.intel.com/ipex-whl-stable-xpu: 403 Client Error: Forbidden for url: https://corpredirect.intel.com/Redirector/404Redirector.aspx?404;https://developer.intel.com/ipex-whl-stable-xpu - skipping
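For anyone triaging similar reports: the signal to look for in the verbose output is a non-2xx status on the final redirect hop. A throwaway parser (hypothetical, stdlib `re` only, not part of pip) that pulls failing hosts and status codes out of a `pip -v -v -v` dump:

```python
import re

# Two representative lines from the verbose pip log above
LOG = """\
https://corpredirect.intel.com:443 "GET /Redirector/404Redirector.aspx?404;https://developer.intel.com/ipex-whl-stable-xpu HTTP/1.1" 403 314
Status code 403 not in (200, 203, 300, 301, 308)
"""

def failing_requests(log: str):
    """Yield (host, status) for every request pip logged with a 4xx/5xx status."""
    pat = re.compile(r'(https?://[^\s:]+):\d+ "GET [^"]+" (\d{3})')
    for host, status in pat.findall(log):
        if int(status) >= 400:
            yield host, int(status)

print(list(failing_requests(LOG)))  # [('https://corpredirect.intel.com', 403)]
```

This immediately separates "the index is unreachable" failures like this one from genuine "no matching version" failures.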

Workaround:

  1. open the https://developer.intel.com/ipex-whl-stable-xpu URL in a web browser, on a different device/connection, with a VPN etc.
  2. if you get through the redirects and see the list of .whl URLs, copy the final URL back into pip. If the final URL starts with http:// (which is not great security-wise), you will also need to add --trusted-host [hostname from the final URL]. In my case the command was python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f 'http://ec2-52-27-27-201.us-west-2.compute.amazonaws.com/ipex-release.php?device=xpu&repo=us&release=stable' --trusted-host ec2-52-27-27-201.us-west-2.compute.amazonaws.com
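The value pip expects after --trusted-host is just the bare hostname of that final URL. A small helper (hypothetical, stdlib `urllib.parse` only) to extract it, using the AWS mirror URL from the workaround as the example:

```python
from urllib.parse import urlparse

def trusted_host_for(url: str) -> str:
    """Return the bare hostname that pip's --trusted-host flag expects."""
    return urlparse(url).hostname

url = ("http://ec2-52-27-27-201.us-west-2.compute.amazonaws.com/"
       "ipex-release.php?device=xpu&repo=us&release=stable")
print(trusted_host_for(url))  # ec2-52-27-27-201.us-west-2.compute.amazonaws.com
```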

The root cause for me seems to be corpredirect.intel.com. It resolves to multiple IP addresses (for different regions?): 104.64.160.28 works (redirects to the AWS URL), while 2.19.139.15 does not (403 Forbidden). Logs are attached (requesting the same URL with cURL, from IDC and from my PC): NOK - IDC.txt OK - my desktop.txt

daniellee1011 commented 1 year ago

$ python -m pip install torch==1.10.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
Looking in links: https://developer.intel.com/ipex-whl-stable-xpu
ERROR: Could not find a version that satisfies the requirement torch==1.10.0a0 (from versions: 2.0.0a0+gitc6a572f, 2.0.0, 2.0.1, 2.1.0)
ERROR: No matching distribution found for torch==1.10.0a0

I am using Windows 11 and can't install it.
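Note that this failure is different from the one above: here the index was reached, and the error message itself lists every torch version it actually offers (torch==1.10.0a0 is simply not among them). A small helper (hypothetical) that extracts that list from the error text so you can check your pin against it:

```python
def available_versions(error: str) -> list[str]:
    """Extract the versions pip says are available from its error message."""
    start = error.index("(from versions:") + len("(from versions:")
    end = error.index(")", start)
    return [v.strip() for v in error[start:end].split(",")]

msg = ("ERROR: Could not find a version that satisfies the requirement "
       "torch==1.10.0a0 (from versions: 2.0.0a0+gitc6a572f, 2.0.0, 2.0.1, 2.1.0)")
vers = available_versions(msg)
print("1.10.0a0" in vers, vers)  # False ['2.0.0a0+gitc6a572f', '2.0.0', '2.0.1', '2.1.0']
```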

ghost commented 1 year ago

I confirm that the above is still an issue. The host is bare-metal Ubuntu 23.10.

docker build --tag xpu:workaround - <<'EODOCKERFILE'
# syntax=docker/dockerfile:1.6

#FROM ubuntu:mantic
FROM python:3.11-bookworm

ARG DEBIAN_FRONTEND=noninteractive
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked <<'OSPREP'
apt update
apt upgrade -y
apt install -y python3-venv wget gpg
OSPREP

# PEP668 
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install --upgrade pip setuptools

# RUN pip install mkl 
# is broken? due to https://github.com/oneapi-src/oneMKL/issues/64#issuecomment-812632736
# therefore fall back to apt package
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=aptpackagemanager
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked <<'EOMKL'
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | tee /etc/apt/sources.list.d/oneAPI.list
apt update
apt install -y intel-basekit
EOMKL

# https://github.com/intel/intel-extension-for-pytorch/issues/412#issuecomment-1715605398
RUN --mount=type=cache,target=/root/.cache/pip python3 -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f 'http://ec2-52-27-27-201.us-west-2.compute.amazonaws.com/ipex-release.php?device=xpu&repo=us&release=stable' --trusted-host ec2-52-27-27-201.us-west-2.compute.amazonaws.com

#  libgomp.so.1: cannot open shared object file: No such file or directory
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y libgomp1

# libze_loader.so.1: cannot open shared object file: No such file or directory
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y libze1

# torch.xpu.is_available() false
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y intel-opencl-icd

COPY --chmod=0755 <<'EOENTRY' /docker-entrypoint.sh
#!/bin/bash
# libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory
source /opt/intel/oneapi/setvars.sh
python3
EOENTRY
ENTRYPOINT ["/docker-entrypoint.sh"]
EODOCKERFILE

docker run -it --rm --device /dev/dri --user 0 xpu:workaround

:: initializing oneAPI environment ...
   docker-entrypoint.sh: BASH_VERSION = 5.2.15(1)-release
   args: Using "$@" for setvars.sh arguments:
:: advisor -- latest
:: ccl -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpcpp-ct -- latest
:: dpl -- latest
:: ipp -- latest
:: ippcp -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: vtune -- latest
:: oneAPI environment initialized ::

Python 3.11.6 (main, Nov  1 2023, 13:35:59) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import intel_extension_for_pytorch as ipex
/opt/venv/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension. If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
>>> import torch
>>> torch.xpu.is_available()
True
>>>
>>> print(ipex.xpu.get_device_name(0))
Intel(R) Graphics [0x56a0]