BrianSwift-Intel closed this issue 1 week ago
Yes, DeepSpeed tracks this as well: https://github.com/microsoft/DeepSpeed/issues/5671
As a workaround, they suggest pinning the numpy version to <2.0.0. There are several ways to do this, such as adding another pip command at the end of the Dockerfile to install it (pip install "numpy<2.0.0"),
or adding the numpy version to tools/env_setup.sh. This works for me for now, but it seems to be a dependency one may need to carry forward, at least in the short term...
Yes, please downgrade numpy to 1.26.4 as a workaround.
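To illustrate what the suggested pin covers, here is a small stdlib-only sketch (the helper name `satisfies_pin` is made up for illustration, not part of any tool mentioned above) that checks whether a numpy version string falls under the `<2.0.0` constraint:

```python
# Sketch: check whether a release version string satisfies the suggested
# "numpy<2.0.0" pin, using a plain tuple comparison (no pip required).
# Note: this only handles plain X.Y.Z release strings, not pre-releases.

def satisfies_pin(version: str) -> bool:
    """True if `version` is below 2.0.0, i.e. matches numpy<2.0.0."""
    parts = tuple(int(p) for p in version.split("."))
    return parts < (2, 0, 0)

print(satisfies_pin("1.26.4"))  # True: the suggested workaround version
print(satisfies_pin("2.0.0"))   # False: the release that broke DeepSpeed
```

Both `pip install "numpy<2.0.0"` and `pip install "numpy==1.26.4"` would land on a version that passes this check; the exact-pin form is more reproducible, the range form picks up future 1.x patch releases.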
@YuningQiu the only problem I had building the docker image was related to the need to specify a proxy. The following shows my proxy environment variables, the commands used to build and run the docker image, and the "sanity check" command.
ppalab@emrserver:~$ env | grep -i prox
no_proxy=127.0.0.1,localhost,intel.com,.intel.com
https_proxy=<Intel standard internal proxy>
http_proxy=<Intel standard internal proxy>
time git clone https://github.com/intel/intel-extension-for-pytorch.git
cd intel-extension-for-pytorch
time git checkout v2.3.0+cpu
time git submodule sync
time git submodule update --init --recursive
date ; DOCKER_BUILDKIT=1 time docker build --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${https_proxy} -f examples/cpu/inference/python/llm/Dockerfile -t ipex-llm:2.3.0 . ; date
mkdir ~/bswift/containerShared
docker run -v $HOME/bswift/containerShared:/home/ubuntu/containerShared -e https_proxy=$https_proxy -e http_proxy=$http_proxy -e no_proxy=$no_proxy --rm -it --privileged ipex-llm:2.3.0 bash
cd llm
source ./tools/env_activate.sh
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__);"
Thanks @BrianSwift-Intel! Do you have any other questions, or can we close this issue?
Built the v2.3.100 docker image per the instructions at https://intel.github.io/intel-extension-for-pytorch/#installation?platform=cpu&version=v2.3.100%2bcpu&os=linux%2fwsl2&package=docker ("Install from prebuilt wheel files") and ran the "sanity check":
mkdir ipex2.3.100
cd ipex2.3.100
wget https://github.com/intel/intel-extension-for-pytorch/raw/v2.3.100+cpu/docker/Dockerfile.prebuilt
DOCKER_BUILDKIT=1 time docker build --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${https_proxy} -f Dockerfile.prebuilt -t ipex_prebuilt:2.3.100 .
docker run -e https_proxy=$https_proxy -e http_proxy=$http_proxy -e no_proxy=$no_proxy --rm -it --privileged ipex_prebuilt:2.3.100
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__);"
2.3.0+cpu
2.3.100+cpu
Also did the "3.3 Docker-based environment setup with compilation from source" and executed some llama2 performance tests.
All worked without producing the "cannot import name 'BUFSIZE' from 'numpy'" traceback. Closing.
Describe the bug
After building the docker image following "[RECOMMENDED] Docker-based environment setup with pre-built wheels", executing the Sanity Check results in a traceback:
A docker image I built with the same process Friday (6/14/2024) works ok.
Maybe this is related to the numpy 2.0.0 release that just came out (6/16/2024).
Versions
In the docker image that works, the numpy version is 1.26.4.
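For reference, a quick sketch of how one might check whether an installed numpy is affected. `BUFSIZE` is the name DeepSpeed's import trips over; it was removed in numpy 2.0, so its presence or absence distinguishes the working 1.26.4 image from a broken one:

```python
# Sketch: report whether the installed numpy still exposes BUFSIZE
# (removed in numpy 2.0; its absence is what triggers DeepSpeed's
# "cannot import name 'BUFSIZE' from 'numpy'" traceback).
try:
    import numpy
    print("numpy version:", numpy.__version__)
    print("BUFSIZE present:", hasattr(numpy, "BUFSIZE"))
except ImportError:
    print("numpy is not installed")
```

On the working image this should print the 1.26.4 version and `BUFSIZE present: True`; on an image that picked up numpy 2.x it would print `False`, matching the traceback above.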