Open saiprasanna7 opened 1 year ago
For Workload : SpecCpu-2017 ERROR: failed to solve: failed to compute cache key: failed to calculate the checksum of ref 8f74b3ec-14bb-4b3f-bf4a-2415442bc78c::s1b8k8203tvipdcv00euqyggg: "/data": not found
As the README mentions, the user needs to manually create the /data folder and export the related binaries --> https://github.com/intel-innersource/applications.benchmarking.benchmark.external-platform-hero-features/blob/23.3_external/workload/SpecCpu-2017/README.md
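The README step above can be sketched as follows. This is a hedged sketch, not a verified procedure: the directory layout is an assumption pieced together from the error message ("/data": not found, i.e. the Dockerfile COPY has nothing to copy from the build context) and the README path.

```shell
# Hedged sketch: create the /data folder inside the workload directory and
# place your licensed SPEC CPU 2017 binaries/tarball there before running make.
# The exact path and file names are assumptions, not verified.
mkdir -p workload/SpecCpu-2017/data
# e.g. cp /path/to/your/licensed/cpu2017.tar.xz workload/SpecCpu-2017/data/
ls -d workload/SpecCpu-2017/data
```

Once the folder is populated, the COPY step has a source to checksum and the cache-key error should no longer occur.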
For Workloads: SmartScience-YOLO-MSTCN-OpenVINO, Video-Structure, 3DHuman-Pose-Estimation. These three workloads all need users to follow the steps in their READMEs to add some necessary files before running make; hopefully this solves the problems you mentioned.
SmartScience-YOLO-MSTCN-OpenVINO: https://github.com/intel/workload-services-framework/blob/main/workload/SmartScience-YOLO-MSTCN-OpenVINO/README.md#preparation
Video-Structure: https://github.com/intel/workload-services-framework/blob/main/workload/Video-Structure/README.md#preparation
3DHuman-Pose-Estimation: https://github.com/intel/workload-services-framework/blob/main/stack/3DHuman-Pose/README.md#usage
Sure, where can we find these SpecCpu-2017 binaries?
Can you provide information on how we can get these files in order to build the Docker images? I see there are files attached in the README.md for 3DHuman-Pose-Estimation, but not for the other workloads.
Can you provide information on Python-related errors for BERTLarge-PyTorch-Xeon-Public & ResNet50-PyTorch-Xeon-Public?
Sure, I will share the SpecCpu-2017 related Intel internal binary URL with you via email.
After contacting the dev team: PR https://github.com/intel/workload-services-framework/pull/46 addresses this issue.
Tried building with this patch; the BERTLarge workload is failing with the following errors:
#18 27.78 Requirement already satisfied: joblib in /root/anaconda3/lib/python3.10/site-packages (from sacremoses->transformers==3.0.2) (1.2.0)
#18 27.84 Building wheels for collected packages: tokenizers, sacremoses
#18 27.84 Building wheel for tokenizers (pyproject.toml): started
#18 28.27 Building wheel for tokenizers (pyproject.toml): finished with status 'error'
#18 28.28 error: subprocess-exited-with-error
#18 28.28
#18 28.28 × Building wheel for tokenizers (pyproject.toml) did not run successfully.
#18 28.28 │ exit code: 1
#18 28.28 ╰─> [48 lines of output]
#18 28.28 /tmp/pip-build-env-_l68llis/overlay/lib/python3.10/site-packages/setuptools/dist.py:314: InformationOnly: Normalizing '0.8.1.rc1' to '0.8.1rc1'
#18 28.28 self.metadata.version = self._normalize_version(
#18 28.28 running bdist_wheel
#18 28.28 running build
#18 28.28 running build_py
#18 28.28 creating build
#18 28.28 creating build/lib.linux-x86_64-cpython-310
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers
#18 28.28 copying tokenizers/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/models
#18 28.28 copying tokenizers/models/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/models
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/decoders
#18 28.28 copying tokenizers/decoders/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/decoders
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/normalizers
#18 28.28 copying tokenizers/normalizers/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/normalizers
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/pre_tokenizers
#18 28.28 copying tokenizers/pre_tokenizers/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/pre_tokenizers
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/processors
#18 28.28 copying tokenizers/processors/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/processors
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/trainers
#18 28.28 copying tokenizers/trainers/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/trainers
#18 28.28 creating build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/__init__.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/base_tokenizer.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/bert_wordpiece.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/char_level_bpe.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/implementations/byte_level_bpe.py -> build/lib.linux-x86_64-cpython-310/tokenizers/implementations
#18 28.28 copying tokenizers/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers
#18 28.28 copying tokenizers/models/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/models
#18 28.28 copying tokenizers/decoders/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/decoders
#18 28.28 copying tokenizers/normalizers/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/normalizers
#18 28.28 copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/pre_tokenizers
#18 28.28 copying tokenizers/processors/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/processors
#18 28.28 copying tokenizers/trainers/__init__.pyi -> build/lib.linux-x86_64-cpython-310/tokenizers/trainers
#18 28.28 running build_ext
#18 28.28 running build_rust
#18 28.28 error: can't find Rust compiler
#18 28.28
#18 28.28 If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.
#18 28.28
#18 28.28 To update pip, run:
#18 28.28
#18 28.28 pip install --upgrade pip
#18 28.28
#18 28.28 and then retry package installation.
#18 28.28
#18 28.28 If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
#18 28.28 [end of output]
#18 28.28
#18 28.28 note: This error originates from a subprocess, and is likely not a problem with pip.
#18 28.29 ERROR: Failed building wheel for tokenizers
#18 28.29 Building wheel for sacremoses (setup.py): started
#18 29.37 Building wheel for sacremoses (setup.py): finished with status 'done'
#18 29.38 Created wheel for sacremoses: filename=sacremoses-0.0.53-py3-none-any.whl size=895241 sha256=a58105eaac7a12184a43fc033ef7a7510230af243983494f6ad41d52989c879d
#18 29.38 Stored in directory: /root/.cache/pip/wheels/00/24/97/a2ea5324f36bc626e1ea0267f33db6aa80d157ee977e9e42fb
#18 29.39 Successfully built sacremoses
#18 29.39 Failed to build tokenizers
#18 29.39 ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects
#18 29.39
#18 29.39 [notice] A new release of pip is available: 23.1.1 -> 23.3
#18 29.39 [notice] To update, run: pip install --upgrade pip
------
process "/bin/bash -c source activate base && cd quickstart/language_modeling/pytorch/bert_large/inference/cpu && git clone https://github.com/huggingface/transformers.git && cd transformers && git checkout v3.0.2 && git apply ../enable_ipex_for_squad.diff && pip install -e ./ && pip install tensorboard tensorboardX" did not complete successfully: exit code: 1
workload/BERTLarge-PyTorch-Xeon-Public/CMakeFiles/build_bertlarge-pytorch-xeon-public.dir/build.make:57: recipe for target 'workload/BERTLarge-PyTorch-Xeon-Public/CMakeFiles/build_bertlarge-pytorch-xeon-public' failed
make[2]: *** [workload/BERTLarge-PyTorch-Xeon-Public/CMakeFiles/build_bertlarge-pytorch-xeon-public] Error 1
CMakeFiles/Makefile2:985: recipe for target 'workload/BERTLarge-PyTorch-Xeon-Public/CMakeFiles/build_bertlarge-pytorch-xeon-public.dir/all' failed
make[1]: *** [workload/BERTLarge-PyTorch-Xeon-Public/CMakeFiles/build_bertlarge-pytorch-xeon-public.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 2
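The root cause in the log above is the missing Rust toolchain rather than pip itself: tokenizers 0.8.1rc1 predates prebuilt wheels for CPython 3.10, so pip falls back to compiling its Rust extension from source, which requires rustc on PATH. A minimal diagnostic sketch (assumes nothing about the image):

```shell
# Hedged sketch: detect whether a Rust compiler is on PATH; if it is not,
# the tokenizers source build will fail exactly as in the log above.
if command -v rustc >/dev/null 2>&1; then
  echo "rust toolchain found: $(rustc --version)"
else
  echo "rust toolchain missing; install one via rustup (https://rustup.rs) before pip install"
fi
```

Installing a Rust toolchain in the build stage before the pip install step is one workaround; as the thread goes on to show, moving to a transformers release that ships prebuilt tokenizers wheels avoids the source build entirely.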
Tried upgrading pip to 23.3, but it still fails with the same error.
Since you have upgraded the PyTorch base stack, you also need to bump the transformers version and the benchmark code version as well. Consider changing the benchmark code from spr-launch-public to pytorch-r2.0-models (commit 168256a), changing transformers from v3.0.2 to v4.18.0, and also updating the EVAL_SCRIPT (in Dockerfile.1.inference). Those changes had already been made in innersource; please refer to PR8275 and PR8417.
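The transformers bump described above can be sketched with a sed replacement. This is a hedged sketch: the `git checkout v3.0.2` line is quoted from the failing build command earlier in the thread, and the real file to edit (Dockerfile.1.inference, per the comment) is stood in for by a throwaway fragment, since the authoritative changes live in innersource PR8275 and PR8417.

```shell
# Hedged sketch: rewrite the pinned transformers tag v3.0.2 -> v4.18.0,
# demonstrated on a stand-in fragment rather than the real Dockerfile.
printf 'git checkout v3.0.2\n' > /tmp/dockerfile.fragment
sed -i 's/v3\.0\.2/v4.18.0/' /tmp/dockerfile.fragment
cat /tmp/dockerfile.fragment   # now reads: git checkout v4.18.0
```

transformers v4.18.0 also ships prebuilt tokenizers wheels for CPython 3.10, which sidesteps the Rust-compiler source build that failed earlier in the thread.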
SpecCPU is a commercial benchmark, so you need to purchase a license from https://spec.org/cpu2017/
Summary
We're trying to build Docker images for the WSF external workloads using the make command with the v23.3 release. The Docker image builds are failing for some of the workloads with various issues.
Build Issues
Workloads and their respective issues
Workload : BERTLarge-PyTorch-Xeon-Public
ERROR: Could not find a version that satisfies the requirement tornado==6.3.3 ERROR: No matching distribution found for tornado==6.3.3
Workload : SpecCpu-2017
ERROR: failed to solve: failed to compute cache key: failed to calculate the checksum of ref 8f74b3ec-14bb-4b3f-bf4a-2415442bc78c::s1b8k8203tvipdcv00euqyggg: "/data": not found
Workload : SmartScience-YOLO-MSTCN-OpenVINO
ERROR: failed to solve: failed to compute cache key: failed to calculate checksum of ref 7076f431-11b5-40fa-bd05-b9b81ac8589c::fheik60wiqbjlxsagh6d5d4hc: "/script": not found
Workload : SPDK-NVMe-o-TCP
Docker builds do not run when trying to invoke make from the workload directory. There appears to be a bug in the make steps.
Workload : Malconv
AssertionError: Framework is not detected correctly from the model format. This could be caused by an unsupported model or inappropriate framework installation.
Workload : Video-Structure
Same issue as SPDK-NVMe-o-TCP
Workload : ResNet50-PyTorch-Xeon-Public
ERROR: Could not find a version that satisfies the requirement tornado==6.3.3 ERROR: No matching distribution found for tornado==6.3.3
Workload: 3DHuman-Pose-Estimation
ERROR: failed to solve: failed to compute cache key: failed to calculate the checksum of ref 8f74b3ec-14bb-4b3f-bf4a-2415442bc78c::lsi9ljjqcrg6rzsiznnqlueve: "/motion-tracking-sdk": not found
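As with the SpecCpu "/data" failure, this COPY error means the source directory is missing from the build context; per the 3DHuman-Pose README, the SDK files must be added before running make. The sketch below is hedged: the directory location is an assumption pieced together from the error message and the README path, not a verified layout.

```shell
# Hedged sketch: create the directory the Dockerfile COPY expects, then place
# the SDK files attached to the 3DHuman-Pose README into it before make.
mkdir -p stack/3DHuman-Pose/motion-tracking-sdk
# copy the README's attached SDK files into this directory, then re-run make
ls -d stack/3DHuman-Pose/motion-tracking-sdk
```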