I am facing the following error while installing apex.
Error
Using pip 22.3.1 from /home/group5/anaconda3/lib/python3.10/site-packages/pip (python 3.10)
WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option / --install-option. Consider using --config-settings for more flexibility.
DEPRECATION: --no-binary currently disables reading from the cache of locally built wheels. In the future --no-binary will not influence the wheel cache. pip 23.1 will enforce this behaviour change. A possible replacement is to use the --no-cache-dir option. You can use the flag --use-feature=no-binary-enable-wheel-cache to test the upcoming behaviour. Discussion can be found at https://github.com/pypa/pip/issues/11453
Processing /home/group5/CS744/Project/apex
Running command Preparing metadata (pyproject.toml)
Warning: Torch did not find available GPUs on this system.
If your intention is to cross-compile, this is not an error.
By default, Apex will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),
Volta (compute capability 7.0), Turing (compute capability 7.5),
and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).
If you wish to cross-compile for a single specific architecture,
export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.
torch.__version__ = 2.3.0
Traceback (most recent call last):
File "/home/group5/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in
main()
File "/home/group5/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/group5/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/home/group5/anaconda3/lib/python3.10/site-packages/setuptools/build_meta.py", line 377, in prepare_metadata_for_build_wheel
self.run_setup()
File "/home/group5/anaconda3/lib/python3.10/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "", line 137, in
File "", line 24, in get_cuda_bare_metal_version
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /home/group5/anaconda3/bin/python /home/group5/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp_jnyrrpl
cwd: /home/group5/CS744/Project/apex
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
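From the traceback, the failure is inside get_cuda_bare_metal_version in apex's setup.py, which builds the path to nvcc by string concatenation before calling "nvcc -V". The snippet below is only my reading of what that function roughly does (the exact parsing details are assumptions; the real code is in apex's setup.py), to show why a missing CUDA toolkit surfaces as "NoneType + str": if CUDA_HOME cannot be resolved, cuda_dir arrives as None.

import subprocess

def get_cuda_bare_metal_version(cuda_dir):
    # Calls "<cuda_dir>/bin/nvcc -V" and parses out the CUDA release number.
    # If cuda_dir is None (CUDA_HOME was not found), the concatenation below
    # raises: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
    raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"],
                                         universal_newlines=True)
    output = raw_output.split()
    release_idx = output.index("release") + 1
    bare_metal_version = output[release_idx].split(",")[0]
    return raw_output, bare_metal_version

So the TypeError seems to indicate that no nvcc/CUDA toolkit is visible to the build process, rather than a problem with pip itself.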
Steps to Reproduce the Error
conda create --name working_environment
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install anaconda::nltk
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
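For reference, a quick way to confirm what the build environment actually sees before running the pip command (a minimal check, run from the same conda environment; my expectations in the comments are based on the log above, not on output I have verified):

import torch
from torch.utils.cpp_extension import CUDA_HOME

print(torch.__version__)          # the log above reports 2.3.0
print(torch.version.cuda)         # CUDA version this torch build was compiled for; None for CPU-only builds
print(torch.cuda.is_available())  # the "did not find available GPUs" warning suggests False
print(CUDA_HOME)                  # apex's setup.py relies on this; None here would explain the TypeError

If CUDA_HOME prints None, the build has no nvcc to query, which would match the failure above.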
Environment
Cuda Toolkit Version:
Python: 3.10.14
Conda: 23.3.1
Packages Installed: