NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
BSD 3-Clause "New" or "Revised" License

Unable to install Apex #1810

Open Anupam-5 opened 2 weeks ago

Anupam-5 commented 2 weeks ago

Commands run:

```shell
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
```

Output:

```
kinshuk@kinshuk-lab:/media/kinshuk/My Book/molformer/notebooks/pretrained_molformer/apex$ pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
Using pip 24.0 from /home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/pip (python 3.8)
Processing /media/kinshuk/My Book/molformer/notebooks/pretrained_molformer/apex
  Running command Preparing metadata (pyproject.toml)

torch.__version__  = 1.7.1

Traceback (most recent call last):
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
    main()
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
    return hook(metadata_directory, config_settings)
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/setuptools/build_meta.py", line 366, in prepare_metadata_for_build_wheel
    self.run_setup()
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/setuptools/build_meta.py", line 311, in run_setup
    exec(code, locals())
  File "<string>", line 137, in <module>
  File "<string>", line 24, in get_cuda_bare_metal_version
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/subprocess.py", line 493, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/subprocess.py", line 1720, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'Cuda 11 install/bin/nvcc'
error: subprocess-exited-with-error
```

```
  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /home/kinshuk/anaconda3/envs/MolTran_CUDA11/bin/python /home/kinshuk/anaconda3/envs/MolTran_CUDA11/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp2ejhaygo
  cwd: /media/kinshuk/My Book/molformer/notebooks/pretrained_molformer/apex
  Preparing metadata (pyproject.toml) ... error
  error: metadata-generation-failed

  × Encountered error while generating package metadata.
  ╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
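The `FileNotFoundError` names `'Cuda 11 install/bin/nvcc'`, i.e. apex's `setup.py` is building the nvcc path from a `CUDA_HOME` that does not point at a real toolkit directory. A minimal sketch of what the `get_cuda_bare_metal_version` helper in the traceback does (simplified; the parsing details are an approximation of apex's actual code, and the path `"Cuda 11 install"` is taken verbatim from the traceback):

```python
import os
import subprocess


def get_cuda_bare_metal_version(cuda_dir):
    # Sketch of apex's check: run `nvcc -V` from the toolkit under
    # CUDA_HOME and pull the release number out of its output.
    raw_output = subprocess.check_output(
        [os.path.join(cuda_dir, "bin", "nvcc"), "-V"],
        universal_newlines=True,
    )
    # e.g. "Cuda compilation tools, release 12.4, V12.4.131"
    release_line = [l for l in raw_output.split("\n") if "release" in l][0]
    return release_line.split("release ")[1].split(",")[0]


# A CUDA_HOME that is not a real toolkit directory fails before nvcc
# ever runs, which is exactly the error in the log above:
try:
    get_cuda_bare_metal_version("Cuda 11 install")
except FileNotFoundError as e:
    print("CUDA_HOME is wrong:", e)
```

So the first thing to verify is that `CUDA_HOME` (or the default `/usr/local/cuda`) actually contains a `bin/nvcc` binary.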

Environment

CUDA (`nvcc --version`):

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
```

OS: Ubuntu 22.04.4 LTS (x86_64)

Python version: 3.8 (64-bit runtime)

PyTorch version: 1.7.1
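Note that this environment also mixes PyTorch 1.7.1, whose official wheels were built against CUDA 11.0 at the newest, with a system CUDA 12.4 toolkit. apex's `setup.py` compares `torch.version.cuda` against the nvcc it finds under `CUDA_HOME` and aborts on a mismatch, so even with `CUDA_HOME` pointing at a real toolkit, the 12.4 toolkit would likely be rejected. A hedged sketch of that comparison (the function name and the exact major-version rule are an approximation of apex's check, not its literal code):

```python
def cuda_versions_compatible(torch_cuda_version, bare_metal_version):
    """Approximation of the version check apex's setup.py performs
    between torch.version.cuda and the nvcc found under CUDA_HOME:
    the CUDA major versions must agree."""
    torch_major = torch_cuda_version.split(".")[0]
    bare_metal_major = bare_metal_version.split(".")[0]
    return torch_major == bare_metal_major


# PyTorch 1.7.1 wheels ship with CUDA 11.0; this system's nvcc is 12.4:
print(cuda_versions_compatible("11.0", "12.4"))  # → False (build aborts)
print(cuda_versions_compatible("11.0", "11.1"))  # → True (majors match)
```

Installing a CUDA 11.x toolkit (matching the `MolTran_CUDA11` environment) and pointing `CUDA_HOME` at it would sidestep both failures.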

vita133 commented 1 week ago

This worked for me:

https://github.com/NVIDIA/apex/issues/990#issuecomment-1315079899

NalinMalla commented 3 days ago

> This worked for me:
>
> #990 (comment)

Thanks, it worked for me too.