NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch
BSD 3-Clause "New" or "Revised" License

Install set.up #1822

Open Maritime-Moon opened 4 months ago

Maritime-Moon commented 4 months ago

Warning: Torch did not find available GPUs on this system. If your intention is to cross-compile, this is not an error. By default, Apex will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2), Volta (compute capability 7.0), Turing (compute capability 7.5), and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0). If you wish to cross-compile for a single specific architecture, export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.

torch.__version__ = 2.4.0

Traceback (most recent call last):
  File "E:\anaconda\envs\pytorch\Lib\site-packages\apex\setup.py", line 137, in <module>
    _, bare_metal_version = get_cuda_bare_metal_version(CUDA_HOME)
  File "E:\anaconda\envs\pytorch\Lib\site-packages\apex\setup.py", line 24, in get_cuda_bare_metal_version
    raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
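For context, the `TypeError` happens because `get_cuda_bare_metal_version` builds the `nvcc` path as `cuda_dir + "/bin/nvcc"`, and `CUDA_HOME` resolves to `None` when no CUDA toolkit is found (consistent with the "Torch did not find available GPUs" warning above). A minimal sketch of the failure, assuming only that `CUDA_HOME` is `None` (the helper name here mirrors the traceback; this is an illustration, not apex's code):

```python
def nvcc_path(cuda_dir):
    # Mirrors the failing expression from the traceback:
    # subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], ...)
    return cuda_dir + "/bin/nvcc"

cuda_home = None  # what CUDA_HOME is when torch finds no CUDA toolkit

try:
    nvcc_path(cuda_home)
except TypeError as exc:
    # unsupported operand type(s) for +: 'NoneType' and 'str'
    print(exc)
```

So the fix is to make sure a CUDA toolkit is installed and `CUDA_HOME` (or the equivalent on Windows) points at it before running `setup.py`; with a valid path such as `/usr/local/cuda`, the same expression yields `/usr/local/cuda/bin/nvcc` and the version check can proceed.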