lidongzh / TorchFWI

Elastic Full-Waveform Inversion Integrated with PyTorch
MIT License

File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1561, in _get_cuda_arch_flags arch_list[-1] += '+PTX' IndexError: list index out of range #2

Open linfengyu77 opened 3 years ago

linfengyu77 commented 3 years ago

Errors below occurred when running "python3 main.py --generate_data" in /TorchFWI/src:

```
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-11'
Detected CUDA files, patching ldflags
Emitting ninja build file /home/ev/TorchFWI/Ops/FWI/Src/build/build.ninja...
Traceback (most recent call last):
  File "main.py", line 7, in <module>
    from FWI_ops import *
  File "../Ops/FWI/FWI_ops.py", line 27, in <module>
    fwi_ops = load_fwi(path)
  File "../Ops/FWI/FWI_ops.py", line 16, in load_fwi
    fwi = load(name="fwi",
  File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1079, in load
    return _jit_compile(
  File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1292, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1391, in _write_ninja_file_and_build_library
    _write_ninja_file_to_build_library(
  File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1782, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
  File "/home/ev/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1561, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range
```

OS: Ubuntu 20.04 subsystem under Windows 10. Environment: PyTorch 1.8, Python 3.8, CUDA 11.1, cuDNN 11.3
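
A simplified sketch (illustrative only, not the actual PyTorch source) of why the IndexError surfaces: `_get_cuda_arch_flags` builds its architecture list either from the `TORCH_CUDA_ARCH_LIST` environment variable or from the GPUs PyTorch can see; when neither yields anything, indexing the empty list fails.

```python
# Simplified sketch of the failing logic in torch.utils.cpp_extension
# (illustrative only, not the real implementation).
import os
import torch

arch_list = []
env_arch = os.environ.get("TORCH_CUDA_ARCH_LIST")
if env_arch:
    arch_list = env_arch.replace(" ", ";").split(";")
elif torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        arch_list.append(f"{major}.{minor}")

# With no visible GPU ("No CUDA runtime is found") and no TORCH_CUDA_ARCH_LIST set,
# arch_list stays empty and this line raises IndexError: list index out of range.
arch_list[-1] += "+PTX"
```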

lidongzh commented 3 years ago

Hi, the CUDA library used to compile a custom op must be the same version as the one your PyTorch build was installed with. Could you quickly check by running "print(torch.version.cuda)" to make sure? Also, I am not sure how PyTorch works in a Windows Ubuntu subsystem; you may not be able to use CUDA properly there... Thank you!
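
A minimal check along these lines (the expected values in the comments reflect this thread):

```python
import torch

print(torch.version.cuda)         # CUDA version PyTorch was built against (expected "11.1" here)
print(torch.cuda.is_available())  # False would match the "No CUDA runtime is found" message
```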

linfengyu77 commented 3 years ago

Thank you for your reply. I got "11.1" after running "print(torch.version.cuda)". The error may be caused by the subsystem, so I will try a native Linux system instead.
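
For anyone hitting the same IndexError on a machine where PyTorch cannot see a GPU, one possible workaround (an assumption, not something verified in this thread) is to set `TORCH_CUDA_ARCH_LIST` before the extension is compiled, so that `_get_cuda_arch_flags` does not have to query a missing device. This only lets the JIT build pick an architecture; actually running the CUDA op still requires a working GPU runtime.

```python
# Hypothetical workaround sketch: choose the CUDA architectures explicitly so the
# JIT build does not probe for GPUs. "7.5+PTX" is only an example value and must
# match the GPU you actually intend to run on.
import os
os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "7.5+PTX")

# ...then run the usual entry point, e.g. python3 main.py --generate_data,
# or import FWI_ops as before.
```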