SHI-Labs / Neighborhood-Attention-Transformer

Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022
MIT License

hello, I have already installed CUDA as required, why do I get a CUDA extension error? #20

Closed Wisdom2wisdom closed 2 years ago

Wisdom2wisdom commented 2 years ago

```
Traceback (most recent call last):
  File "E:\executable_code\Neighborhood-Attention-Transformer-main\detection\cuda\natten.py", line 10, in <module>
    'nattenav_cuda', ['cuda/nattenav_cuda.cpp', 'cuda/nattenav_cuda_kernel.cu'], verbose=False)
  File "C:\D_installation_packet\Anaconda\installion_package\envs\NAT\lib\site-packages\torch\utils\cpp_extension.py", line 1156, in load
    keep_intermediates=keep_intermediates)
  File "C:\D_installation_packet\Anaconda\installion_package\envs\NAT\lib\site-packages\torch\utils\cpp_extension.py", line 1334, in _jit_compile
    is_standalone=is_standalone,
  File "C:\D_installation_packet\Anaconda\installion_package\envs\NAT\lib\site-packages\torch\utils\_cpp_extension_versioner.py", line 45, in bump_version_if_changed
    hash_value = hash_source_files(hash_value, source_files)
  File "C:\D_installation_packet\Anaconda\installion_package\envs\NAT\lib\site-packages\torch\utils\_cpp_extension_versioner.py", line 15, in hash_source_files
    with open(filename) as file:
FileNotFoundError: [Errno 2] No such file or directory: 'cuda/nattenav_cuda.cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\executable_code\Neighborhood-Attention-Transformer-main\detection\cuda\natten.py", line 15, in <module>
    import nattenav_cuda
ModuleNotFoundError: No module named 'nattenav_cuda'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:/executable_code/Neighborhood-Attention-Transformer-main/detection/cuda/gradcheck.py", line 1, in <module>
    from natten import NATTENAVFunction, NATTENQKRPBFunction
  File "E:\executable_code\Neighborhood-Attention-Transformer-main\detection\cuda\natten.py", line 18, in <module>
    raise RuntimeError("Could not load NATTEN CUDA extension. " +
RuntimeError: Could not load NATTEN CUDA extension. Please make sure your device has CUDA, the CUDA toolkit for PyTorch is installed, and that you've compiled NATTEN correctly.
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3'
```

alihassanijr commented 2 years ago

Hello and thank you for your interest. It appears that you're trying to load natten directly from the cuda directory. Could you try importing it from the parent directory instead? As in: `import cuda.natten` from the parent directory.
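For what it's worth, the FileNotFoundError above arises because `torch.utils.cpp_extension.load` resolves relative source paths like `'cuda/nattenav_cuda.cpp'` against the current working directory, not against `natten.py` itself. A minimal, self-contained sketch of that behavior (the directory layout here is a hypothetical stand-in for the repo):

```python
import os
import tempfile

# Hypothetical layout standing in for the repository: a cuda/ subdirectory
# containing the extension source file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "cuda"))
open(os.path.join(root, "cuda", "nattenav_cuda.cpp"), "w").close()

os.chdir(root)  # launched from the parent directory
print(os.path.exists("cuda/nattenav_cuda.cpp"))  # True: relative path resolves

os.chdir(os.path.join(root, "cuda"))  # launched from inside cuda/ itself
print(os.path.exists("cuda/nattenav_cuda.cpp"))  # False: same path now misses
```

This is why running the script from the parent directory (and importing `cuda.natten`) finds the sources, while running it from inside `cuda/` does not.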

Wisdom2wisdom commented 2 years ago

> Hello and thank you for your interest. It appears that you're trying to load natten directly from the cuda directory. Could you try importing it from the parent directory instead? As in: `import cuda.natten` from the parent directory.

Hello, thank you for your reply. The extension error has been resolved.

When I run train.py, I encounter a new error, as follows:

```
C:\D_installation_packet\Anaconda\installion_package\envs\All\lib\site-packages\torch\utils\cpp_extension.py:304: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the specified file.
  warnings.warn(f'Error checking compiler version for {compiler}: {error}')
usage: train.py [-h] [--work-dir WORK_DIR] [--resume-from RESUME_FROM]
                [--no-validate] [--gpus GPUS | --gpu-ids GPU_IDS [GPU_IDS ...]]
                [--seed SEED] [--deterministic]
                [--options OPTIONS [OPTIONS ...]]
                [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]]
                [--launcher {none,pytorch,slurm,mpi}]
                [--local_rank LOCAL_RANK]
                config
train.py: error: the following arguments are required: config
```

alihassanijr commented 2 years ago

Please follow the instructions in the README file. You need to pass the correct config file for detection and segmentation.
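The usage message in the error shows that `config` is a required positional argument. A minimal argparse sketch (not the actual train.py; the config path below is hypothetical) of why the parser aborts when no config is given:

```python
import argparse

# Sketch mirroring the usage string above: `config` is a required positional
# argument, so invoking train.py with no arguments fails with exactly
# "error: the following arguments are required: config".
parser = argparse.ArgumentParser(prog="train.py")
parser.add_argument("config", help="path to the detection/segmentation config file")
parser.add_argument("--work-dir", help="directory to save logs and checkpoints")

# Passing a config file path (hypothetical) satisfies the required argument:
args = parser.parse_args(["configs/nat/hypothetical_config.py"])
print(args.config)  # configs/nat/hypothetical_config.py
```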

alihassanijr commented 2 years ago

Closing this due to inactivity. If you still have questions feel free to open it back up.

liutinglt commented 2 years ago

I had the same problem; it was solved after deleting `~/.cache/torch_extensions`.
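For readers hitting the same stale-build issue: torch caches JIT-compiled extensions by name under that directory, and a leftover build from a previous environment can shadow a fresh compile. A safe sketch of the cleanup (a temporary directory stands in for `~/.cache/torch_extensions` so this runs anywhere; on Windows the cache lives under the user profile instead):

```python
import os
import shutil
import tempfile

# Stand-in for ~/.cache/torch_extensions (the default location on Linux).
cache = os.path.join(tempfile.mkdtemp(), "torch_extensions")
os.makedirs(os.path.join(cache, "nattenav_cuda"))  # stale cached build dir

shutil.rmtree(cache)  # equivalent of `rm -rf ~/.cache/torch_extensions`
print(os.path.isdir(cache))  # False: the next import triggers a clean rebuild
```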