open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Getting error "Torch not compiled with CUDA enabled". Need to run with CUDA on my RTX 5000 laptop. #11381

Open ChintanShahDS opened 5 months ago

ChintanShahDS commented 5 months ago

Getting an error after installing MMDetection and running the first inference command. It says PyTorch was not compiled with CUDA, but PyTorch seems to have been installed with CUDA support. I want to use the RTX 5000 in my laptop, so I do not want to run without the GPU. Requesting help to fix this issue.

Running command:
python demo/image_demo.py images/animals.png configs/mm_grounding_dino/grounding_dino_swin-t_pretrain_obj365.py --weights ../models/grounding-dino/grounding_dino_swin-t_pretrain_obj365_goldg_grit9m_v3det_20231204_095047-b448804b.pth --texts "zebra. giraffe" -c

Getting error:
Loads checkpoint by local backend from path: ../models/grounding-dino/grounding_dino_swin-t_pretrain_obj365_goldg_grit9m_v3det_20231204_095047-b448804b.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: language_model.language_backbone.body.model.embeddings.position_ids

Traceback (most recent call last):
  File "demo/image_demo.py", line 192, in <module>
    main()
  File "demo/image_demo.py", line 179, in main
    inferencer = DetInferencer(**init_args)
  File "d:\chintan\workspace\imagetotext\mmdetection\mmdet\apis\det_inferencer.py", line 99, in __init__
    super().__init__(
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\mmengine\infer\infer.py", line 180, in __init__
    self.model = self._init_model(cfg, weights, device)  # type: ignore
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\mmengine\infer\infer.py", line 486, in _init_model
    model.to(device)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\mmengine\model\base_model\base_model.py", line 208, in to
    return super().to(*args, **kwargs)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 857, in _apply
    self._buffers[key] = fn(buf)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "D:\Software\Anaconda3\envs\openmmlab\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Output of nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

Output of python mmdet/utils/collect_env.py:
sys.platform: win32
Python: 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:17:17) [MSC v.1929 64 bit (AMD64)]
CUDA available: False
numpy_random_seed: 2147483648
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.38.33133 for x64
GCC: n/a
PyTorch: 2.1.2
PyTorch compiling details: PyTorch built with:
TorchVision: 0.16.2
OpenCV: 4.9.0
MMEngine: 0.10.2
MMDetection: 3.3.0+44ebd17
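
For reference: the collect_env output above shows CUDA available: False even though nvcc reports a CUDA 12.1 toolkit. nvcc only describes the toolkit installed on the machine; whether PyTorch can use the GPU depends on how the installed PyTorch wheel itself was built. A minimal check, assuming nothing beyond the PyTorch install itself:

    import torch

    # A CPU-only build of PyTorch reports no compile-time CUDA version.
    print(torch.__version__)          # version string; pip wheels often carry a +cu121 / +cpu suffix
    print(torch.version.cuda)         # None on a CPU-only build, '12.1' on a cu121 build
    print(torch.cuda.is_available())  # False here, matching the traceback above

If torch.version.cuda comes back as None, reinstalling PyTorch from the CUDA 12.1 wheel index (for example pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121) should make torch.cuda.is_available() return True.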

JackWei01 commented 3 months ago

I followed the official doc to install the dependencies, but torch.cuda.is_available() returned False. So I used pip instead of conda to install the PyTorch dependency, and now torch.cuda.is_available() returns True. However, the inference demo on GPU now fails with a new error: "RuntimeError: nms_impl: implementation for device cuda:0 not found."
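
The nms_impl error usually means the compiled ops in the installed mmcv were built against a different PyTorch/CUDA combination than the one now in the environment (easy to end up with after swapping the PyTorch install from conda to pip). A quick comparison, assuming mmcv 2.x, which exposes these helpers in mmcv.ops:

    import torch
    from mmcv.ops import get_compiling_cuda_version, get_compiler_version

    # The CUDA version mmcv's ops were compiled with should match torch.version.cuda.
    print('torch:', torch.__version__, 'built with CUDA:', torch.version.cuda)
    print('mmcv ops compiled with CUDA:', get_compiling_cuda_version())
    print('mmcv ops compiler:', get_compiler_version())

If the two CUDA versions disagree, reinstalling mmcv after PyTorch (e.g. pip install -U openmim followed by mim install "mmcv>=2.0.0", so that a prebuilt wheel matching the current torch/CUDA pair is picked) usually clears the nms_impl error.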

JackWei01 commented 3 months ago

That version combination is not supported; using PyTorch 2.1 is recommended. See: https://github.com/open-mmlab/mmdetection/issues/11531
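
Before retrying the demo, it can help to dump the versions actually present in the environment and compare them with the ranges MMDetection declares; for the 3.3.x line the accepted mmcv range is roughly >=2.0.0,<2.2.0 (the exact bounds are in mmdet/__init__.py of your checkout):

    # Print the versions that matter for compatibility.
    import torch
    import mmcv
    import mmengine
    import mmdet

    print('torch    :', torch.__version__, '(CUDA', torch.version.cuda, ')')
    print('mmcv     :', mmcv.__version__)
    print('mmengine :', mmengine.__version__)
    print('mmdet    :', mmdet.__version__)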