xiuqhou / Relation-DETR

[ECCV2024 Oral] Official implementation of the paper "Relation DETR: Exploring Explicit Position Relation Prior for Object Detection"
Apache License 2.0

MultiScale deformable attention #3

Closed · Cuviews closed this issue 4 months ago

Cuviews commented 4 months ago

Question

Hi, I'd like to ask what the Relation DETR and Salient DETR codebases are built on. Are they based on platforms such as mmdet or detrex? I'm mostly familiar with the official DINO-style codebase, where the operator package has to be compiled ahead of time. When I run your code I get the warning `UserWarning: Failed to load MultiScaleDeformableAttention C++ extension: Ninja is required to load C++ extensions warnings.warn(f"Failed to load MultiScaleDeformableAttention C++ extension: {e}")`, but training still runs normally. Is this a problem with my environment, or does it mean the model trains without deformable attention?

Additional information

No response

Cuviews commented 4 months ago

Single-GPU training works fine, but with multiple GPUs I get this error: RuntimeError: No backend type associated with device type cpu

xiuqhou commented 4 months ago

Our code is written from scratch on top of native PyTorch; it is not built on detection frameworks such as mmdet or detrex/detectron2, mainly to keep it easy to maintain and extend. mmdet and detrex pre-compile the CUDA operators with a setup script, whereas we use JIT compilation, which builds the operator automatically the first time the code runs. That makes it a bit more convenient to use; otherwise there is no difference.
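For context, PyTorch's JIT route typically goes through `torch.utils.cpp_extension.load`, roughly as in the sketch below. The source file paths are placeholders for illustration, not the repository's actual files.

```python
# Minimal sketch of PyTorch's JIT extension loading (not this repo's actual code).
# The source paths below are hypothetical placeholders.
from torch.utils.cpp_extension import load

ms_deform_attn = load(
    name="MultiScaleDeformableAttention",
    sources=[
        "ops/ms_deform_attn.cpp",      # placeholder C++ binding
        "ops/ms_deform_attn_cuda.cu",  # placeholder CUDA kernel
    ],
    verbose=True,  # prints the ninja build log on the first run
)
```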

The warning means the compilation failed. In that case the code falls back to a pure-PyTorch implementation of MultiScaleDeformableAttention, so it still runs, just more slowly. From the error message, the ninja package is not installed, so the operator cannot be compiled; try `pip install ninja` and run the code again. Also note that Windows has poor support for compiling custom operators, so please run the code on Linux if possible.
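As a quick sanity check (just a sketch using the standard torch.utils.cpp_extension helpers), you can verify that PyTorch can find ninja before re-running training:

```python
# Check that PyTorch can locate ninja for JIT-compiling C++/CUDA extensions.
from torch.utils.cpp_extension import is_ninja_available, verify_ninja_availability

print("ninja available:", is_ninja_available())  # True once `pip install ninja` succeeds
verify_ninja_availability()  # raises RuntimeError if ninja cannot be found
```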

Could you share the complete error message so I can look into it further?

Cuviews commented 4 months ago

Thank you so much for the reply!!! Could you help me look at this error? It appears during multi-GPU training (the same traceback is printed once per GPU process):

```
Traceback (most recent call last):
  File "main.py", line 222, in <module>
    train()
  File "main.py", line 195, in train
    train_one_epoch_acc(
  File "/mnt/disk1/guo/Relation-DETR-main/util/engine.py", line 46, in train_one_epoch_acc
    loss_dict = model(data_batch)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1515, in forward
    inputs, kwargs = self._pre_forward(*inputs, **kwargs)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1416, in _pre_forward
    self._sync_buffers()
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2041, in _sync_buffers
    self._sync_module_buffers(authoritative_rank)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2045, in _sync_module_buffers
    self._default_broadcast_coalesced(authoritative_rank=authoritative_rank)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2066, in _default_broadcast_coalesced
    self._distributed_broadcast_coalesced(bufs, bucket_size, authoritative_rank)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1982, in _distributed_broadcast_coalesced
    dist._broadcast_coalesced(
RuntimeError: No backend type associated with device type cpu
[2024-07-19 16:07:47,671] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 349288 closing signal SIGTERM
[2024-07-19 16:07:47,745] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 349289) of binary: /home/guo/anaconda3/envs/relation/bin/python3.8
Traceback (most recent call last):
  File "/home/guo/anaconda3/envs/relation/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/accelerate/commands/launch.py", line 1088, in launch_command
    multi_gpu_launcher(args)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/accelerate/commands/launch.py", line 733, in multi_gpu_launcher
    distrib_run.run(args)
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/guo/anaconda3/envs/relation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main.py FAILED
------------------------------------------------------------
Failures:
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-19_16:07:47
  host      : abc123-Super-Server
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 349289)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
xiuqhou commented 4 months ago

@Cuviews Could you share the versions of pytorch, torchvision, and the other libraries? I'll run multi-GPU training on my side and see whether I can reproduce the problem.
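One quick way to collect the relevant versions (just a convenience sketch; `pip list` works equally well):

```python
# Print the package versions most relevant to this issue.
import torch
import torchvision
import accelerate

print("torch       :", torch.__version__)
print("torchvision :", torchvision.__version__)
print("accelerate  :", accelerate.__version__)
print("CUDA build  :", torch.version.cuda)
```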

Cuviews commented 4 months ago

```
absl-py                  2.1.0
accelerate               0.32.1
albucore                 0.0.12
albumentations           1.4.11
annotated-types          0.7.0
antlr4-python3-runtime   4.9.3
astunparse               1.6.3
cachetools               5.4.0
certifi                  2024.7.4
charset-normalizer       3.3.2
contourpy                1.1.1
cycler                   0.12.1
eval_type_backport       0.2.0
filelock                 3.15.4
fonttools                4.53.1
fsspec                   2024.6.1
fvcore                   0.1.5.post20221221
google-auth              2.32.0
google-auth-oauthlib     1.0.0
grpcio                   1.65.1
huggingface-hub          0.24.0
idna                     3.7
imageio                  2.34.2
importlib_metadata       8.0.0
importlib_resources      6.4.0
iopath                   0.1.10
Jinja2                   3.1.4
joblib                   1.4.2
kiwisolver               1.4.5
lazy_loader              0.4
Markdown                 3.6
MarkupSafe               2.1.5
matplotlib               3.7.5
mpmath                   1.3.0
networkx                 3.1
ninja                    1.11.1.1
numpy                    1.24.4
oauthlib                 3.2.2
omegaconf                2.3.0
opencv-python-headless   4.10.0.84
packaging                24.1
pillow                   10.4.0
pip                      22.3.1
platformdirs             4.2.2
portalocker              2.10.1
protobuf                 3.19.0
psutil                   6.0.0
pyasn1                   0.6.0
pyasn1_modules           0.4.0
pycocotools              2.0.7
pydantic                 2.8.2
pydantic_core            2.20.1
pyparsing                3.1.2
python-dateutil          2.9.0.post0
PyWavelets               1.4.1
PyYAML                   6.0.1
requests                 2.32.3
requests-oauthlib        2.0.0
rsa                      4.9
safetensors              0.4.3
scikit-image             0.21.0
scikit-learn             1.3.2
scipy                    1.10.1
setuptools               65.5.1
six                      1.16.0
sympy                    1.13.0
tabulate                 0.9.0
tensorboard              2.14.0
tensorboard-data-server  0.7.2
termcolor                2.4.0
terminaltables           3.1.10
threadpoolctl            3.5.0
tifffile                 2023.7.10
tomli                    2.0.1
torch                    2.1.0+cu121
torchvision              0.16.0+cu121
tqdm                     4.66.4
triton                   2.1.0
typing_extensions        4.12.2
urllib3                  2.2.2
Werkzeug                 3.0.3
wheel                    0.38.4
yacs                     0.1.8
yapf                     0.40.2
zipp                     3.19.2
```

xiuqhou commented 4 months ago

I installed an environment with the versions you listed but still could not reproduce the problem; two-GPU training works fine on my side. After looking into the error message, my guess is that although multi-GPU training is requested, the model is actually still on the CPU. Please check that your GPUs are working properly, that CUDA_VISIBLE_DEVICES is set correctly, and that torch.cuda.is_available() returns True.
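A minimal sanity check along those lines (just a sketch using standard PyTorch calls):

```python
# Verify that the process actually sees the GPUs before launching multi-GPU training.
import os
import torch
import torch.distributed as dist

print("CUDA_VISIBLE_DEVICES     :", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("torch.cuda.is_available():", torch.cuda.is_available())
print("torch.cuda.device_count():", torch.cuda.device_count())
print("NCCL backend available   :", dist.is_nccl_available())
```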

If all of that looks fine, try running other code in the same environment to rule out an environment problem, or set up a fresh conda environment following README.md and run our code again.