xiuqhou / Salience-DETR

[CVPR 2024] Official implementation of the paper "Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement"
https://arxiv.org/abs/2403.16131
Apache License 2.0
105 stars 7 forks

Hi, visualizing the validation dataset with python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset throws an error, please help #10

Open QianYing-LYG opened 3 months ago

QianYing-LYG commented 3 months ago

PS D:\GitGit\Salience-DETR> python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset
loading annotations into memory...
Done (t=0.74s)
creating index...
index created!
  0%|          | 0/5000 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "tools/visualize_datasets.py", line 96, in <module>
    visualize_datasets()
  File "tools/visualize_datasets.py", line 72, in visualize_datasets
    visualize_coco_bounding_boxes(
  File "D:\GitGit\Salience-DETR\util\visualize.py", line 243, in visualize_coco_bounding_boxes
    [None for _ in tqdm(data_loader)]
  File "D:\GitGit\Salience-DETR\util\visualize.py", line 243, in <listcomp>
    [None for _ in tqdm(data_loader)]
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 368, in __iter__
    return self._get_iterator()
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 314, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 927, in __init__
    w.start()
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'visualize_coco_bounding_boxes.<locals>.<lambda>'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

xiuqhou commented 3 months ago

I looked into this. The error most likely occurs because on Windows, PyTorch's DataLoader cannot use lambda functions when loading data with multiple worker processes. You can set the number of workers to 0 to disable multiprocess loading; just add the --workers argument to the command:

python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset --workers 0

I strongly recommend running this code on a Linux system; all sorts of problems like this crop up on Windows.
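For context, the root cause can be reproduced without PyTorch at all. On Windows, DataLoader workers are started with the spawn method, which pickles the collate_fn to send it to the child process, and a lambda defined inside a function cannot be pickled. The sketch below is hypothetical (collate_first_sample and make_lambda_collate are illustrative names, not the repo's actual code) and shows why moving the lambda to a module-level function fixes the error:

```python
import pickle


def collate_first_sample(batch):
    """A module-level collate_fn is picklable, so spawn-based DataLoader
    workers on Windows can receive it. (Hypothetical stand-in for the
    repo's original lambda collate_fn.)"""
    return batch[0]


def make_lambda_collate():
    # Roughly what the failing code did: create a lambda inside a
    # function. Such a local object cannot be pickled, which Windows
    # requires when spawning DataLoader worker processes.
    return lambda batch: batch[0]


pickle.dumps(collate_first_sample)  # fine: found by its qualified name

try:
    pickle.dumps(make_lambda_collate())
except (AttributeError, pickle.PicklingError) as exc:
    print("Can't send this to a worker process:", exc)
```

This is also why --workers 0 sidesteps the problem: with no worker processes, nothing needs to be pickled.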

xiuqhou commented 3 months ago

I have adjusted the code that caused the error; visualization with multiple worker processes is now supported on Windows. You can pull the latest version of this repository and run it.

Thanks for helping find and fix bugs in this repository. Feel free to open another issue if you run into problems~

QianYing-LYG commented 3 months ago

Thanks for the reply; I'll run it on a Linux machine anyway.

QianYing-LYG commented 3 months ago

loading annotations into memory...
Done (t=0.61s)
creating index...
index created!
  0%|          | 0/5000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "tools/visualize_datasets.py", line 96, in <module>
    visualize_datasets()
  File "tools/visualize_datasets.py", line 72, in visualize_datasets
    visualize_coco_bounding_boxes(
  File "/home/c101/cv_code/Salience-DETR/util/visualize.py", line 244, in visualize_coco_bounding_boxes
    [None for _ in tqdm(data_loader)]
  File "/home/c101/cv_code/Salience-DETR/util/visualize.py", line 244, in <listcomp>
    [None for _ in tqdm(data_loader)]
  File "/home/c101/.local/lib/python3.8/site-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/home/c101/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/home/c101/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 570, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/c101/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/c101/cv_code/Salience-DETR/util/visualize.py", line 243, in <lambda>
    data_loader.collate_fn = lambda x: visualize_single_in_coco(*x[0])
  File "/home/c101/cv_code/Salience-DETR/util/visualize.py", line 225, in visualize_single_in_coco
    image = plot_bounding_boxes_on_image_cv2(
  File "/home/c101/cv_code/Salience-DETR/util/visualize.py", line 108, in plot_bounding_boxes_on_image_cv2
    assert scores is None or len(scores) == len(labels), "#scores and #labels must be equal"
TypeError: len() of unsized object

Ugh, a new error. This is on Ubuntu, running python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset --workers 0

xiuqhou commented 3 months ago

The line numbers in this traceback don't match the latest version of the repository. While fixing the previous bug I made several changes, one of which fixes exactly this TypeError: len() of unsized object. You are probably not running the latest code; please pull the latest version and try again.
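For reference, this TypeError happens because calling len() on a 0-d NumPy array (e.g. a single scalar score) is not allowed. A minimal hypothetical sketch of the kind of fix involved (check_scores is an illustrative name, not the repo's actual function) normalizes the input to at least 1-D before the length check:

```python
import numpy as np


def check_scores(scores, labels):
    """Hypothetical sketch: a 0-d array has no len(), so promote it to
    shape (1,) before comparing against the number of labels."""
    if scores is None:
        return None
    scores = np.atleast_1d(scores)  # scalar 0.9 -> array([0.9])
    assert len(scores) == len(labels), "#scores and #labels must be equal"
    return scores


# A bare scalar like this is what triggered
# `TypeError: len() of unsized object` in the original assert:
check_scores(np.float64(0.9), np.array([1]))
```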

QianYing-LYG commented 3 months ago

Indeed, I'll give it a try. I only recently set up the Linux server and the code on it is still the old version; I assumed it would work as-is. The Windows side works now, but training failed.

QianYing-LYG commented 3 months ago

  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\GitGit\Salience-DETR\main.py", line 8, in <module>
    import accelerate
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\accelerate\__init__.py", line 16, in <module>
    from .accelerator import Accelerator
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\accelerate\accelerator.py", line 32, in <module>
    import torch
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\__init__.py", line 126, in <module>
    raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\lib\cudnn_cnn_train64_8.dll" or one of its dependencies.

The training error is above. Could this be a hardware problem with my machine?

xiuqhou commented 3 months ago

The "paging file is too small" error means the machine is running out of memory. You can fix it by adding RAM or by increasing the size of the virtual memory (page file).

QianYing-LYG commented 3 months ago

Got it.

QianYing-LYG commented 3 months ago

One more: the training command fails with CUDA_VISIBLE_DEVICES=0 : The term 'CUDA_VISIBLE_DEVICES=0' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1

xiuqhou commented 3 months ago

Here CUDA_VISIBLE_DEVICES=0 is the Linux way of temporarily setting the environment variable CUDA_VISIBLE_DEVICES to 0 for one command, so the command that follows only uses GPU 0. Windows does not accept this syntax.

As you mentioned, you can set the environment variable in main.py with os.environ["CUDA_VISIBLE_DEVICES"] = "0", but note that it must run before accelerate and PyTorch are imported.
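A minimal sketch of that ordering, assuming the script imports accelerate (which itself imports torch): the assignment has to come first, because torch reads CUDA_VISIBLE_DEVICES when it initializes CUDA and ignores later changes.

```python
import os

# Must run before the first `import torch` anywhere in this process;
# CUDA device visibility is fixed once torch initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only GPU 0

# import accelerate  # safe here: accelerate imports torch internally
# import torch       # also safe: the variable is already set
```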

Alternatively, use the Windows command for setting environment variables. Open a command prompt and enter:

set CUDA_VISIBLE_DEVICES=0
# then run the training command
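One caveat, judging from the "not recognized as a cmdlet" message and the PS prompt in the first traceback, you seem to be using PowerShell rather than cmd.exe, and set only works in cmd.exe. The equivalent in each shell (sketch, not repo-specific):

```shell
# cmd.exe (Command Prompt):
set CUDA_VISIBLE_DEVICES=0

# PowerShell uses a different syntax:
$env:CUDA_VISIBLE_DEVICES = "0"
```

Either way, the variable applies to every command run afterwards in that same shell session.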

Please just email your contact info to xiuqhou@stu.xjtu.edu.cn and I'll add you.