Open — QianYing-LYG opened this issue 5 months ago
I searched around for this; the error is likely because, on Windows, the PyTorch DataLoader does not support lambda functions when loading data with multiple worker processes. You can set the number of workers to 0 to disable multiprocess loading — just add the --workers 0 argument to the command:
python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset --workers 0
I strongly recommend running the code on a Linux system; all kinds of problems come up on Windows.
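For context, the root cause is that Windows uses the spawn start method, so everything handed to a DataLoader worker (including a collate function) must be picklable, and lambdas are not. A minimal stdlib-only sketch of the difference (the function names here are illustrative, not the repository's actual code):

```python
import pickle

# A lambda cannot be pickled, which is why Windows DataLoader workers
# (spawned processes that must pickle their arguments) fail on it.
lambda_collate = lambda batch: batch

# A module-level named function pickles fine and works with num_workers > 0.
def named_collate(batch):
    return batch

try:
    pickle.dumps(lambda_collate)
except (pickle.PicklingError, AttributeError):
    print("lambda is not picklable")

pickle.dumps(named_collate)  # succeeds
```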
I have adjusted the code that caused the error, and visualization with multiprocessing is now supported on Windows. You can download the latest version of this repository and run it.
Thanks for helping find and fix bugs in this repository — feel free to open another issue if you run into problems~
Thanks for the reply — I will still run it on a Linux host.
loading annotations into memory...
Done (t=0.61s)
creating index...
index created!
0%| | 0/5000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "tools/visualize_datasets.py", line 96, in <module>
The line numbers in this error do not match the latest version of the repository. While fixing the previous bug I made several changes, one of which fixed TypeError: len() of unsized object. You probably did not download the latest code — please try again with the latest version.
Indeed, I'll try that. I only recently set up a Linux server, so the code on it is still the old version; I assumed I could use it as-is. The Windows side does work now, but training failed:
File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "D:\Anaconda3\envs\salience_detr\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\GitGit\Salience-DETR\main.py", line 8, in <module>
The "paging file is too small" error means the machine is short on memory; you can add RAM or increase the virtual memory (paging file size) to resolve it.
OK.
One more thing — the training command fails with:
CUDA_VISIBLE_DEVICES=0 : The term 'CUDA_VISIBLE_DEVICES=0' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ CategoryInfo : ObjectNotFound: (CUDA_VISIBLE_DEVICES=0:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
If I add os.environ["CUDA_VISIBLE_DEVICES"] = "0" in main.py
and then run accelerate launch main.py, will that work?
Here CUDA_VISIBLE_DEVICES=0 is the Linux way of temporarily setting the environment variable CUDA_VISIBLE_DEVICES to 0 for the command that follows, so only GPU 0 is used. Windows does not support setting environment variables this way.
You can set the environment variable in main.py with os.environ["CUDA_VISIBLE_DEVICES"] = "0" as you suggested, but note that it must run before accelerate and PyTorch are imported.
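A sketch of what the top of main.py would look like with that ordering (the torch/accelerate imports are commented out so the snippet stands alone; in the real file they would be live imports placed after the assignment):

```python
import os

# Must be set BEFORE torch/accelerate are imported, because they read
# CUDA-related environment variables at import time.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import torch        # these imports must come after the line above
# import accelerate

print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 0
```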
Of course, you can also use the Windows command for setting environment variables. Open a command prompt and enter:
set CUDA_VISIBLE_DEVICES=0
# then run the training command
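To verify the variable actually reaches a launched process, here is a quick stdlib-only check (not repo code; it simulates how a launcher such as accelerate inherits the parent environment):

```python
import os
import subprocess
import sys

# Child processes inherit the parent's environment, so a launcher started
# after `set CUDA_VISIBLE_DEVICES=0` (or os.environ[...] = "0") sees the value.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env, capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # -> 0
```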
Please just reach me at my email: xiuqhou@stu.xjtu.edu.cn
Could you share a contact method? I'll add you.
PS D:\GitGit\Salience-DETR> python tools/visualize_datasets.py --coco-img data/coco/val2017 --coco-ann data/coco/annotations/instances_val2017.json --show-dir /tools/visualize_dataset
loading annotations into memory...
Done (t=0.74s)
creating index...
index created!
  0%|          | 0/5000 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "tools/visualize_datasets.py", line 96, in <module>
    visualize_datasets()
  File "tools/visualize_datasets.py", line 72, in visualize_datasets
    visualize_coco_bounding_boxes(
  File "D:\GitGit\Salience-DETR\util\visualize.py", line 243, in visualize_coco_bounding_boxes
    [None for _ in tqdm(dataloader)]
  File "D:\GitGit\Salience-DETR\util\visualize.py", line 243, in <listcomp>
    [None for _ in tqdm(dataloader)]
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 368, in __iter__
    return self._get_iterator()
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 314, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda3\envs\salience_detr\lib\site-packages\torch\utils\data\dataloader.py", line 927, in __init__
    w.start()
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'visualize_coco_bounding_boxes.<locals>.<lambda>'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\Anaconda3\envs\salience_detr\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
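For reference, the EOFError in the child process is just fallout from the parent's pickling failure. On Windows the spawn start method imposes two constraints: worker callables must be module-level (picklable), and process creation must sit under an `if __name__ == "__main__"` guard. A minimal stdlib sketch of the pattern (illustrative, not the repository's code):

```python
import multiprocessing as mp

def square(x):
    # Module-level function: picklable under spawn, unlike a lambda
    # or a function defined inside another function.
    return x * x

if __name__ == "__main__":
    # spawn is the default start method on Windows; forcing it here
    # reproduces the same constraints on any platform.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # -> [1, 4, 9]
```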