open-mmlab / mmrotate

OpenMMLab Rotated Object Detection Toolbox and Benchmark
https://mmrotate.readthedocs.io/en/latest/
Apache License 2.0

STD model: training on the HRSC dataset fails with HRSCDataset: [Errno 2] No such file or directory: 'data/HRSC2016/FullDataSet/AllImages/data/HRSC2016/FullDataSet/Annotations/100000001.xml' #1004

Open Joey-He opened 8 months ago

Joey-He commented 8 months ago

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

master branch https://github.com/open-mmlab/mmrotate

Environment

sys.platform: win32
Python: 3.8.18 (default, Sep 11 2023, 13:39:12) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.37.32822 for x64
GCC: n/a
PyTorch: 1.11.0
PyTorch compiling details: PyTorch built with:
TorchVision: 0.12.0
OpenCV: 4.9.0
MMCV: 1.6.0
MMCV Compiler: MSVC 192930140
MMCV CUDA Compiler: 11.3
MMRotate: 0.3.4+9ea1aee

Reproduces the problem - code sample

train=dict(
    type=dataset_type,
    classwise=False,
    ann_file=data_root + 'ImageSets/trainval.txt',
    ann_subdir=data_root + 'FullDataSet/Annotations/',
    img_subdir=data_root + 'FullDataSet/AllImages/',
    img_prefix=data_root + 'FullDataSet/AllImages/',
    pipeline=train_pipeline),
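As a quick standalone check (not part of mmrotate itself), the paths assembled from data_root can be verified on disk before training starts; data_root = 'data/HRSC2016/' as in the config above:

import os.path as osp

data_root = 'data/HRSC2016/'  # same value as in the config
for rel in ('ImageSets/trainval.txt',
            'FullDataSet/Annotations/',
            'FullDataSet/AllImages/'):
    path = osp.join(data_root, rel)
    print(path, '->', 'exists' if osp.exists(path) else 'MISSING')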

Reproduces the problem - command or script

python ./tools/train.py ./configs/rotated_imted/hrsc/vit/rotated_imted_oriented_rcnn_vit_base_3x_hrsc_rr_le90_stdc_xyawh321v.py

Reproduces the problem - error message

2024-03-11 16:20:45,206 - mmrotate - INFO - Environment info:

sys.platform: win32
Python: 3.8.18 (default, Sep 11 2023, 13:39:12) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.37.32822 for x64
GCC: n/a
PyTorch: 1.11.0
PyTorch compiling details: PyTorch built with:
TorchVision: 0.12.0
OpenCV: 4.9.0
MMCV: 1.6.0
MMCV Compiler: MSVC 192930140
MMCV CUDA Compiler: 11.3
MMRotate: 0.3.4+9ea1aee

2024-03-11 16:20:45,463 - mmrotate - INFO - Distributed training: False
2024-03-11 16:20:45,665 - mmrotate - INFO - Config:
dataset_type = 'HRSCDataset'
data_root = 'data/HRSC2016/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RResize', img_scale=(800, 800)),
    dict(
        type='RRandomFlip',
        flip_ratio=[0.25, 0.25, 0.25],
        direction=['horizontal', 'vertical', 'diagonal'],
        version='le90'),
    dict(
        type='PolyRandomRotate',
        rotate_ratio=0.5,
        angles_range=180,
        auto_bound=False,
        version='le90'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(800, 800),
        flip=False,
        transforms=[
            dict(type='RResize'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=8,
    train=dict(
        type='HRSCDataset',
        classwise=False,
        ann_file='data/HRSC2016/ImageSets/trainval.txt',
        ann_subdir='data/HRSC2016/FullDataSet/Annotations/',
        img_subdir='data/HRSC2016/FullDataSet/AllImages/',
        img_prefix='data/HRSC2016/FullDataSet/AllImages/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='RResize', img_scale=(800, 800)),
            dict(
                type='RRandomFlip',
                flip_ratio=[0.25, 0.25, 0.25],
                direction=['horizontal', 'vertical', 'diagonal'],
                version='le90'),
            dict(
                type='PolyRandomRotate',
                rotate_ratio=0.5,
                angles_range=180,
                auto_bound=False,
                version='le90'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ],
        version='le90'),
    val=dict(
        type='HRSCDataset',
        classwise=False,
        ann_file='data/HRSC2016/ImageSets/test.txt',
        ann_subdir='data/HRSC2016/FullDataSet/Annotations/',
        img_subdir='data/HRSC2016/FullDataSet/AllImages/',
        img_prefix='data/HRSC2016/FullDataSet/AllImages/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(800, 800),
                flip=False,
                transforms=[
                    dict(type='RResize'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='DefaultFormatBundle'),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        version='le90'),
    test=dict(
        type='HRSCDataset',
        classwise=False,
        ann_file='data/HRSC2016/ImageSets/test.txt',
        ann_subdir='data/HRSC2016/FullDataSet/Annotations/',
        img_subdir='data/HRSC2016/FullDataSet/AllImages/',
        img_prefix='data/HRSC2016/FullDataSet/AllImages/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(800, 800),
                flip=False,
                transforms=[
                    dict(type='RResize'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='DefaultFormatBundle'),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        version='le90'))
evaluation = dict(interval=3, metric='mAP')
optimizer = dict(
    type='AdamW',
    lr=0.00025,
    betas=(0.9, 0.999),
    weight_decay=0.05,
    constructor='LayerDecayOptimizerConstructor',
    paramwise_cfg=dict(num_layers=12, layer_decay_rate=0.75))
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.3333333333333333,
    step=[24, 33])
runner = dict(type='EpochBasedRunner', max_epochs=36)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
opencv_num_threads = 0
mp_start_method = 'fork'
pretrained = 'data/pretrained/mae_pretrain_vit_base_full.pth'
angle_version = 'le90'
norm_cfg = dict(type='LN', requires_grad=True)
model = dict(
    type='RotatedimTED',
    proposals_dim=6,
    backbone=dict(
        type='VisionTransformer',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='data/pretrained/mae_pretrain_vit_base_full.pth'),
        img_size=224,
        patch_size=16,
        embed_dim=768,
        depth=12,
        num_heads=12,
        mlp_ratio=4.0,
        qkv_bias=True,
        drop_path_rate=0.2,
        learnable_pos_embed=True,
        use_checkpoint=False,
        with_simple_fpn=True,
        last_feat=True),
    neck=dict(
        type='SimpleFPN',
        in_channels=[768, 768, 768, 768],
        out_channels=256,
        norm_cfg=dict(type='LN', requires_grad=True),
        use_residual=False,
        num_outs=5),
    rpn_head=dict(
        type='OrientedRPNHead',
        in_channels=256,
        feat_channels=256,
        version='le90',
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='MidpointOffsetCoder',
            angle_range='le90',
            target_means=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0, 0.5, 0.5]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(
            type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
    roi_skip_fpn=False,
    with_mfm=True,
    roi_head=dict(
        type='OrientedStandardRoIHeadimTED',
        bbox_roi_extractor=[
            dict(
                type='RotatedSingleRoIExtractor',
                roi_layer=dict(
                    type='RoIAlignRotated',
                    out_size=7,
                    sample_num=2,
                    clockwise=True),
                out_channels=768,
                featmap_strides=[4, 8, 16, 32]),
            dict(
                type='RotatedSingleRoIExtractor',
                roi_layer=dict(
                    type='RoIAlignRotated',
                    out_size=7,
                    sample_num=2,
                    clockwise=True),
                out_channels=768,
                featmap_strides=[16])
        ],
        bbox_head=dict(
            type='RotatedMAEBBoxHeadSTDC',
            init_cfg=dict(
                type='Pretrained',
                checkpoint='data/pretrained/mae_pretrain_vit_base_full.pth'),
            use_checkpoint=False,
            in_channels=768,
            img_size=224,
            patch_size=16,
            embed_dim=512,
            depth=8,
            num_heads=16,
            mlp_ratio=4.0,
            num_classes=1,
            bbox_coder=dict(
                type='DeltaXYWHAOBBoxCoder',
                angle_range='le90',
                norm_factor=None,
                edge_swap=True,
                proj_xy=True,
                target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
                target_stds=(0.1, 0.1, 0.2, 0.2, 0.1)),
            reg_class_agnostic=True,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0),
            dc_mode_str_list=['', '', '', 'XY', '', 'A', '', 'WH'],
            num_convs_list=[0, 0, 3, 3, 2, 2, 1, 1],
            am_mode_str_list=['', '', 'V', 'V', 'V', 'V', 'V', 'V'],
            rois_mode='rbbox')),
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.8),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                iou_calculator=dict(type='RBboxOverlaps2D'),
                ignore_iof_thr=-1),
            sampler=dict(
                type='RRandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.8),
            min_bbox_size=0),
        rcnn=dict(
            nms_pre=2000,
            min_bbox_size=0,
            score_thr=0.05,
            nms=dict(iou_thr=0.1),
            max_per_img=2000)))
fp16 = dict(loss_scale=dict(init_scale=512))
work_dir = './work_dirs\rotated_imted_oriented_rcnn_vit_base_3x_hrsc_rr_le90_stdc_xyawh321v'
auto_resume = False
gpu_ids = range(0, 1)

2024-03-11 16:20:45,665 - mmrotate - INFO - Set random seed to 1887641562, deterministic: False
2024-03-11 16:20:46,646 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: mask_token, decoder_pos_embed, norm.weight, norm.bias, decoder_embed.weight, decoder_embed.bias,
decoder_blocks.0.norm1.weight, decoder_blocks.0.norm1.bias, decoder_blocks.0.attn.qkv.weight, decoder_blocks.0.attn.qkv.bias, decoder_blocks.0.attn.proj.weight, decoder_blocks.0.attn.proj.bias, decoder_blocks.0.norm2.weight, decoder_blocks.0.norm2.bias, decoder_blocks.0.mlp.fc1.weight, decoder_blocks.0.mlp.fc1.bias, decoder_blocks.0.mlp.fc2.weight, decoder_blocks.0.mlp.fc2.bias,
decoder_blocks.1.norm1.weight, decoder_blocks.1.norm1.bias, decoder_blocks.1.attn.qkv.weight, decoder_blocks.1.attn.qkv.bias, decoder_blocks.1.attn.proj.weight, decoder_blocks.1.attn.proj.bias, decoder_blocks.1.norm2.weight, decoder_blocks.1.norm2.bias, decoder_blocks.1.mlp.fc1.weight, decoder_blocks.1.mlp.fc1.bias, decoder_blocks.1.mlp.fc2.weight, decoder_blocks.1.mlp.fc2.bias,
decoder_blocks.2.norm1.weight, decoder_blocks.2.norm1.bias, decoder_blocks.2.attn.qkv.weight, decoder_blocks.2.attn.qkv.bias, decoder_blocks.2.attn.proj.weight, decoder_blocks.2.attn.proj.bias, decoder_blocks.2.norm2.weight, decoder_blocks.2.norm2.bias, decoder_blocks.2.mlp.fc1.weight, decoder_blocks.2.mlp.fc1.bias, decoder_blocks.2.mlp.fc2.weight, decoder_blocks.2.mlp.fc2.bias,
decoder_blocks.3.norm1.weight, decoder_blocks.3.norm1.bias, decoder_blocks.3.attn.qkv.weight, decoder_blocks.3.attn.qkv.bias, decoder_blocks.3.attn.proj.weight, decoder_blocks.3.attn.proj.bias, decoder_blocks.3.norm2.weight, decoder_blocks.3.norm2.bias, decoder_blocks.3.mlp.fc1.weight, decoder_blocks.3.mlp.fc1.bias, decoder_blocks.3.mlp.fc2.weight, decoder_blocks.3.mlp.fc2.bias,
decoder_blocks.4.norm1.weight, decoder_blocks.4.norm1.bias, decoder_blocks.4.attn.qkv.weight, decoder_blocks.4.attn.qkv.bias, decoder_blocks.4.attn.proj.weight, decoder_blocks.4.attn.proj.bias, decoder_blocks.4.norm2.weight, decoder_blocks.4.norm2.bias, decoder_blocks.4.mlp.fc1.weight, decoder_blocks.4.mlp.fc1.bias, decoder_blocks.4.mlp.fc2.weight, decoder_blocks.4.mlp.fc2.bias,
decoder_blocks.5.norm1.weight, decoder_blocks.5.norm1.bias, decoder_blocks.5.attn.qkv.weight, decoder_blocks.5.attn.qkv.bias, decoder_blocks.5.attn.proj.weight, decoder_blocks.5.attn.proj.bias, decoder_blocks.5.norm2.weight, decoder_blocks.5.norm2.bias, decoder_blocks.5.mlp.fc1.weight, decoder_blocks.5.mlp.fc1.bias, decoder_blocks.5.mlp.fc2.weight, decoder_blocks.5.mlp.fc2.bias,
decoder_blocks.6.norm1.weight, decoder_blocks.6.norm1.bias, decoder_blocks.6.attn.qkv.weight, decoder_blocks.6.attn.qkv.bias, decoder_blocks.6.attn.proj.weight, decoder_blocks.6.attn.proj.bias, decoder_blocks.6.norm2.weight, decoder_blocks.6.norm2.bias, decoder_blocks.6.mlp.fc1.weight, decoder_blocks.6.mlp.fc1.bias, decoder_blocks.6.mlp.fc2.weight, decoder_blocks.6.mlp.fc2.bias,
decoder_blocks.7.norm1.weight, decoder_blocks.7.norm1.bias, decoder_blocks.7.attn.qkv.weight, decoder_blocks.7.attn.qkv.bias, decoder_blocks.7.attn.proj.weight, decoder_blocks.7.attn.proj.bias, decoder_blocks.7.norm2.weight, decoder_blocks.7.norm2.bias, decoder_blocks.7.mlp.fc1.weight, decoder_blocks.7.mlp.fc1.bias, decoder_blocks.7.mlp.fc2.weight, decoder_blocks.7.mlp.fc2.bias,
decoder_norm.weight, decoder_norm.bias, decoder_pred.weight, decoder_pred.bias

missing keys in source state_dict: fpn1.0.weight, fpn1.0.bias, fpn1.1.weight, fpn1.1.bias, fpn1.1.running_mean, fpn1.1.running_var, fpn1.3.weight, fpn1.3.bias, fpn2.0.weight, fpn2.0.bias

2024-03-11 16:20:46,696 - mmrotate - INFO - initialize SimpleFPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2024-03-11 16:20:46,706 - mmrotate - INFO - initialize OrientedRPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2024-03-11 16:20:46,706 - mmdet - INFO - loading checkpoint for <class 'mmrotate.models.roi_heads.bbox_heads.rotated_mae_bbox_head_stdc.RotatedMAEBBoxHeadSTDC'>
load checkpoint from local path: data/pretrained/mae_pretrain_vit_base_full.pth
2024-03-11 16:20:46,869 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: cls_token, mask_token, decoder_norm.weight, decoder_norm.bias, decoder_pred.weight, decoder_pred.bias

missing keys in source state_dict: fc_cls.weight, fc_cls.bias,
decoder_blocks.2.layer_reg.norms.0.weight, decoder_blocks.2.layer_reg.norms.0.bias, decoder_blocks.2.layer_reg.norms.1.weight, decoder_blocks.2.layer_reg.norms.1.bias, decoder_blocks.2.layer_reg.norms.2.weight, decoder_blocks.2.layer_reg.norms.2.bias, decoder_blocks.2.layer_reg.convs.0.weight, decoder_blocks.2.layer_reg.convs.0.bias, decoder_blocks.2.layer_reg.convs.1.weight, decoder_blocks.2.layer_reg.convs.1.bias, decoder_blocks.2.layer_reg.convs.2.weight, decoder_blocks.2.layer_reg.convs.2.bias, decoder_blocks.2.layer_reg.norm_reg.weight, decoder_blocks.2.layer_reg.norm_reg.bias, decoder_blocks.2.layer_reg.fc_reg.weight, decoder_blocks.2.layer_reg.fc_reg.bias,
decoder_blocks.3.layer_reg.norms.0.weight, decoder_blocks.3.layer_reg.norms.0.bias, decoder_blocks.3.layer_reg.norms.1.weight, decoder_blocks.3.layer_reg.norms.1.bias, decoder_blocks.3.layer_reg.norms.2.weight, decoder_blocks.3.layer_reg.norms.2.bias, decoder_blocks.3.layer_reg.convs.0.weight, decoder_blocks.3.layer_reg.convs.0.bias, decoder_blocks.3.layer_reg.convs.1.weight, decoder_blocks.3.layer_reg.convs.1.bias, decoder_blocks.3.layer_reg.convs.2.weight, decoder_blocks.3.layer_reg.convs.2.bias, decoder_blocks.3.layer_reg.norm_reg.weight, decoder_blocks.3.layer_reg.norm_reg.bias, decoder_blocks.3.layer_reg.fc_reg.weight, decoder_blocks.3.layer_reg.fc_reg.bias,
decoder_blocks.4.layer_reg.norms.0.weight, decoder_blocks.4.layer_reg.norms.0.bias, decoder_blocks.4.layer_reg.norms.1.weight, decoder_blocks.4.layer_reg.norms.1.bias, decoder_blocks.4.layer_reg.convs.0.weight, decoder_blocks.4.layer_reg.convs.0.bias, decoder_blocks.4.layer_reg.convs.1.weight, decoder_blocks.4.layer_reg.convs.1.bias, decoder_blocks.4.layer_reg.norm_reg.weight, decoder_blocks.4.layer_reg.norm_reg.bias, decoder_blocks.4.layer_reg.fc_reg.weight, decoder_blocks.4.layer_reg.fc_reg.bias,
decoder_blocks.5.layer_reg.norms.0.weight, decoder_blocks.5.layer_reg.norms.0.bias, decoder_blocks.5.layer_reg.norms.1.weight, decoder_blocks.5.layer_reg.norms.1.bias, decoder_blocks.5.layer_reg.convs.0.weight, decoder_blocks.5.layer_reg.convs.0.bias, decoder_blocks.5.layer_reg.convs.1.weight, decoder_blocks.5.layer_reg.convs.1.bias, decoder_blocks.5.layer_reg.norm_reg.weight, decoder_blocks.5.layer_reg.norm_reg.bias, decoder_blocks.5.layer_reg.fc_reg.weight, decoder_blocks.5.layer_reg.fc_reg.bias,
decoder_blocks.6.layer_reg.norms.0.weight, decoder_blocks.6.layer_reg.norms.0.bias, decoder_blocks.6.layer_reg.convs.0.weight, decoder_blocks.6.layer_reg.convs.0.bias, decoder_blocks.6.layer_reg.norm_reg.weight, decoder_blocks.6.layer_reg.norm_reg.bias, decoder_blocks.6.layer_reg.fc_reg.weight, decoder_blocks.6.layer_reg.fc_reg.bias,
decoder_blocks.7.layer_reg.norms.0.weight, decoder_blocks.7.layer_reg.norms.0.bias, decoder_blocks.7.layer_reg.convs.0.weight, decoder_blocks.7.layer_reg.convs.0.bias, decoder_blocks.7.layer_reg.norm_reg.weight, decoder_blocks.7.layer_reg.norm_reg.bias, decoder_blocks.7.layer_reg.fc_reg.weight, decoder_blocks.7.layer_reg.fc_reg.bias,
decoder_box_norm.weight, decoder_box_norm.bias

Traceback (most recent call last): File "D:\Anaconda\envs\mmrotate\lib\site-packages\mmcv\utils\registry.py", line 69, in build_from_cfg return obj_cls(args) File "d:\pycharmprojects\mmrotate\mmrotate\datasets\hrsc.py", line 79, in init super(HRSCDataset, self).init(ann_file, pipeline, kwargs) File "D:\Anaconda\envs\mmrotate\lib\site-packages\mmdet\datasets\custom.py", line 95, in init self.data_infos = self.load_annotations(local_path) File "d:\pycharmprojects\mmrotate\mmrotate\datasets\hrsc.py", line 100, in load_annotations tree = ET.parse(xml_path) File "D:\Anaconda\envs\mmrotate\lib\xml\etree\ElementTree.py", line 1202, in parse tree.parse(source, parser) File "D:\Anaconda\envs\mmrotate\lib\xml\etree\ElementTree.py", line 584, in parse source = open(source, "rb") FileNotFoundError: [Errno 2] No such file or directory: 'data/HRSC2016/FullDataSet/AllImages/data/HRSC2016/FullDataSet/Annotations/100000001.xml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "./tools/train.py", line 197, in main() File "./tools/train.py", line 173, in main datasets = [build_dataset(cfg.data.train)] File "d:\pycharmprojects\mmrotate\mmrotate\datasets\builder.py", line 47, in build_dataset dataset = build_from_cfg(cfg, ROTATED_DATASETS, default_args) File "D:\Anaconda\envs\mmrotate\lib\site-packages\mmcv\utils\registry.py", line 72, in build_from_cfg raise type(e)(f'{obj_cls.name}: {e}') FileNotFoundError: HRSCDataset: [Errno 2] No such file or directory: 'data/HRSC2016/FullDataSet/AllImages/data/HRSC2016/FullDataSet/Annotations/100000001.xml'

Additional information

I am using the HRSC dataset.

Joey-He commented 8 months ago

It looks like hrsc.py concatenates img_prefix and ann_subdir. But when I comment out img_prefix, I get a different error: with open(filepath, 'rb') as f: FileNotFoundError: [Errno 2] No such file or directory: '100001476.bmp'. The image path in the error is different every time, and I don't know how to fix it.

Joey-He commented 8 months ago

Found a fix: remove self.img_prefix from the code on line 98 of mmrotate/datasets/hrsc.py.
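For reference, a sketch of what that edit amounts to. The exact code around line 98 of mmrotate/datasets/hrsc.py is not quoted in this thread, so the "before" line in the comment below is an assumption reconstructed from the traceback (line 100 is tree = ET.parse(xml_path)) and from the doubled path in the error; the join is shown standalone, outside its class:

import os.path as osp

ann_subdir = 'data/HRSC2016/FullDataSet/Annotations/'  # from the config
img_id = '100000001'

# Assumed original form around hrsc.py line 98:
#   xml_path = osp.join(self.img_prefix, self.ann_subdir, f'{img_id}.xml')
# With self.img_prefix dropped, ann_subdir (already rooted at data_root)
# resolves to the real annotation file on its own:
xml_path = osp.join(ann_subdir, f'{img_id}.xml')
print(xml_path)  # data/HRSC2016/FullDataSet/Annotations/100000001.xml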

exesit commented 6 months ago

Why do I then get ValueError: need at least one array to concatenate?

PolemoTea commented 1 week ago

Found a fix: remove self.img_prefix from the code on line 98 of mmrotate/datasets/hrsc.py.

Strange. I am not sure whether mmrotate has been updated again, but this hrsc.py change does not seem to take effect, and deleting it also caused problems.