open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

IndexError: CocoDataset: list index out of range #7556

Open mukaiNO1 opened 2 years ago

mukaiNO1 commented 2 years ago

When I try to train Fast R-CNN on my own COCO-format dataset, I get the error below. I have already changed num_classes, and the same dataset trains successfully with YOLO.

Traceback (most recent call last):
  File "tools\train.py", line 209, in <module>
    main()
  File "tools\train.py", line 185, in main
    datasets = [build_dataset(cfg.data.train)]
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\builder.py", line 81, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
IndexError: CocoDataset: list index out of range
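The IndexError is raised while the dataset filters its proposal list with image indices taken from the annotation file, so a quick sanity check is to compare the number of proposal entries against the number of images in the annotations. The snippet below is a minimal sketch of that check, assuming the proposal .pkl is a list with one entry per image (as MMDetection's precomputed proposal files are) and using the default paths from the config only as placeholders for your own files.

```python
# Sanity check: the proposal file must contain one entry per image in the
# annotation file, in the same order. Paths are placeholders -- point them
# at your own annotation and proposal files.
import mmcv
from pycocotools.coco import COCO

ann_file = 'data/coco/annotations/instances_train2017.json'
proposal_file = 'data/coco/proposals/rpn_r50_fpn_1x_train2017.pkl'

coco = COCO(ann_file)
proposals = mmcv.load(proposal_file)

print('images in annotations:', len(coco.getImgIds()))
print('entries in proposals :', len(proposals))
# If these numbers differ, CustomDataset.__init__ will index past the end of
# the proposal list and raise "list index out of range".
```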

mukaiNO1 commented 2 years ago

Traceback (most recent call last):
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\custom.py", line 124, in __init__
    self.proposals = [self.proposals[i] for i in valid_inds]
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\custom.py", line 124, in <listcomp>
    self.proposals = [self.proposals[i] for i in valid_inds]
IndexError: list index out of range

mukaiNO1 commented 2 years ago

C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\utils\setup_env.py:33: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting OMP_NUM_THREADS environment variable for each process '
C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\utils\setup_env.py:43: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting MKL_NUM_THREADS environment variable for each process '
'tail' is not recognized as an internal or external command, operable program or batch file.
'head' is not recognized as an internal or external command, operable program or batch file.
fatal: not a git repository (or any of the parent directories): .git
2022-03-28 21:05:13,122 - mmdet - INFO - Environment info:

sys.platform: win32
Python: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:35:01) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1
NVCC: Not Available
GCC: n/a
PyTorch: 1.11.0
PyTorch compiling details: PyTorch built with:

TorchVision: 0.12.0
OpenCV: 4.5.5
MMCV: 1.4.7
MMCV Compiler: MSVC 191627045
MMCV CUDA Compiler: 11.1
MMDetection: 2.22.0+

2022-03-28 21:05:13,717 - mmdet - INFO - Distributed training: False
2022-03-28 21:05:14,313 - mmdet - INFO - Config:

model = dict(
    type='FastRCNN',
    backbone=dict(
        type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3),
        frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True, style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256,
        num_outs=5),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256, featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024,
            roi_feat_size=7, num_classes=1,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0.0, 0.0, 0.0, 0.0],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    train_cfg=dict(
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5,
                min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler', num=512, pos_fraction=0.25,
                neg_pos_ub=-1, add_gt_as_proposals=True),
            pos_weight=-1, debug=False)),
    test_cfg=dict(
        rcnn=dict(
            score_thr=0.05, nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadProposals', num_max_proposals=2000),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'proposals', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadProposals', num_max_proposals=None),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375], to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='ToTensor', keys=['proposals']),
            dict(type='ToDataContainer',
                 fields=[dict(key='proposals', stack=False)]),
            dict(type='Collect', keys=['img', 'proposals'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadProposals', num_max_proposals=2000),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(type='Normalize', mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375], to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect',
                 keys=['img', 'proposals', 'gt_bboxes', 'gt_labels'])
        ],
        proposal_file='data/coco/proposals/rpn_r50_fpn_1x_train2017.pkl'),
    val=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadProposals', num_max_proposals=None),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
                         std=[58.395, 57.12, 57.375], to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='ToTensor', keys=['proposals']),
                    dict(type='ToDataContainer',
                         fields=[dict(key='proposals', stack=False)]),
                    dict(type='Collect', keys=['img', 'proposals'])
                ])
        ],
        proposal_file='data/coco/proposals/rpn_r50_fpn_1x_val2017.pkl'),
    test=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadProposals', num_max_proposals=None),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
                         std=[58.395, 57.12, 57.375], to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='ToTensor', keys=['proposals']),
                    dict(type='ToDataContainer',
                         fields=[dict(key='proposals', stack=False)]),
                    dict(type='Collect', keys=['img', 'proposals'])
                ])
        ],
        proposal_file='data/coco/proposals/rpn_r50_fpn_1x_val2017.pkl'))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step', warmup='linear', warmup_iters=500, warmup_ratio=0.001,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
opencv_num_threads = 0
mp_start_method = 'fork'
work_dir = 'pinecone_fast_rcnn'
auto_resume = False
gpu_ids = [0]
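Two things stand out in this dump for training on a custom dataset: the data section still points at the stock COCO train2017/val2017 annotation and proposal files, and CocoDataset is used without a classes override even though the head has num_classes=1. Below is a minimal sketch of how the dataset section is typically rewritten for a custom single-class dataset; the paths and the class name are placeholders (the class name is only guessed from work_dir), and each proposal_file must have been generated from the very annotation file it is paired with.

```python
# Hypothetical override for a custom one-class dataset -- adjust names/paths.
classes = ('pinecone', )  # placeholder class name
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/my_dataset/annotations/train.json',
        img_prefix='data/my_dataset/train/',
        # proposals generated from train.json, not from COCO train2017
        proposal_file='data/my_dataset/proposals/train.pkl',
        pipeline=train_pipeline),
    val=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/my_dataset/annotations/val.json',
        img_prefix='data/my_dataset/val/',
        proposal_file='data/my_dataset/proposals/val.pkl',
        pipeline=test_pipeline),
    test=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/my_dataset/annotations/val.json',
        img_prefix='data/my_dataset/val/',
        proposal_file='data/my_dataset/proposals/val.pkl',
        pipeline=test_pipeline))
```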

2022-03-28 21:05:14,315 - mmdet - INFO - Set random seed to 240299280, deterministic: False
2022-03-28 21:05:14,503 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}
2022-03-28 21:05:14,503 - mmcv - INFO - load model from: torchvision://resnet50
2022-03-28 21:05:14,504 - mmcv - INFO - load checkpoint from torchvision path: torchvision://resnet50
2022-03-28 21:05:14,578 - mmcv - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2022-03-28 21:05:14,595 - mmdet - INFO - initialize FPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2022-03-28 21:05:14,609 - mmdet - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
loading annotations into memory...
Done (t=0.05s)
creating index...
index created!
Traceback (most recent call last):
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\custom.py", line 124, in __init__
    self.proposals = [self.proposals[i] for i in valid_inds]
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\custom.py", line 124, in <listcomp>
    self.proposals = [self.proposals[i] for i in valid_inds]
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools\train.py", line 209, in <module>
    main()
  File "tools\train.py", line 185, in main
    datasets = [build_dataset(cfg.data.train)]
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmdet\datasets\builder.py", line 81, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "C:\Users\mukai\anaconda3\envs\open-mmlab\lib\site-packages\mmcv\utils\registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
IndexError: CocoDataset: list index out of range
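For reference, the failing line filters the proposal list with image indices computed from the annotation file, so the crash is a pure length/ordering mismatch between the two files. The toy snippet below (not the real mmdet code, just a stripped-down illustration of what happens around custom.py line 124) reproduces the failure mode.

```python
# Toy reproduction of the failure in CustomDataset.__init__ (custom.py:124).
# Illustration only -- not the actual mmdet implementation.
data_infos = [dict(id=i, width=640, height=480) for i in range(100)]  # 100 images in YOUR annotation file
proposals = [[] for _ in range(80)]  # proposal file built for a DIFFERENT dataset with 80 images

# Indices of images that survive filtering (all of them, in this toy example).
valid_inds = list(range(len(data_infos)))

try:
    # Same shape as the failing line: proposals are filtered by annotation indices.
    proposals = [proposals[i] for i in valid_inds]
except IndexError as e:
    # The registry then wraps this as "CocoDataset: list index out of range".
    print('IndexError:', e)
```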