open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Having issues with how the classes are read by the model - assistance needed #10896

Open HaoLin97 opened 1 year ago

HaoLin97 commented 1 year ago

I am trying to train and test on a dataset that was converted to COCO format. The class numbering in the JSON annotation file matches that of the original label files, e.g. 0 is traffic light and 2 is car. The class names are also entered in the config file as a list in the same order as in the annotations. However, during training and evaluation I noticed that the model seems to get the ground truth labels jumbled up: the ground truth shows cars as traffic lights, etc., even though the detection labels look correct. The resulting mAP is very low, around 0.2 (see the attached screenshot).

Can anybody offer any insight as to the cause of the issue? Any solutions or suggestions would be greatly appreciated.
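To make the comparison concrete, here is a minimal sanity-check sketch (not part of my training code) that lists the category ids/names stored in the annotation JSON next to the classes tuple from my config; the annotation path is the one used by test_dataloader in the log below, and the note at the end is only my assumption about what the mismatch would look like.

# Minimal sanity check: list the category ids/names in the COCO json
# next to the classes tuple used in the mmdetection config.
from pycocotools.coco import COCO

ann_file = '/archive_db/Evaluation/Day_Eval/labels/Day_Eval.json'
config_classes = ('traffic light', 'traffic sign', 'car', 'pedestrian',
                  'bus', 'truck', 'rider', 'bicycle', 'motorcycle')

coco = COCO(ann_file)
cats = sorted(coco.loadCats(coco.getCatIds()), key=lambda c: c['id'])

for idx, cat in enumerate(cats):
    in_config = config_classes[idx] if idx < len(config_classes) else '<missing>'
    print(f"json id {cat['id']:>2}  json name {cat['name']:<15}  "
          f"config position {idx}: {in_config}")

# Assumption: if the name order in the json (sorted by id) does not match the
# order of config_classes, the ground-truth class names drawn by the visualizer
# could end up shifted, which is what my screenshot looks like.

Running this against Day_Eval.json should show at a glance whether the ids/names in the annotations line up with the classes tuple in the config.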


I have attached the evaluation log details below:

2023/09/07 13:14:33 - mmengine - INFO -

System environment:
    sys.platform: linux
    Python: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 2053858970
    GPU 0,1,2: Tesla T4
    CUDA_HOME: /usr
    NVCC: Cuda compilation tools, release 11.5, V11.5.119
    GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
    PyTorch: 2.0.1+cu117
    PyTorch compiling details: PyTorch built with:

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: 2053858970
    Distributed launcher: none
    Distributed training: False
    GPU number: 1

2023/09/07 13:14:33 - mmengine - INFO - Config:
model = dict(
    type='FasterRCNN',
    data_preprocessor=dict( type='DetDataPreprocessor', mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], bgr_to_rgb=True, pad_size_divisor=32),
    backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=( 0, 1, 2, 3, ), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch', init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict( type='FPN', in_channels=[ 256, 512, 1024, 2048, ], out_channels=256, num_outs=5),
    rpn_head=dict( type='RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict( type='AnchorGenerator', scales=[ 8, ], ratios=[ 0.5, 1.0, 2.0, ], strides=[ 4, 8, 16, 32, 64, ]), bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[ 0.0, 0.0, 0.0, 0.0, ], target_stds=[ 1.0, 1.0, 1.0, 1.0, ]), loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict( type='StandardRoIHead', bbox_roi_extractor=dict( type='SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[ 4, 8, 16, 32, ]), bbox_head=dict( type='Shared2FCBBoxHead', in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=9, bbox_coder=dict( type='DeltaXYWHBBoxCoder', target_means=[ 0.0, 0.0, 0.0, 0.0, ], target_stds=[ 0.1, 0.1, 0.2, 0.2, ]), reg_class_agnostic=False, loss_cls=dict( type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    train_cfg=dict( rpn=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, match_low_quality=True, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=-1, pos_weight=-1, debug=False), rpn_proposal=dict( nms_pre=2000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0), rcnn=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1), sampler=dict( type='RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False)),
    test_cfg=dict( rpn=dict( nms_pre=1000, max_per_img=1000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0), rcnn=dict( score_thr=0.05, nms=dict(type='nms', iou_threshold=0.5), max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = '/archive_db/Evaluation/Day_Eval/'
backend_args = None
train_pipeline = [ dict(type='LoadImageFromFile', backend_args=None), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', scale=( 1333, 800, ), keep_ratio=True), dict(type='RandomFlip', prob=0.5), dict(type='PackDetInputs'), ]
test_pipeline = [ dict(type='LoadImageFromFile', backend_args=None), dict(type='Resize', scale=( 1333, 800, ), keep_ratio=True), dict(type='LoadAnnotations', with_bbox=True), dict( type='PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ]
train_dataloader = dict( batch_size=2, num_workers=2, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=True), batch_sampler=dict(type='AspectRatioBatchSampler'), dataset=dict( type='CocoDataset', data_root='data/coco/', ann_file='annotations/instances_train2017.json', data_prefix=dict(img='train2017/'), filter_cfg=dict(filter_empty_gt=True, min_size=32), pipeline=[ dict(type='LoadImageFromFile', backend_args=None), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', scale=( 1333, 800, ), keep_ratio=True), dict(type='RandomFlip', prob=0.5), dict(type='PackDetInputs'), ], backend_args=None))
val_dataloader = dict( batch_size=1, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='CocoDataset', data_root='data/coco/', ann_file='annotations/instances_val2017.json', data_prefix=dict(img='val2017/'), test_mode=True, pipeline=[ dict(type='LoadImageFromFile', backend_args=None), dict(type='Resize', scale=( 1333, 800, ), keep_ratio=True), dict(type='LoadAnnotations', with_bbox=True), dict( type='PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None))
test_dataloader = dict( batch_size=16, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='CocoDataset', data_root='/archive_db/Evaluation/Day_Eval/images', ann_file='/archive_db/Evaluation/Day_Eval/labels/Day_Eval.json', data_prefix=dict(img=''), test_mode=True, pipeline=[ dict(type='LoadImageFromFile', backend_args=None), dict(type='Resize', scale=( 1333, 800, ), keep_ratio=True), dict(type='LoadAnnotations', with_bbox=True), dict( type='PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None, metainfo=dict( classes=( 'traffic light', 'traffic sign', 'car', 'pedestrian', 'bus', 'truck', 'rider', 'bicycle', 'motorcycle', ))))
val_evaluator = dict( type='CocoMetric', ann_file='data/coco/annotations/instances_val2017.json', metric='bbox', format_only=False, backend_args=None)
test_evaluator = dict( type='CocoMetric', ann_file='/archive_db/Evaluation/Day_Eval/labels/Day_Eval.json', metric='bbox', format_only=False, backend_args=None)
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [ dict( type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500), dict( type='MultiStepLR', begin=0, end=12, by_epoch=True, milestones=[ 8, 11, ], gamma=0.1), ]
optim_wrapper = dict( type='AmpOptimWrapper', optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
auto_scale_lr = dict(enable=False, base_batch_size=16)
default_scope = 'mmdet'
default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=50), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict(type='CheckpointHook', interval=1), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict( type='DetVisualizationHook', draw=True, test_out_dir='d10_day_eval'))
env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl'))
vis_backends = [ dict(type='LocalVisBackend'), ]
visualizer = dict( type='DetLocalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), ], name='visualizer')
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = 'work_dirs/d10n0/epoch_48.pth'
resume = False
metainfo = dict( classes=( 'traffic light', 'traffic sign', 'car', 'pedestrian', 'bus', 'truck', 'rider', 'bicycle', 'motorcycle', ))
launcher = 'none'
work_dir = './work_dirs/day_eval'

2023/09/07 13:14:36 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2023/09/07 13:14:36 - mmengine - INFO - Hooks will be executed in the following order:

before_run: (VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook


before_train: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook


before_train_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook


before_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook


after_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook


after_train_epoch: (NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook


before_val_epoch: (NORMAL ) IterTimerHook


before_val_iter: (NORMAL ) IterTimerHook


after_val_iter: (NORMAL ) IterTimerHook
(NORMAL ) DetVisualizationHook
(BELOW_NORMAL) LoggerHook


after_val_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook


after_train: (VERY_LOW ) CheckpointHook


before_test_epoch: (NORMAL ) IterTimerHook


before_test_iter: (NORMAL ) IterTimerHook


after_test_iter: (NORMAL ) IterTimerHook
(NORMAL ) DetVisualizationHook
(BELOW_NORMAL) LoggerHook


after_test_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook


after_run: (BELOW_NORMAL) LoggerHook


2023/09/07 13:14:38 - mmengine - INFO - Load checkpoint from work_dirs/d10n0/epoch_48.pth
2023/09/07 13:19:52 - mmengine - INFO - Epoch(test) [ 50/284] eta: 0:24:08 time: 6.1888 data_time: 4.4770 memory: 9141
2023/09/07 13:25:06 - mmengine - INFO - Epoch(test) [100/284] eta: 0:19:06 time: 6.2698 data_time: 4.5348 memory: 9141
2023/09/07 13:30:20 - mmengine - INFO - Epoch(test) [150/284] eta: 0:13:57 time: 6.2919 data_time: 4.5384 memory: 9141
2023/09/07 13:35:36 - mmengine - INFO - Epoch(test) [200/284] eta: 0:08:46 time: 6.3141 data_time: 4.5600 memory: 9141
2023/09/07 13:40:50 - mmengine - INFO - Epoch(test) [250/284] eta: 0:03:33 time: 6.2834 data_time: 4.5339 memory: 9141
2023/09/07 13:44:23 - mmengine - INFO - Evaluating bbox...
2023/09/07 13:44:44 - mmengine - INFO - bbox_mAP_copypaste: 0.135 0.221 0.146 0.047 0.147 0.209
2023/09/07 13:44:44 - mmengine - INFO - Epoch(test) [284/284] coco/bbox_mAP: 0.1350 coco/bbox_mAP_50: 0.2210 coco/bbox_mAP_75: 0.1460 coco/bbox_mAP_s: 0.0470 coco/bbox_mAP_m: 0.1470 coco/bbox_mAP_l: 0.2090 data_time: 4.5315 time: 6.2701
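One thing I notice when re-reading the config dump above: only the test_dataloader dataset carries metainfo with the nine custom classes, while the train_dataloader and val_dataloader entries still show the default data/coco paths without it. For reference, a minimal sketch of how the custom classes are attached to a dataset entry in this config style (the paths here are placeholders, not my real layout; the pipeline is the one already shown above):

# Sketch only: a dataset entry in the same config style as the dump above,
# with the custom classes attached through metainfo.
# All paths are placeholders, not my real layout.
classes = ('traffic light', 'traffic sign', 'car', 'pedestrian',
           'bus', 'truck', 'rider', 'bicycle', 'motorcycle')

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs'),
]

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    dataset=dict(
        type='CocoDataset',
        metainfo=dict(classes=classes),
        data_root='/path/to/my_dataset/',      # placeholder
        ann_file='labels/train.json',          # placeholder
        data_prefix=dict(img='images/'),       # placeholder
        pipeline=train_pipeline))

This mirrors how the test_dataloader in the dump above already passes the classes through metainfo.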

HaoLin97 commented 1 year ago

@hhaAndroid

Brym-Gyimah commented 11 months ago

Hey, how did you solve this problem?

ccomkhj commented 5 months ago

I have also faced the same problem from time to time.