open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

Getting the following data concatenation error during the gflv1('./gfl_r50_fpn_ms-2x_coco.py') training in mmdet 3.3.0 #11828

Open shubhamtheds opened 2 months ago

shubhamtheds commented 2 months ago

TypeError: ConcatDataset.__init__() got an unexpected keyword argument 'data_root'
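The failure mode can be reproduced without mmdet at all. In mmdet 3.x, `ConcatDataset` resolves to mmengine's wrapper, whose `__init__` (as far as I can tell) accepts only `datasets`, `lazy_init`, and `ignore_keys`; any extra key that the config merge carries over from the `_base_` file, such as `data_root`, therefore raises a `TypeError`. A minimal sketch using a stand-in class with that assumed signature:

```python
# Stand-in for mmengine.dataset.ConcatDataset (assumed signature:
# datasets, lazy_init, ignore_keys) to show why an inherited key fails.
class ConcatDataset:
    def __init__(self, datasets, lazy_init=False, ignore_keys=None):
        self.datasets = datasets


try:
    # `data_root` leaks in from the _base_ config's dataset dict
    ConcatDataset(datasets=[], data_root='./Dataset/')
except TypeError as e:
    msg = str(e)
    print(msg)  # ... got an unexpected keyword argument 'data_root'
```

The class name and keyword are from the traceback above; the exact mmengine signature is an assumption based on the error message.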

We are trying to train GFLv1 with multiple datasets; the config we use is below:


```python
_base_ = './gfl_r50_fpn_ms-2x_coco.py'

# Dataset settings
dataset_type = 'CocoDataset'
backend_args = None
data_root = './Dataset/'

# Metadata
metainfo = {
    'classes': ('Object', ),
    'palette': [(220, 20, 60), ]
}

# Pipelines
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]

test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor'))
]

# Training datasets
dataset1 = dict(
    type='CocoDataset',
    ann_file='./Dataset/data_1/train_coco.json',
    img_prefix='./Dataset/data_1/tagged_raw_images/',
    pipeline=train_pipeline,
    metainfo=metainfo,
    # backend_args=backend_args
)

dataset2 = dict(
    type='CocoDataset',
    ann_file='./Dataset/data_2/train_coco.json',
    img_prefix='./Dataset/data_2/tagged_raw_images/',
    pipeline=train_pipeline,
    metainfo=metainfo,
    # backend_args=backend_args
)

# Validation datasets
val_dataset1 = dict(
    type=dataset_type,
    ann_file='./Dataset/data_1/val_coco.json',
    img_prefix='./Dataset/data_1/tagged_raw_images/',
    metainfo=metainfo,
    backend_args=backend_args,
    test_mode=True)

val_dataset2 = dict(
    type=dataset_type,
    ann_file='./Dataset/data_2/val_coco.json',
    img_prefix='./Dataset/data_2/tagged_raw_images/',
    metainfo=metainfo,
    backend_args=backend_args,
    test_mode=True)

train_dataset = dict(
    type='ConcatDataset',
    datasets=[dataset1, dataset2],
    separate_eval=False)

# Data loaders
train_dataloader = dict(
    batch_size=4,
    num_workers=1,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=train_dataset
    # dataset=dict(
    #     type='ConcatDataset',
    #     ignore_keys=['dataset_type'],
    #     datasets=[dataset1, dataset2]
    # )
)

val_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='ConcatDataset',
        ignore_keys=['dataset_type'],
        datasets=[val_dataset1, val_dataset2]))

test_dataloader = val_dataloader

# Evaluators
val_evaluator = dict(
    type='CocoMetric',
    ann_file='./Dataset/val_coco.json',
    metric='bbox',
    format_only=False,
    backend_args=backend_args)

test_evaluator = val_evaluator

# Hooks
default_hooks = dict(
    checkpoint=dict(interval=1, save_best='auto', max_keep_ckpts=3))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0002,
        update_buffers=True,
        priority=49)
]
```
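For reference, a hypothetical fix sketch (untested, and not confirmed by the maintainers in this thread): in mmdet 3.x the per-dataset image location is given as `data_prefix=dict(img=...)` rather than the 2.x `img_prefix`, `separate_eval` is a 2.x-era key, and the `ConcatDataset` wrapper should receive only `datasets` (plus optionally `ignore_keys`). Adding `_delete_=True` to the dataloader's `dataset` dict should stop keys such as `data_root` from the `_base_` config being merged into the wrapper, which is the direct cause of the `TypeError` above:

```python
# Hypothetical rewrite of the training side only; paths and class names
# are taken from the config above, everything else is an assumption.
dataset1 = dict(
    type='CocoDataset',
    ann_file='./Dataset/data_1/train_coco.json',
    data_prefix=dict(img='./Dataset/data_1/tagged_raw_images/'),  # 3.x form of img_prefix
    metainfo=dict(classes=('Object', )),
    pipeline=[],  # substitute the real train_pipeline here
)

train_dataloader = dict(
    dataset=dict(
        _delete_=True,        # discard inherited keys (data_root, ann_file, ...)
        type='ConcatDataset',
        datasets=[dataset1],  # add dataset2 the same way; no separate_eval in 3.x
    ))
```

The `_delete_=True` mechanism is a standard mmengine config feature for replacing, rather than merging with, an inherited dict.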

sdurmustalipoglu commented 2 months ago

Did you solve the problem?

shubhamtheds commented 1 month ago

Not yet, I am still facing the same issue.