open-mmlab / mmpose

OpenMMLab Pose Estimation Toolbox and Benchmark.
https://mmpose.readthedocs.io/en/latest/
Apache License 2.0

The issue is about the configuration file. #3064

Open fachengxionglxq opened 3 months ago

fachengxionglxq commented 3 months ago

📚 The doc issue

data loaders

train_dataloader = dict(
    batch_size=64,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_train2017.json',
        data_prefix=dict(img='train2017/'),
        pipeline=train_pipeline,
    ))
val_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_val2017.json',
        bbox_file='data/coco/person_detection_results/'
        'COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=val_pipeline,
    ))
test_dataloader = val_dataloader

evaluators

val_evaluator = dict(
    type='CocoMetric',
    ann_file=data_root + 'annotations/person_keypoints_val2017.json')
test_evaluator = val_evaluator

Suggest a potential alternative/fix

If you want to obtain performance on the COCO test-dev2017 set, you can change the test_evaluator setting in the configuration file so that it no longer reuses val_evaluator but instead points to a dedicated test-dev2017 evaluator (e.g. a coco_test-dev2017_evaluator or a similar evaluator setting), as sketched below.
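For reference, here is a minimal sketch of what such a test-dev2017 setup could look like. The annotation file name, the bbox file name, and the outfile_prefix path below are assumptions for illustration, not settings confirmed by this issue; since test-dev2017 ground truth is withheld, the evaluator can only dump predictions (format_only) for submission to the COCO evaluation server rather than compute metrics locally.

test_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        # image_info_test-dev2017.json lists images only, no keypoint labels
        ann_file='annotations/image_info_test-dev2017.json',
        # assumed path to pre-computed person detections for test-dev2017
        bbox_file='data/coco/person_detection_results/'
        'COCO_test-dev2017_detections_AP_H_609_person.json',
        data_prefix=dict(img='test2017/'),
        test_mode=True,
        pipeline=val_pipeline,
    ))

test_evaluator = dict(
    type='CocoMetric',
    ann_file=data_root + 'annotations/image_info_test-dev2017.json',
    # only convert predictions to the COCO keypoint result format;
    # no metrics can be computed locally without ground truth
    format_only=True,
    # hypothetical output location for the generated result file
    outfile_prefix='work_dirs/coco_test_dev2017/test')

The keypoint result json produced under outfile_prefix would then be zipped and uploaded to the COCO evaluation server to obtain the test-dev2017 numbers.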