open-mmlab / mmpose

OpenMMLab Pose Estimation Toolbox and Benchmark.
https://mmpose.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Didn't find any Halpe dataset config... #2308

Closed luohao123 closed 1 year ago

luohao123 commented 1 year ago

Prerequisite

Environment

[Bug] Didn't find any Halpe dataset config...

Reproduces the problem - code sample

Reproduces the problem - command or script

Reproduces the problem - error message

Additional information

Please give some practical suggestions for training on Halpe...

LareinaM commented 1 year ago

Hi, you are currently on the model zoo page. If you are on version 1.x, you can refer to the dataset documentation here. Please check your MMPose version and the documentation version, and read through our guide before training with MMPose.

luohao123 commented 1 year ago

@LareinaM I still didn't find any example configs to reuse...

Tau-J commented 1 year ago

Hi @luohao123, currently the training configs have not been migrated from 0.x. Would you like to create a PR to support them? Just refer to the configs for coco-wholebody.

luohao123 commented 1 year ago

@Tau-J I'd like to, but I'm not very familiar with how to start. If you can provide a whole-body Halpe config, I'll help write a 26-keypoint one. Currently mmpose really lacks 26-keypoint models... (In the real world, we need hands and face, but not the full 133-keypoint whole-body set; it's too heavy.)

Please consider adding at least Halpe whole-body support, since it was supported before.

Tau-J commented 1 year ago

Thanks for your feedback, we'll consider supporting 26-keypoint models in RTMPose. For Halpe, I suggest having a look at the coco-wholebody configs; you can easily write a Halpe version by modifying the number of keypoints and the annotation files.
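The deltas Tau-J describes could be sketched as follows, starting from a coco-wholebody top-down config. This is an untested sketch, not a definitive config: the paths, the 136-keypoint count (26 body + 68 face + 42 hand for Halpe full-body), and which fields a given base config exposes are assumptions to verify.

```python
# Sketch of a Halpe variant of a coco-wholebody config (assumptions noted above).

dataset_type = 'HalpePoseDataset'    # instead of CocoWholeBodyDataset
data_root = 'data/halpe/'

# Halpe full-body annotates 136 keypoints; coco-wholebody uses 133.
num_keypoints = 136

# Every place the base config hard-codes the keypoint count must change,
# typically the head's output channels (field name depends on the model):
head_update = dict(out_channels=num_keypoints)  # hypothetical merge target

# Annotation files swapped for the Halpe ones:
train_ann_file = 'annotations/halpe_train_v1.json'
val_ann_file = 'annotations/halpe_val_v1.json'
```

Everything else (codec, pipelines, optimizer, schedule) can usually stay as in the coco-wholebody base.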

luohao123 commented 1 year ago

@Tau-J Would you help take a look at whether I missed anything in this configuration?

# base dataset settings
dataset_type = 'HalpePoseDataset'
data_mode = 'topdown'
data_root = 'data/halpe/'

# data loaders
train_dataloader = dict(
    batch_size=64,
    num_workers=10,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/halpe_train_v1.json',
        data_prefix=dict(img='hico_20160224_det/images/train2015'),
        pipeline=train_pipeline,
    ))
val_dataloader = dict(
    batch_size=32,
    num_workers=10,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/halpe_val_v1.json',
        bbox_file='person_detection_results/COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=val_pipeline,
    ))
test_dataloader = val_dataloader

# hooks
default_hooks = dict(
    checkpoint=dict(
        # the key must match the evaluator's metric prefix;
        # CocoWholeBodyMetric reports its results under 'coco-wholebody/'
        save_best='coco-wholebody/AP', rule='greater', max_keep_ckpts=1))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0002,
        update_buffers=True,
        priority=49),
    dict(
        type='mmdet.PipelineSwitchHook',
        switch_epoch=max_epochs - stage2_num_epochs,
        switch_pipeline=train_pipeline_stage2)
]

# evaluators
val_evaluator = dict(
    type='CocoWholeBodyMetric',
    # ann_file must point at the ground-truth annotations, not the
    # detection-results json (that one belongs in bbox_file above);
    # use_area=False, iou_type='keypoints_crowd' and prefix='crowdpose'
    # were CrowdPose leftovers and are dropped here
    ann_file=data_root + 'annotations/halpe_val_v1.json')
test_evaluator = val_evaluator

Where should I insert a KeypointConverter?
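For reference, in MMPose 1.x `KeypointConverter` is a dataset transform, so it typically sits at the front of the dataset's `pipeline`, before the usual augmentation and packing transforms, remapping annotation keypoints into the model's target layout. A minimal sketch, assuming the `num_keypoints`/`mapping` arguments used by the combined-dataset configs; the mapping pairs below are illustrative placeholders, not a real Halpe mapping:

```python
# Sketch: a KeypointConverter placed inside the dataset's pipeline,
# ahead of the regular train_pipeline transforms.
halpe_to_target = [
    # (index_in_halpe_annotation, index_in_model_skeleton) -- placeholders
    (0, 0),
    (1, 1),
    # ... one pair per keypoint the model should keep
]

dataset_halpe = dict(
    type='HalpePoseDataset',
    data_root='data/halpe/',
    data_mode='topdown',
    ann_file='annotations/halpe_train_v1.json',
    pipeline=[
        dict(
            type='KeypointConverter',
            num_keypoints=26,          # size of the target skeleton
            mapping=halpe_to_target),
        # ...the usual train_pipeline transforms follow here
    ],
)
# the converted dataset is then consumed by train_dataloader as usual
```

Unmapped source keypoints are discarded by the converter, which is what makes a 26-keypoint subset of the Halpe annotations possible.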