open-mmlab / mmpose

OpenMMLab Pose Estimation Toolbox and Benchmark.
https://mmpose.readthedocs.io/en/latest/
Apache License 2.0

[Docs] RTMPose-t: different results from the onnxruntime Python inference demo and the SDK Python API after model deployment #2888

Closed: MianMianMeow closed this issue 9 months ago

MianMianMeow commented 10 months ago

šŸ“š The doc issue

Hi, thanks for your great work. After deployment, I'm trying to run inference with ONNX Runtime in Python using an RTMPose-t model trained on my own dataset, but the ONNX Runtime inference demo at https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/examples/onnxruntime/main.py gives a wrong result that differs from the prediction of the torch model. However, when I use the SDK Python API example from https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose with the same deployed model, the result matches the torch model's prediction.
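
For reference, a minimal sketch of the two inference routes being compared (the model and image paths are assumptions taken from the deploy command below):

# Route 1: the standalone ONNX Runtime demo script
# (projects/rtmpose/examples/onnxruntime/main.py), which does its own
# preprocessing in NumPy/OpenCV.
#
# Route 2: the mmdeploy SDK Python API, which replays the preprocessing
# pipeline recorded in deploy.json / pipeline.json:
import cv2
from mmdeploy_runtime import PoseDetector

img = cv2.imread('test_images/kp_input2.jpg')
detector = PoseDetector(
    model_path='onnx_models/rtmpose-t_card', device_name='cpu', device_id=0)
result = detector(img)  # (num_person, num_keypoints, 3): x, y, score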

Below is my config for training:

_base_ = ['mmpose::_base_/default_runtime.py', '_base_/datasets/card_keypoints.py']
num_keypoints = 4
input_size = (256, 256)

# runtime
max_epochs = 270
stage2_num_epochs = 40
base_lr = 4e-3 
train_batch_size = 128
val_batch_size = 32

load_from = 'work_dirs/rtmpose-t_ina/best_coco_AP_epoch_250_bk.pth'

train_cfg = dict(max_epochs=max_epochs, val_interval=5)
randomness = dict(seed=21)

optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=base_lr, weight_decay=0.),
    clip_grad=dict(max_norm=35, norm_type=2),
    paramwise_cfg=dict(
        norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True))

# learning rate
param_scheduler = [
    dict(
        type='LinearLR',
        start_factor=1.0e-5,
        by_epoch=False,
        begin=0,
        end=1000),
    dict(
        type='CosineAnnealingLR',
        eta_min=base_lr * 0.05,
        begin=max_epochs // 2,
        end=max_epochs,
        T_max=max_epochs // 2,
        by_epoch=True,
        convert_to_iter_based=True),
]

# automatically scaling LR based on the actual training batch size
auto_scale_lr = dict(base_batch_size=512)

# codec settings
codec = dict(
    type='SimCCLabel',
    input_size=input_size,
    sigma=(6.66, 6.66),
    simcc_split_ratio=2.0,
    normalize=False,
    use_dark=False)

# model settings
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        _scope_='mmdet',
        type='CSPNeXt',
        arch='P5',
        expand_ratio=0.5,
        deepen_factor=0.167,
        widen_factor=0.375,
        out_indices=(4, ),
        channel_attention=True,
        norm_cfg=dict(type='SyncBN'),
        act_cfg=dict(type='SiLU'),
        init_cfg=dict(
            type='Pretrained',
            prefix='backbone.',
            checkpoint='https://download.openmmlab.com/mmdetection/v3.0/'
            'rtmdet/cspnext_rsb_pretrain/cspnext-tiny_imagenet_600e-3a2dd350.pth'  # noqa
        )),
    head=dict(
        type='RTMCCHead',
        in_channels=384,
        out_channels=num_keypoints,
        input_size=codec['input_size'],
        in_featuremap_size=tuple([s // 32 for s in codec['input_size']]),
        simcc_split_ratio=codec['simcc_split_ratio'],
        final_layer_kernel_size=7,
        gau_cfg=dict(
            hidden_dims=256,
            s=128,
            expansion_factor=2,
            dropout_rate=0.,
            drop_path=0.,
            act_fn='SiLU',
            use_rel_bias=False,
            pos_enc=False),
        loss=dict(
            type='KLDiscretLoss',
            use_target_weight=True,
            beta=10.,
            label_softmax=True),
        decoder=codec),
    test_cfg=dict(flip_test=True, ))

# base dataset settings
dataset_type = 'CocoDataset'
data_mode = 'topdown'
data_root = '/home/jovyan/mmpose/data/ina_card_aug/'

backend_args = dict(backend='local')

# pipelines
train_pipeline = [
    dict(type='LoadImage', backend_args=backend_args),
    dict(type='GetBBoxCenterScale', padding=1.5),
    dict(type='RandomFlip', direction='diagonal'), 
#     dict(type='RandomHalfBody'), 
    dict(
        type='RandomBBoxTransform', scale_factor=[0.9, 1.1], rotate_factor=90),

    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='mmdet.YOLOXHSVRandomAug'),
    dict(
        type='Albumentation',
        transforms=[
            dict(type='Blur', p=0.1),
            dict(type='MedianBlur', p=0.1),
            dict(
                type='CoarseDropout',
                max_holes=1,
                max_height=0.2,
                max_width=0.2,
                min_holes=1,
                min_height=0.05,
                min_width=0.05,
                p=0.5),
        ]),
    dict(type='GenerateTarget', encoder=codec),
    dict(type='PackPoseInputs')
]
val_pipeline = [
    dict(type='LoadImage', backend_args=backend_args),
    dict(type='GetBBoxCenterScale'), 
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='PackPoseInputs')
]

train_pipeline_stage2 = [
    dict(type='LoadImage', backend_args=backend_args),
    dict(type='GetBBoxCenterScale', padding=1.5), 
    dict(type='RandomFlip', direction='diagonal'),
#     dict(type='RandomHalfBody'),
    dict(
        type='RandomBBoxTransform',
        shift_factor=0.,
        scale_factor=[0.9, 1.1],
        rotate_factor=60),

    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='mmdet.YOLOXHSVRandomAug'),
    dict(
        type='Albumentation',
        transforms=[
            dict(type='Blur', p=0.05),
            dict(type='MedianBlur', p=0.05),
            dict(
                type='CoarseDropout',
                max_holes=1,
                max_height=0.2,
                max_width=0.2,
                min_holes=1,
                min_height=0.05,
                min_width=0.05,
                p=0.2),
        ]),
    dict(type='GenerateTarget', encoder=codec),
    dict(type='PackPoseInputs')
]

# data loaders
train_dataloader = dict(
    batch_size=train_batch_size,
    num_workers=10,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/train.json',
        data_prefix=dict(img='images/Train'),
        pipeline=train_pipeline,
        metainfo=dict(from_file='configs/_base_/datasets/card_keypoints.py')
    ))
val_dataloader = dict(
    batch_size=val_batch_size,
    num_workers=4,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/val.json',
        data_prefix=dict(img='images/Validation'),
        test_mode=True,
        pipeline=val_pipeline,
        metainfo=dict(from_file='configs/_base_/datasets/card_keypoints.py')
    ))
test_dataloader = val_dataloader

# hooks
default_hooks = dict(
    checkpoint=dict(save_best='coco/AP', rule='greater', max_keep_ckpts=1))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0002,
        update_buffers=True,
        priority=49),
    dict(
        type='mmdet.PipelineSwitchHook',
        switch_epoch=max_epochs - stage2_num_epochs,
        switch_pipeline=train_pipeline_stage2)
]

# evaluators
val_evaluator = dict(
    type='CocoMetric',
    ann_file=data_root + 'annotations/val.json',
)
test_evaluator = val_evaluator
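
Since the mismatch ultimately comes down to how the raw model outputs are handled outside the SDK, here is a minimal sketch (not the exact demo code) of how the SimCC head's outputs decode back to keypoints, matching simcc_split_ratio=2.0 from the codec above:

import numpy as np

def decode_simcc(simcc_x, simcc_y, split_ratio=2.0):
    """Decode SimCC outputs of shape (K, W*ratio) and (K, H*ratio)
    into per-keypoint (x, y) locations in model-input coordinates."""
    x = np.argmax(simcc_x, axis=1) / split_ratio
    y = np.argmax(simcc_y, axis=1) / split_ratio
    # One simple confidence choice: the smaller of the two per-axis maxima.
    scores = np.minimum(simcc_x.max(axis=1), simcc_y.max(axis=1))
    return np.stack([x, y], axis=-1), scores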

Below is my model deployment command:

python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmpose/pose-detection_simcc_onnxruntime_dynamic_card.py \
    mmpose/configs/rtmpose-t_ina.py \
    mmpose/published_models/rtmpose_card.pth \
    test_images/kp_input2.jpg \
    --work-dir onnx_models/rtmpose-t_card \
    --device cpu \
    --dump-info  
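
After deployment, the work dir should contain the exported model (end2end.onnx by default, plus deploy.json/pipeline.json from --dump-info); a quick sanity check of its inputs and outputs with ONNX Runtime might look like this:

import onnxruntime as ort

sess = ort.InferenceSession(
    'onnx_models/rtmpose-t_card/end2end.onnx',
    providers=['CPUExecutionProvider'])
# Expect one image input (e.g. 1x3x256x256 for this config) and the
# two SimCC output branches.
print([(i.name, i.shape) for i in sess.get_inputs()])
print([(o.name, o.shape) for o in sess.get_outputs()])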

Suggest a potential alternative/fix

No response

Ben-Louis commented 10 months ago

The preprocessing step in https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/examples/onnxruntime/main.py is equivalent to the transform dict(type='GetBBoxCenterScale', padding=1.25). However, in the val_pipeline of your config the padding factor is left at its default value of 1.0. You can try changing the factor to 1.0 in the following line and running again: https://github.com/open-mmlab/mmpose/blob/efe09cd5268d4d6b21100334fbf2947ef36fc7db/projects/rtmpose/examples/onnxruntime/main.py#L49
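
To illustrate why the padding factor matters, here is a minimal sketch of the bbox-to-center/scale conversion (similar to the demo's helper at the line linked above); the crop fed to the model grows with padding:

import numpy as np

def bbox_xyxy2cs(bbox, padding=1.25):
    """Convert an (x1, y1, x2, y2) bbox to a center and a padded scale."""
    x1, y1, x2, y2 = bbox
    center = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
    scale = np.array([x2 - x1, y2 - y1]) * padding
    return center, scale

bbox = (100, 100, 300, 500)              # hypothetical card bbox
print(bbox_xyxy2cs(bbox, padding=1.25))  # crop 25% larger than the bbox
print(bbox_xyxy2cs(bbox, padding=1.0))   # crop exactly the bbox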

MianMianMeow commented 10 months ago

Awesome! This solved my problem exactly. Thanks a lot.

MianMianMeow commented 9 months ago

Sorry to reopen this issue. I noticed that even after changing padding=1.25 to 1.0 to match my config, some images still get wrong results, although inference via the SDK Python API with the same ONNX model remains correct.

I also tried https://github.com/Tau-J/rtmlib.git:

import cv2
from rtmlib.tools.pose_estimation.rtmpose import RTMPose

rtmpose = RTMPose(onnx_model='models/rtmpose_card.onnx',
                  model_input_size=(256, 256))

img = cv2.imread('images/RU1IQW9KMFNaaGdy20240122.jpg')
# with an empty bbox list, rtmlib falls back to the whole image
keypoints, scores = rtmpose(img, bboxes=[])

which still gives the wrong result, identical to the output of https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/examples/onnxruntime/main.py.

I really appreciate your help.

MianMianMeow commented 9 months ago

Solved. The model expects an RGB input, but cv2.imread loads the image as BGR. Just adding

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

at line 443 in https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/examples/onnxruntime/main.py solves my problem.
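
In context, a minimal sketch of the fix (bgr_to_rgb=True in the training config's data_preprocessor is why the deployed model expects RGB; the image path is a placeholder):

import cv2

img = cv2.imread('test_images/kp_input2.jpg')  # OpenCV returns BGR
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # match the RGB order used in training
# ... continue with the demo's preprocessing (crop, resize, normalize) ...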