open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Converting to ONNX: Dynamic batch size on input gives all dynamic axes on output. #2731

Open erogodsky opened 6 months ago

erogodsky commented 6 months ago

Describe the bug

With static inputs, all output shapes come out right (screenshot 2024-04-08 142201). But if I define dynamic_axes as follows, the output shape is wrong (screenshot 2024-04-08 142445): every output axis is exported as dynamic, whereas the output should be [batch_size, 3, 112, 112].

Reproduction

ONNX config:

onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=16,
    save_file='end2end.onnx',
    input_names=['INPUT'],
    output_names=['OUTPUT'],
    input_shape=[112, 112],
    optimize=True,
    dynamic_axes={'INPUT': {0: 'batch_size'},
                  'OUTPUT': {0: 'batch_size'}})

backend_config = dict(
    type='onnxruntime',
    precision='fp16',
    common_config=dict(
        min_positive_val=1e-7,
        max_finite_val=1e4,
        keep_io_types=True,
        disable_shape_infer=False,
        op_block_list=None,
        node_block_list=None))

codebase_config = dict(type='mmseg', task='Segmentation', with_argmax=False)
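
For reference, a quick way to see which axes actually ended up dynamic in the exported graph is to read its inputs/outputs directly. A minimal sketch, assuming the file is the end2end.onnx written by the onnx_config above:

# Sketch: print the symbolic/static dims of every graph input and output.
# 'end2end.onnx' is the save_file from the onnx_config above.
import onnx

model = onnx.load('end2end.onnx')
for value_info in list(model.graph.input) + list(model.graph.output):
    dims = [d.dim_param or d.dim_value
            for d in value_info.type.tensor_type.shape.dim]
    print(value_info.name, dims)

# Expected with this config (UNet in_channels=1, 3 classes):
#   INPUT  ['batch_size', 1, 112, 112]
#   OUTPUT ['batch_size', 3, 112, 112]
# Observed bug: all OUTPUT axes are exported as dynamic dimensions.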

Model config:

default_scope = 'mmseg'

DATA_ROOT = "path/to/data"
IMG_SIZE = (112, 112)
BATCH_SIZE = 64
VAL_BATCH_SIZE = BATCH_SIZE
N_CLASSES = 3

dataset_type = 'BaseSegDataset'
classes = ['background', 'rcSleeperCrack', 'rcSleeperChipped']
palette = [[128, 128, 128], [128, 0, 0], [0, 128, 0]]
class_weights = [1.0, 71.02222690458944, 152.7808631308181]

data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[127.69334815],
    std=[84.08337247],
    bgr_to_rgb=False,
    pad_val=0,
    size=IMG_SIZE,
    seg_pad_val=255)
model = dict(
    type='EncoderDecoder',
    data_preprocessor=data_preprocessor,
    pretrained=None,
    backbone=dict(
        type='UNet',
        in_channels=1,
        base_channels=64,
        num_stages=5,
        strides=(1, 1, 1, 1, 1),
        enc_num_convs=(2, 2, 2, 2, 2),
        dec_num_convs=(2, 2, 2, 2),
        downsamples=(True, True, True, True),
        enc_dilations=(1, 1, 1, 1, 1),
        dec_dilations=(1, 1, 1, 1),
        with_cp=False,
        conv_cfg=None,
        act_cfg=dict(type='ReLU'),
        upsample_cfg=dict(type='InterpConv'),
        norm_eval=False),
    decode_head=dict(
        type='ASPPHead',
        in_channels=64,
        in_index=4,
        channels=16,
        dilations=(1, 12, 24, 36),
        dropout_ratio=0.1,
        num_classes=N_CLASSES,
        align_corners=False,
        loss_decode=[
            dict(type='OhemCrossEntropy',
                 loss_weight=1.0),
            dict(type='DiceLoss',
                 loss_weight=3.0),
        ]),
    auxiliary_head=[
        dict(
            type='FCNHead',
            in_channels=128,
            in_index=3,
            channels=64,
            num_convs=1,
            concat_input=False,
            dropout_ratio=0.1,
            num_classes=N_CLASSES,
            align_corners=False,
            loss_decode=[
                dict(type='OhemCrossEntropy',
                     loss_weight=0.5),
            ]),
        dict(
            type='FCNHead',
            in_channels=256,
            in_index=2,
            channels=64,
            num_convs=1,
            concat_input=False,
            dropout_ratio=0.1,
            num_classes=N_CLASSES,
            align_corners=False,
            loss_decode=[
                dict(type='OhemCrossEntropy',
                     # class_weight=class_weights,
                     loss_weight=0.5),
            ]),
        dict(
            type='FCNHead',
            in_channels=512,
            in_index=1,
            channels=64,
            num_convs=1,
            concat_input=False,
            dropout_ratio=0.1,
            num_classes=N_CLASSES,
            align_corners=False,
            loss_decode=[
                dict(type='OhemCrossEntropy',
                     # class_weight=class_weights,
                     loss_weight=0.5),
            ]),
    ],
    # model training and testing settings
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))

# data
train_pipeline = [
    dict(type='LoadImageFromFile',
         color_type="grayscale"),
    dict(
        type='LoadAnnotations',
    ),
    dict(type='RandomFlip', prob=0.5,
         direction=['horizontal', 'vertical']),
    dict(type='PhotoMetricDistortion',
         contrast_range=(0.8, 1.2),
         brightness_delta=25),
    dict(type='Resize', scale=IMG_SIZE, keep_ratio=True),  # Pipeline that resizes the images
    dict(type='PackSegInputs')
]

val_pipeline = [
    dict(type='LoadImageFromFile',
         color_type="grayscale"),
    dict(
        type='LoadAnnotations',
    ),
    dict(type='Resize', scale=IMG_SIZE, keep_ratio=True),  # Pipeline that resizes the images
    dict(type='PackSegInputs')
]

test_pipeline = val_pipeline

train_dataloader = dict(
    batch_size=BATCH_SIZE,
    num_workers=4,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(  # training data sampler
        type='DefaultSampler',
        shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=DATA_ROOT,
        data_prefix=dict(img_path='img_dir/train',
                         seg_map_path='ann_dir/train'),
        # reduce_zero_label=True,
        metainfo=dict(classes=classes, palette=palette),
        pipeline=train_pipeline,
    )
)

val_dataloader = dict(
    batch_size=VAL_BATCH_SIZE,
    num_workers=4,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(
        type='DefaultSampler',
        shuffle=False),  # not shuffle during validation and testing
    dataset=dict(
        type=dataset_type,
        data_root=DATA_ROOT,
        data_prefix=dict(img_path='img_dir/val',
                         seg_map_path='ann_dir/val'),
        # reduce_zero_label=True,
        metainfo=dict(classes=classes, palette=palette),
        pipeline=val_pipeline,
    )
)

val_evaluator = [
    dict(
        type='IoUMetric',
        iou_metrics=['mIoU']),
    dict(
        type='SleeperClassMetric'
    )
]

train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=1000, val_interval=1)
val_cfg = dict(type='ValLoop')

param_scheduler = [
    dict(
        type='LinearLR', start_factor=1e-6, by_epoch=False, begin=0, end=23 * 2),
    dict(type='ReduceOnPlateauLR',
         monitor='mIoU',
         rule='greater',
         patience=15,
         cooldown=0,
         begin=30,
         threshold=0.005,
         verbose=True),
]

optim_wrapper = dict(
    type='AmpOptimWrapper',
    # _delete_=True,
    # optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005),
    optimizer=dict(type='AdamW', lr=1e-3, weight_decay=0.01, amsgrad=True),
    dtype='float16',
    loss_scale=512.
)

log_config = dict(
    by_epoch=True,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])

default_hooks = dict(
    checkpoint=dict(
        type="CheckpointHook",
        save_best=["mIoU", "mAcc"],
        rule="greater",
        max_keep_ckpts=1
    ),
    early_stopping=dict(
        type="EarlyStoppingHook",
        monitor="mIoU",
        patience=60,
        min_delta=0.001),
    visualization=dict(type='SegVisualizationHook', draw=True, interval=5),
    logger=dict(type='LoggerHook', interval=50),
)

vis_backends = [dict(type='LocalVisBackend'),
                dict(type='TensorboardVisBackend')
                ]
visualizer = dict(
    type='SegLocalVisualizer', vis_backends=vis_backends, name='visualizer')

log_processor = dict(
    type='LogProcessor',  # Log processor to process runtime logs
    window_size=50,  # Smooth interval of log values
    by_epoch=True)  # Whether to format logs with epoch type. Should be consistent with the train loop's type.

log_level = 'INFO'
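
The export itself goes through mmdeploy's conversion API. A minimal sketch of that call (the mmdeploy.apis.torch2onnx entry point and all paths below are assumptions for illustration, not taken from the report; check the signature against your installed mmdeploy version):

# Sketch of the conversion step (paths are placeholders).
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='path/to/sample_image.png',         # any sample image used for tracing
    work_dir='work_dir',
    save_file='end2end.onnx',
    deploy_cfg='path/to/deploy_config.py',  # the onnx/backend config above
    model_cfg='path/to/model_config.py',    # the mmseg model config above
    model_checkpoint='path/to/checkpoint.pth',
    device='cuda:0')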

Environment

04/08 14:15:47 - mmengine - INFO - **********Environmental information**********
04/08 14:16:05 - mmengine - INFO - sys.platform: win32
04/08 14:16:05 - mmengine - INFO - Python: 3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:28:07) [MSC v.1916 64 bit (AMD64)]
04/08 14:16:05 - mmengine - INFO - CUDA available: True
04/08 14:16:05 - mmengine - INFO - MUSA available: False
04/08 14:16:05 - mmengine - INFO - numpy_random_seed: 2147483648
04/08 14:16:05 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 3060
04/08 14:16:05 - mmengine - INFO - CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1
04/08 14:16:05 - mmengine - INFO - NVCC: Cuda compilation tools, release 12.1, V12.1.105
04/08 14:16:05 - mmengine - INFO - MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.35.32215 for x64
04/08 14:16:05 - mmengine - INFO - GCC: n/a
04/08 14:16:05 - mmengine - INFO - PyTorch: 2.2.1+cu121
04/08 14:16:05 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - C++ Version: 201703
  - MSVC 192930151
  - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 2019
  - LAPACK is enabled (usually provided by MKL)
  - CPU capability usage: AVX2
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.8.1  (built against CUDA 12.0)
  - Magma 2.5.4
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.8.1, CXX_COMPILER=C:/actions-runner/_work/pytorch/pytorch/builder/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

04/08 14:16:05 - mmengine - INFO - TorchVision: 0.17.1+cu121
04/08 14:16:05 - mmengine - INFO - OpenCV: 4.9.0
04/08 14:16:05 - mmengine - INFO - MMEngine: 0.10.3
04/08 14:16:05 - mmengine - INFO - MMCV: 2.0.0rc4
04/08 14:16:05 - mmengine - INFO - MMCV Compiler: MSVC 193532215
04/08 14:16:05 - mmengine - INFO - MMCV CUDA Compiler: 12.1
04/08 14:16:05 - mmengine - INFO - MMDeploy: 1.3.1+bc75c9d
04/08 14:16:05 - mmengine - INFO -

04/08 14:16:05 - mmengine - INFO - **********Backend information**********
04/08 14:16:05 - mmengine - INFO - tensorrt:    10.0.0b6
04/08 14:16:05 - mmengine - INFO - tensorrt custom ops: NotAvailable
04/08 14:16:05 - mmengine - INFO - ONNXRuntime: 1.17.1
04/08 14:16:05 - mmengine - INFO - ONNXRuntime-gpu:     None
04/08 14:16:05 - mmengine - INFO - ONNXRuntime custom ops:      NotAvailable
04/08 14:16:05 - mmengine - INFO - pplnn:       None
04/08 14:16:06 - mmengine - INFO - ncnn:        None
04/08 14:16:06 - mmengine - INFO - snpe:        None
04/08 14:16:06 - mmengine - INFO - openvino:    None
04/08 14:16:06 - mmengine - INFO - torchscript: 2.2.1+cu121
04/08 14:16:06 - mmengine - INFO - torchscript custom ops:      NotAvailable
04/08 14:16:06 - mmengine - INFO - rknn-toolkit:        None
04/08 14:16:06 - mmengine - INFO - rknn-toolkit2:       None
04/08 14:16:06 - mmengine - INFO - ascend:      None
04/08 14:16:06 - mmengine - INFO - coreml:      None
04/08 14:16:06 - mmengine - INFO - tvm: None
04/08 14:16:06 - mmengine - INFO - vacc:        None
04/08 14:16:06 - mmengine - INFO -

04/08 14:16:06 - mmengine - INFO - **********Codebase information**********
04/08 14:16:06 - mmengine - INFO - mmdet:       None
04/08 14:16:06 - mmengine - INFO - mmseg:       1.2.2
04/08 14:16:06 - mmengine - INFO - mmpretrain:  None
04/08 14:16:06 - mmengine - INFO - mmocr:       None
04/08 14:16:06 - mmengine - INFO - mmagic:      None
04/08 14:16:06 - mmengine - INFO - mmdet3d:     None
04/08 14:16:06 - mmengine - INFO - mmpose:      None
04/08 14:16:06 - mmengine - INFO - mmrotate:    None
04/08 14:16:06 - mmengine - INFO - mmaction:    None
04/08 14:16:06 - mmengine - INFO - mmrazor:     None
04/08 14:16:06 - mmengine - INFO - mmyolo:      None

Error traceback

No response

shiomi326 commented 4 months ago

I have the same issue.

shiomi326 commented 4 months ago

@erogodsky Inference works for any batch size if you specify the dynamic_axes as follows (although the sizes are then no longer displayed explicitly in the ONNX file):

    dynamic_axes={
        'input': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
        'output': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
    }
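
A quick way to confirm the workaround really gives batch-flexible inference is to run the exported graph with two different batch sizes. A minimal sketch, assuming onnxruntime, the end2end.onnx file from this thread, and the 1-channel 112x112 input of the model above:

# Sketch: feed batch sizes 1 and 8 through the exported model and check output shapes.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('end2end.onnx', providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name  # 'INPUT' or 'input', depending on the config
for batch in (1, 8):
    x = np.random.rand(batch, 1, 112, 112).astype(np.float32)
    out = sess.run(None, {input_name: x})[0]
    print(batch, '->', out.shape)  # expected (batch, 3, 112, 112) for this model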