open-mmlab / mmsegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.
https://mmsegmentation.readthedocs.io/en/main/
Apache License 2.0

How does the val pipeline work? #1767

Open zhijiejia opened 2 years ago

zhijiejia commented 2 years ago
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]

data = dict(
    # num_gpus: 8 -> batch_size: 8
    samples_per_gpu=8,
    train=dict(data_root=data_root, pipeline=train_pipeline),
    val=dict(data_root=data_root, pipeline=test_pipeline),
    test=dict(data_root=data_root, pipeline=test_pipeline)
)
  1. The val pipeline is test_pipeline, but in test_pipeline the prob of RandomFlip is never defined, yet in practice RandomFlip's prob ends up being 0.5. When, where, and by what code is the prob of RandomFlip set? (See the sketch after this list.)

  2. I added some print statements to MultiScaleFlipAug in mmseg/datasets/pipelines/test_time_aug.py, but they never appear in my terminal as I expected. It looks as if the MultiScaleFlipAug class is never run, even though the val pipeline is defined with MultiScaleFlipAug. This is very confusing.

  3. Why is the prob of RandomFlip allowed to be None in RandomFlip's __init__? That does not seem reasonable.
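
For context, a simplified sketch of the two classes involved, paraphrased from the mmseg 0.x source (not a verbatim copy). It shows that MultiScaleFlipAug writes the flip decision into results before RandomFlip runs, so RandomFlip's prob is never consulted at test time:

import numpy as np
import mmcv

class MultiScaleFlipAug:
    # mmseg/datasets/pipelines/test_time_aug.py, simplified
    def __init__(self, transforms, img_scale, flip=False):
        self.transforms = transforms  # in mmseg this is Compose(transforms)
        self.img_scale = img_scale if isinstance(img_scale, list) else [img_scale]
        self.flip = flip

    def __call__(self, results):
        aug_data = []
        flip_args = [False, True] if self.flip else [False]
        for scale in self.img_scale:
            for flip in flip_args:
                _results = results.copy()
                _results['scale'] = scale
                _results['flip'] = flip  # the flip decision is made HERE
                aug_data.append(self.transforms(_results))
        # ... mmseg then collates aug_data into per-key lists ...
        return aug_data

class RandomFlip:
    # mmseg/datasets/pipelines/transforms.py, simplified
    def __init__(self, prob=None, direction='horizontal'):
        self.prob = prob
        self.direction = direction

    def __call__(self, results):
        # `prob` is consulted only when no upstream transform (e.g.
        # MultiScaleFlipAug) has already set results['flip']. With
        # prob=None this branch would raise, which never happens at
        # test time because the flag is always preset.
        if 'flip' not in results:
            results['flip'] = np.random.rand() < self.prob
        if results['flip']:
            results['img'] = mmcv.imflip(results['img'], direction=self.direction)
        return results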

xu19971109 commented 2 years ago

I have a similar question. I need to calculate the IoU of a single picture during inference, but I find that running the same picture repeatedly through the same model actually gives different results! I then used np.unique(result[0]) for statistics: in my binary task, the unique values were sometimes [0] and sometimes [0, 1], even though flip is set to False by default. Why are the results so different for the same image?

The following results are all from the same model and the same image. This is the IoU calculation I wrote myself: (screenshots of the IoU output)

This is the IoU calculation function I use in mmseg: (screenshots of the IoU output)
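
Since the screenshots are not reproduced here, a minimal sketch of a per-image binary IoU in the spirit of what is described; binary_iou, pred, gt and cls are illustrative names, not code from the thread or from mmseg:

import numpy as np

def binary_iou(pred, gt, cls=1):
    # Per-image IoU for one class: |pred AND gt| / |pred OR gt|.
    pred_mask = (pred == cls)
    gt_mask = (gt == cls)
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else float('nan')

# result = inference_segmentor(model, img)  # result[0] is an HxW label map
# print(binary_iou(result[0], gt))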

The IoU values from the two calculations are the same, which means that result = inference_segmentor(model, img) is returning a different result each time.
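
One thing worth checking (an assumption, not a confirmed diagnosis) is whether nondeterministic CUDA/cuDNN kernels explain the variation. A minimal sketch that forces deterministic execution before repeating the inference; config_file, checkpoint_file and img are placeholders you would fill in:

import random
import numpy as np
import torch
from mmseg.apis import init_segmentor, inference_segmentor

# Fix every seed and disable nondeterministic cuDNN autotuning.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
r1 = inference_segmentor(model, img)  # img: a path or loaded image
r2 = inference_segmentor(model, img)
print(np.array_equal(r1[0], r2[0]))  # True if inference is now repeatable

If the two runs match under this setting, the earlier variation came from nondeterministic kernels rather than from the data pipeline.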