rafaelbou opened 2 years ago
Hi, sorry for the late reply. In our default training and testing (i.e., when `--aug-test` is not passed), `MultiScaleFlipAug` would not be used.
For the validation phase, you just need to follow the test pipeline. For example: https://github.com/open-mmlab/mmsegmentation/blob/7512f05990eb66bba3653cb4d5f478965bf41bd7/configs/_base_/datasets/ade20k.py#L48 — it would follow the `test_pipeline` in the same config. You just need to set `flip=False`.
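To make the advice above concrete, here is a hedged sketch of a test/validation pipeline that keeps `MultiScaleFlipAug` but uses a single scale and `flip=False`, in the style of mmsegmentation's base dataset configs. The scale, normalization values, and transform list are illustrative assumptions, not copied from the poster's configs:

```python
# Sketch of a test_pipeline with MultiScaleFlipAug kept but flipping disabled.
# Values below are assumptions for illustration, not the poster's actual config.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),  # a single scale, so no multi-scale testing
        flip=False,              # the setting recommended in the reply above
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),  # inert when flip=False
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ]),
]
```

With a single `img_scale` and `flip=False`, this wrapper produces exactly one augmented view per image, so validation behaves like plain single-scale evaluation.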
For the test phase, if you do not pass `--aug-test`, `MultiScaleFlipAug` would not be used. If it is used, some values would be set here:
https://github.com/open-mmlab/mmsegmentation/blob/master/tools/test.py#L131-L135
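As a rough sketch of what that linked snippet of `tools/test.py` does: when `--aug-test` is passed, it overwrites the `MultiScaleFlipAug` entry of the test pipeline with several image ratios and enables flipping. Plain dicts are used here instead of mmcv's `Config` so the example is self-contained; the exact ratio list is an assumption based on the linked lines:

```python
# Self-contained sketch (plain dicts, not mmcv.Config) of the --aug-test
# override performed by tools/test.py around the linked lines.
cfg = {
    'data': {'test': {'pipeline': [
        {'type': 'LoadImageFromFile'},
        {'type': 'MultiScaleFlipAug', 'img_scale': (2048, 1024), 'flip': False},
    ]}}
}

aug_test = True  # corresponds to passing --aug-test on the command line
if aug_test:
    # pipeline[1] is assumed to be the MultiScaleFlipAug step (a hard-coded index)
    cfg['data']['test']['pipeline'][1]['img_ratios'] = [
        0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
    cfg['data']['test']['pipeline'][1]['flip'] = True
```

So without `--aug-test` the pipeline is used as written in the config, and with it you get six scales plus horizontal flipping at test time.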
Your problem is caused by the lines you uncommented and added in your configs.
Hi, I have a question: how do you evaluate on ADE20K when you don't use multi-scale flip augmentation? Or are you evaluating in a different way?
Hi all, two questions about `MultiScaleAug`:
1. `MultiScaleAug` during validation (while training):
2. `MultiScaleAug` during test:
Thanks.
My config file with `MultiScaleAug`:

```python
_base_ = '/mmsegmentation/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py'

# convert dataset annotation to semantic segmentation map
data_root = '/mmsegmentation/data'
img_dir = 'images'
ann_dir = 'labels'

# Since we use only one GPU, BN is used instead of SyncBN
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
    type='fcn_hr18',
)

# We can still use the pre-trained Mask RCNN model though we do not need to
# use the mask branch
load_from = '/mmsegmentation/checkpoints/fcn_hr18_512x1024_160k_cityscapes_20200602_190822-221e4a4f.pth'

# Set up working dir to save files and logs.
cfg.work_dir = './work_dirs/benchmark_train'

data = dict(
    samples_per_gpu=2,  # Batch size of a single GPU
    workers_per_gpu=2,  # Worker to pre-fetch data for each single GPU
    train=dict(  # Train dataset config
        type='CityscapesDataset',  # Type of dataset, refer to mmseg/datasets/ for details.
```
My config file without `MultiScaleAug` (validation crashes):

```python
_base_ = '//mmsegmentation/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py'

# convert dataset annotation to semantic segmentation map
data_root = './annotations'
img_dir = 'images'
ann_dir = 'labels'

# Since we use only one GPU, BN is used instead of SyncBN
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
    type='fcn_hr18',
)

# We can still use the pre-trained Mask RCNN model though we do not need to
# use the mask branch
load_from = '/checkpoints/pretrained_nns/hrnetv2_w18-00eb2006.pth'

# Set up working dir to save files and logs.
cfg.work_dir = './work_dirs/benchmark_train'
work_dir = '/wo_MultiScaleFlipAug'

data = dict(
    samples_per_gpu=2,  # Batch size of a single GPU
    workers_per_gpu=2,  # Worker to pre-fetch data for each single GPU
    train=dict(  # Train dataset config
        type='CityscapesDataset',  # Type of dataset, refer to mmseg/datasets/ for details.
```
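Since the earlier reply says the validation split should simply follow the test pipeline, here is a hedged sketch of how the `val` entry of `data` might be wired up. The dataset type, directory names, and the minimal `test_pipeline` below are illustrative assumptions, not the poster's actual config:

```python
# Sketch: point the validation split at the test-time pipeline, as the
# reply above recommends. All names/paths here are illustrative assumptions.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        flip=False,  # single-scale, no-flip validation
        transforms=[dict(type='Resize', keep_ratio=True)]),
]

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    val=dict(
        type='CityscapesDataset',
        data_root='/mmsegmentation/data',
        img_dir='images',
        ann_dir='labels',
        pipeline=test_pipeline),  # validation reuses the test pipeline
)
```

The key point is the last line: `val` gets `pipeline=test_pipeline` rather than its own pipeline definition, so validation during training and standalone testing stay consistent.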