Closed HCShi closed 2 years ago
Hi, the difference between Mapillary Vistas and COCO in the config is given below:
```yaml
MODEL:
  TENSOR_DIM: 400
  POSITION_HEAD:
    THING:
      POS_NUM: 3
      NUM_CLASSES: 37
      THRES: 0.2
      TOP_NUM: 200
    STUFF:
      NUM_CLASSES: 29
      THRES: 0.1
  SEM_SEG_HEAD:
    NUM_CLASSES: 29
  KERNEL_HEAD:
    INSTANCE_SCALES: ((1, 128), (64, 256), (128, 512), (256, 1024), (512, 4096),)
    TEST_SCALES: ((1, 128), (128, 256), (256, 512), (512, 1024), (1024, 4096),)
  INFERENCE:
    INST_THRES: 0.5
    SIMILAR_THRES: 0.95
DATASETS:
  TRAIN: ("mapillary_train_panoptic_separated",)  # generate beforehand
  TEST: ("mapillary_val_panoptic_separated",)  # generate beforehand
SOLVER:
  BASE_LR: 0.02
  CLIP_GRADIENTS:
    CLIP_VALUE: 15.0
  IMS_PER_BATCH: 32
  MAX_ITER: 150000
INPUT:
  MIN_SIZE_TRAIN: (1024, 1280, 1408, 1536, 1664, 1792, 1920, 2048)
  MIN_SIZE_TEST: 2048
  MAX_SIZE_TRAIN: 4096
  MAX_SIZE_TEST: 2048
  CROP:
    ENABLED: True
    TYPE: "absolute"  # use with instance crop, which includes an instance in each random crop
    SIZE: (1024, 1024)
```
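Note that the `KERNEL_HEAD` scale ranges above overlap (e.g. `(1, 128)` and `(64, 256)`), so a single instance scale can fall into more than one range. The helper below is a toy illustration of that overlap, not code from the repo; the name `matching_levels` and the size values are made up for the example:

```python
# Toy illustration (hypothetical helper, not repo code): check which of the
# overlapping (low, high) ranges from KERNEL_HEAD.INSTANCE_SCALES contain a
# given instance scale.
INSTANCE_SCALES = ((1, 128), (64, 256), (128, 512), (256, 1024), (512, 4096))

def matching_levels(size, scale_ranges=INSTANCE_SCALES):
    """Return the indices of every range that contains `size`."""
    return [i for i, (lo, hi) in enumerate(scale_ranges) if lo <= size <= hi]

print(matching_levels(100))   # inside both (1, 128) and (64, 256) -> [0, 1]
print(matching_levels(2000))  # only inside (512, 4096) -> [4]
```

The overlap means medium-sized instances are supervised at two adjacent levels rather than exactly one.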
Thanks for the reply! If I want to train on this dataset, should I prepare the dataset structure as follows?
```
mapillary_vistas/
  training/
    images/
    instances/
    labels/
    panoptic/
  validation/
    images/
    instances/
    labels/
    panoptic/
```
Would it be convenient for you to upload the corresponding files for training on the Mapillary dataset? Thanks!
Yes, the dataset structure is right. The whole dataset is too large to upload, though; you can refer to the official data generation script and prepare the dataset in COCO style.
Thanks a lot! I'll try the scripts.
Hi, I have generated the dataset structure following the official script. Do I need to run `python datasets/prepare_panoptic_fpn.py` to extract semantic annotations from the panoptic annotations? It seems that this dataset is not compatible with COCO.
Hi, how to handle 'Thing' and 'Stuff' is up to you. In `prepare_panoptic_fpn.py`, all things are viewed as a single super-category for semantic segmentation, and our approach follows that convention. You should generate the COCO-style JSON required by `prepare_panoptic_fpn.py` yourself.
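The conversion described above can be sketched as follows. This is only an illustration of the idea (collapse every thing segment into one super-category, keep per-category labels for stuff), not the actual `prepare_panoptic_fpn.py` code; the function name, the `stuff_id_map` dict, and the label values `0`/`255` are assumptions for the example:

```python
# Hedged sketch of panoptic -> semantic conversion in the spirit of
# detectron2's prepare_panoptic_fpn.py: every "thing" segment collapses to a
# single super-category label, while each "stuff" segment keeps a per-category
# label. Label ids and the ignore value are illustrative assumptions.
import numpy as np

def panoptic_to_semantic(pan_id_map, segments_info, stuff_id_map,
                         thing_label=0, ignore_label=255):
    """pan_id_map: HxW array of panoptic segment ids (RGB already decoded).
    segments_info: list of dicts with 'id', 'category_id', 'isthing'.
    stuff_id_map: hypothetical mapping from stuff category id to semantic label."""
    sem = np.full(pan_id_map.shape, ignore_label, dtype=np.uint8)
    for seg in segments_info:
        mask = pan_id_map == seg["id"]
        if seg["isthing"]:
            sem[mask] = thing_label                     # all things share one label
        else:
            sem[mask] = stuff_id_map[seg["category_id"]]  # stuff keeps its category
    return sem
```

Pixels belonging to no segment stay at the ignore label, which most semantic-segmentation losses skip.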
Thanks a lot!
Could you please provide the training config files for the Mapillary dataset?