lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

DarkZurich #52

Closed mkbkxdd closed 2 years ago

mkbkxdd commented 2 years ago

Hi, thank you for providing the code!

This is how the DarkZurich dataset is arranged in the README:

dark_zurich (optional)
│ │ ├── gt
│ │ │ ├── val
│ │ ├── rgb_anon
│ │ │ ├── train
│ │ │ ├── val

but configs/_base_/datasets/uda_cityscapes_to_darkzurich_512x512.py contains this:

target=dict(
    type='DarkZurichDataset',
    data_root='data/dark_zurich/',
    img_dir='rgb_anon/train/night/',
    ann_dir='gt/train/night/',

However, I cannot find the ground truth for train/night in the DarkZurich dataset.

I am very confused about this. Could you help me solve it? Thank you very much!

lhoyer commented 2 years ago

You can ignore ann_dir='gt/train/night/' in the config file: dark_zurich_train_pipeline contains no dict(type='LoadAnnotations') step, so no annotations are ever loaded for the target domain. Therefore, the directory dark_zurich/gt/train/night/ is not required to exist.
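A minimal sketch of the idea (abbreviated, not the exact DAFormer config; the transform parameters here are illustrative): because the target pipeline only loads and collects images, the `ann_dir` entry is dead configuration.

```python
# Sketch of an mmseg-style target pipeline WITHOUT an annotation loader.
# Since no step is dict(type='LoadAnnotations'), ann_dir is never read.
dark_zurich_train_pipeline = [
    dict(type='LoadImageFromFile'),  # reads from img_dir only
    # note: no dict(type='LoadAnnotations') here
    dict(type='Resize', img_scale=(1024, 512)),
    dict(type='RandomCrop', crop_size=(512, 512)),
    dict(type='RandomFlip', prob=0.5),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img']),  # only 'img', no 'gt_semantic_seg'
]

target = dict(
    type='DarkZurichDataset',
    data_root='data/dark_zurich/',
    img_dir='rgb_anon/train/night/',
    ann_dir='gt/train/night/',  # unused: no annotation loader in the pipeline
    pipeline=dark_zurich_train_pipeline,
)

# Quick sanity check: nothing in the pipeline touches annotations.
assert all(step['type'] != 'LoadAnnotations'
           for step in dark_zurich_train_pipeline)
```

This is the usual pattern for unsupervised domain adaptation: the target-domain images are used for self-training while their labels are reserved for evaluation only.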

YYDSDD commented 2 years ago

Thank you very much for your reply, and I have another problem (screenshot of evaluation results attached): when I run Cityscapes→DarkZurich, the IoU of the truck, train, and bus classes comes out like this, and the mIoU is very low. Do you know why this happens?

lhoyer commented 2 years ago

For Cityscapes→ACDC and Cityscapes→DarkZurich, the results reported in the paper are calculated on the test split. For DarkZurich, the performance differs significantly between the validation and test splits. Please have a look at the README.md on how to obtain the test mIoU.

The checkpoint we used to obtain the test results (you can find the link in the README.md) had the following validation performance (the complete logs are provided with the checkpoint):

+---------------+-------+-------+
| Class         | IoU   | Acc   |
+---------------+-------+-------+
| road          | 91.24 | 96.91 |
| sidewalk      | 60.69 | 78.81 |
| building      | 59.06 | 84.16 |
| wall          | 25.52 | 36.2  |
| fence         | 46.12 | 51.49 |
| pole          | 42.42 | 62.06 |
| traffic light | 41.13 | 81.56 |
| traffic sign  | 21.16 | 24.06 |
| vegetation    | 51.84 | 86.58 |
| terrain       | 34.2  | 45.1  |
| sky           | 34.06 | 36.03 |
| person        | 23.66 | 25.85 |
| rider         | 36.35 | 37.86 |
| car           | 66.1  | 87.52 |
| truck         | 0.0   | nan   |
| bus           | 0.0   | nan   |
| train         | 0.0   | 0.0   |
| motorcycle    | 30.82 | 35.44 |
| bicycle       | 40.17 | 57.77 |
+---------------+-------+-------+
2022-07-26 01:43:28,220 - mmseg - INFO - Summary:
2022-07-26 01:43:28,220 - mmseg - INFO -
+-------+-------+-------+
| aAcc  | mIoU  | mAcc  |
+-------+-------+-------+
| 72.46 | 37.08 | 54.55 |
+-------+-------+-------+
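To see why a few zero-IoU classes drag the overall number down so much: mIoU is the unweighted mean of the per-class IoUs over all 19 classes, which can be verified directly against the table above.

```python
# Per-class IoUs from the validation table above (19 Cityscapes classes,
# in table order; truck, bus, and train scored 0.0).
ious = [91.24, 60.69, 59.06, 25.52, 46.12, 42.42, 41.13, 21.16, 51.84,
        34.2, 34.06, 23.66, 36.35, 66.1, 0.0, 0.0, 0.0, 30.82, 40.17]

# mIoU is the plain arithmetic mean over all classes.
miou = sum(ious) / len(ious)
print(round(miou, 2))  # 37.08, matching the summary table

# For comparison: excluding the three zero classes, the mean of the
# remaining 16 classes is noticeably higher (~44), showing how heavily
# a handful of failed classes weigh on the average.
miou_nonzero = sum(i for i in ious if i > 0) / 16
print(round(miou_nonzero, 2))
```

This is why rare, hard classes like truck, bus, and train dominate the discussion of DarkZurich results: each one contributes a full 1/19 of the final score regardless of how few pixels it covers.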

Given that the variance between different seeds is quite high on the DarkZurich validation set, your results actually look quite good. The provided checkpoint is the one with the median validation performance over the seeds [0, 1, 2].

YYDSDD commented 2 years ago

Thank you very much for your reply! Now I understand!