wasidennis / AdaptSegNet

Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
849 stars · 203 forks

How to reproduce SYNTHIA-to-CityScapes #31

Open xiaosean opened 5 years ago

xiaosean commented 5 years ago

Hi, thanks for the code and paper :) I am curious about SYNTHIA-to-CityScapes. I got a good result with GTA5-to-CityScapes, and now I want to reproduce SYNTHIA-to-CityScapes. I noticed that you follow the setting of [3], which evaluates on 13 classes. Unfortunately [3] did not release code, so could you provide the materials for your SYNTHIA-to-CityScapes experiments (data loader, evaluation)?

[3] Y.-H. Chen, W.-Y. Chen, Y.-T. Chen, B.-C. Tsai, Y.-C. Frank Wang, and M. Sun. No More Discrimination: Cross City Adaptation of Road Scene Segmenters. In ICCV 2017.

Serge-weihao commented 5 years ago

@xiaosean can you load the original ground truth of 23 classes in SYNTHIA-RAND-CITYSCAPES correctly? I cannot map the data in GT/LABELS to the 23 classes mentioned in README.txt.

xiaosean commented 5 years ago

@Serge-weihao I train the same way as for GTA5-to-CityScapes, i.e. with 19 classes. I also tried training with 13 classes, but it gives a much worse result.

You can refer to my reproduced data loader. It is still messy; I haven't cleaned up the code yet: https://github.com/xiaosean/AdaptSegNet/blob/dev/dataset/synthia_dataset.py

Or you can refer to this repo: https://github.com/stu92054/Domain-adaptation-on-segmentation/blob/master/Adapt_Structured_Output/dataset/Synthia_dataset.py
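The core of both loaders above is remapping raw SYNTHIA label ids to Cityscapes train ids, with everything else sent to the ignore label. Here is a minimal sketch of that mechanism; the dict entries below are illustrative placeholders only, so take the authoritative id-to-trainid mapping from the linked `synthia_dataset.py` files (or the SYNTHIA README), not from here.

```python
import numpy as np

# Illustrative entries only -- verify the real mapping against the
# linked data loaders before use.
ID_TO_TRAINID = {3: 0, 4: 1, 2: 2}  # e.g. road, sidewalk, building

def remap_label(raw, id_to_trainid=ID_TO_TRAINID, ignore_label=255):
    """Translate raw SYNTHIA ids to train ids via a 256-entry lookup table.

    Any id missing from the dict becomes `ignore_label` (255), the
    convention the repo's loss functions already use for GTA5.
    """
    lut = np.full(256, ignore_label, dtype=np.uint8)
    for src, dst in id_to_trainid.items():
        lut[src] = dst
    return lut[raw]
```

The lookup-table form vectorizes the remap in one indexing step, instead of looping over classes per image.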

Serge-weihao commented 5 years ago

@xiaosean I have read your code. You seem to use the data in GT/COLOR, but README.txt says: "GT/COLOR: folder containing png files (one per image). Annotations are given using a color representation. This is mainly provided for visualization and you are not supposed to use them for training." What's more, the quality of the png files in GT/COLOR is not good, so I am really confused.

xiaosean commented 5 years ago

@Serge-weihao I will confirm it when I have time in the next few days. I remember trying to use the GT dataset for a long time without success, which is why I used this alternative.

Serge-weihao commented 5 years ago

@xiaosean The file 0000002 is in panoptic-segmentation style, but I cannot find the mapping from (R, G, B) to classes. This png file has 86 distinct (R, G, B) tuples (person, e.g., has more than one color). How can your code map them into a subset of the 23 classes?

Serge-weihao commented 5 years ago

@wasidennis Can you release the code for the SYNTHIA-to-CityScapes setting in your paper? Did you use SYNTHIA-RAND-CITYSCAPES from this link?

xiaosean commented 5 years ago

@Serge-weihao I use the same dataset as in your link, SYNTHIA_RAND_CITYSCAPES (about 20 GB). I found that I actually use the GT/LABELS dir instead of GT/COLOR. You can take a look at the code below (attached as a screenshot).

Btw, I tried to use the Pillow library to load LABELS.png, but it failed. Currently I use OpenCV to load it.

wasidennis commented 5 years ago

@Serge-weihao yes, it is the one from SYNTHIA-RAND-CITYSCAPES. We use almost the same setting as in GTA5. Just note that the input size of SYNTHIA images is resized to [1280, 760] and the total number of training iterations is 94,000.
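For reference, the resize step described above can be sketched with PIL; the exact preprocessing lives in the repo's data loaders, so this is only a minimal illustration of the stated setting:

```python
from PIL import Image

# SYNTHIA inputs resized to 1280x760; PIL takes (width, height).
TARGET_SIZE = (1280, 760)

def resize_pair(image, label):
    """Resize an image/label pair to the SYNTHIA training resolution.

    The label must use NEAREST so class ids are never interpolated
    into values that correspond to no class.
    """
    image = image.resize(TARGET_SIZE, Image.BICUBIC)
    label = label.resize(TARGET_SIZE, Image.NEAREST)
    return image, label
```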

Serge-weihao commented 5 years ago

@wasidennis Did you train the SYNTHIA-to-CityScapes model with 13 classes and evaluate with 13 classes, or did you train with 19 classes and evaluate with 13 classes?

wasidennis commented 5 years ago

The model was trained on 19 classes and we show 13-category result following the Cross-City paper at that time.

Here is the complete 19-category result we have:

===> road: 84.34
===> sidewalk: 42.68
===> building: 77.46
===> wall: 9.34
===> fence: 0.24
===> pole: 22.85
===> light: 4.66
===> sign: 6.98
===> vegetation: 77.87
===> terrain: 0.0
===> sky: 82.52
===> person: 54.34
===> rider: 21.01
===> car: 72.27
===> truck: 0.0
===> bus: 32.16
===> train: 0.0
===> motorcycle: 18.89
===> bicycle: 32.27
===> mIoU: 33.68
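As a sanity check, the paper's 13-category number can be recovered from these 19-category IoUs by averaging over the Cross-City subset. The excluded classes (wall, fence, pole, terrain, truck, train) are my reading of the Cross-City protocol, not something stated in this thread:

```python
# 19-category per-class IoUs posted above.
iou_19 = {
    "road": 84.34, "sidewalk": 42.68, "building": 77.46, "wall": 9.34,
    "fence": 0.24, "pole": 22.85, "light": 4.66, "sign": 6.98,
    "vegetation": 77.87, "terrain": 0.0, "sky": 82.52, "person": 54.34,
    "rider": 21.01, "car": 72.27, "truck": 0.0, "bus": 32.16,
    "train": 0.0, "motorcycle": 18.89, "bicycle": 32.27,
}
# Classes assumed excluded under the 13-category Cross-City protocol.
excluded = {"wall", "fence", "pole", "terrain", "truck", "train"}
iou_13 = [v for k, v in iou_19.items() if k not in excluded]
miou_13 = sum(iou_13) / len(iou_13)
print(round(miou_13, 2))  # -> 46.73, consistent with the 46.7 in the paper
```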

ETHRuiGong commented 5 years ago

Hello @xiaosean, were you able to reproduce the multi-level SYNTHIA-to-Cityscapes result of 46.7 from the paper? Thanks a lot!

xiaosean commented 5 years ago

@ETHRuiGong Nope, I haven't tried it yet.

yingzicy commented 4 years ago

Hi, can anyone help me out? I tried to download SYNTHIA from the website, but it only contains a depth folder, without images and labels. I also downloaded GTA5, but it only contains color labels rather than label ids. Could you please share them with me? Thanks a lot!

lolinkun commented 4 years ago

Hi, I would really like to share my datasets with you. However, I haven't worked in this direction for a long time, and since the dataset is so large, I deleted it from my PC. Sorry that I cannot help you. Maybe you can find them on the web? They are very popular datasets and widely used in a range of research.

Yours


785256592 commented 4 years ago

> @Serge-weihao yes, it is the one from SYNTHIA-RAND-CITYSCAPES. We use almost the same setting as in GTA5. Just note that the input size of SYNTHIA images is resized to [1280, 760] and the total number of training iterations is 94,000.

Dear @wasidennis, hello. I followed your advice to resize SYNTHIA to [1280, 760] and train for 94,000 iterations, using the same data loader as GTA5_dataset, but I still cannot reproduce SYNTHIA-to-CityScapes. Can you share the SYNTHIA-to-CityScapes code with me? Best regards!

785256592 commented 4 years ago

> @Serge-weihao I train the same way as for GTA5-to-CityScapes, i.e. with 19 classes. I also tried training with 13 classes, but it gives a much worse result.
>
> You can refer to my reproduced data loader. It is still messy; I haven't cleaned up the code yet: https://github.com/xiaosean/AdaptSegNet/blob/dev/dataset/synthia_dataset.py
>
> Or you can refer to this repo: https://github.com/stu92054/Domain-adaptation-on-segmentation/blob/master/Adapt_Structured_Output/dataset/Synthia_dataset.py

Dear @xiaosean, hi. I used the code from https://github.com/stu92054/Domain-adaptation-on-segmentation/blob/master/Adapt_Structured_Output/dataset/Synthia_dataset.py, but I found a problem: SYNTHIA's labels load as 4-D (multi-channel) arrays, while the GTA5 labels are 3-D. Can you please show me how to handle this?
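One common way to reconcile the two loaders, sketched below, is to collapse a multi-channel SYNTHIA label array to a single 2-D class map before the id remapping, so it matches the single-plane GTA5 labels. The channel index is an assumption (with OpenCV's BGR(A) ordering the class id is usually not in channel 0); spot-check a few pixels against the SYNTHIA README class list.

```python
import numpy as np

def squeeze_label(raw, channel=-1):
    """Reduce a multi-channel SYNTHIA label array to a 2-D class map.

    `channel` is an assumption -- verify which plane holds the class id
    for your files (OpenCV loads channels in BGR(A) order). A label that
    is already 2-D is returned unchanged.
    """
    if raw.ndim == 3:
        raw = raw[:, :, channel]
    return raw
```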