liyunsheng13 / BDL

MIT License
222 stars 30 forks source link

About the GTA2Cityscapes images and SSL model #18

Closed zhyx12 closed 5 years ago

zhyx12 commented 5 years ago

Thanks for sharing your code. I'd like to know the outer iteration number used for the provided GTA2Cityscapes dataset and the SSL_step1 and SSL_step2 models: is K=1 or K=2? In other words, after training, will we get an mIoU of 47.2 or 48.5?

liyunsheng13 commented 5 years ago

I'm sorry, your question is not very clear to me. Could you provide more details, such as which iteration number you mean?

zhyx12 commented 5 years ago

The K here means the outer iteration number in Algorithm 1. Also, in Table 2, line 4 (K=1) gets 47.2 and line 7 (K=2) gets 48.5.

liyunsheng13 commented 5 years ago

What is the exact question you want to ask? I'm still confused. Which iteration number do you refer to, and which GTA2Cityscapes dataset do you refer to?

zhyx12 commented 5 years ago

Sorry for the unclear description. By "GTA2Cityscapes dataset" I mean the "GTA5 as Cityscapes" link you provided.

liyunsheng13 commented 5 years ago

The translated images are based on the model with mIoU 47.2.

zhyx12 commented 5 years ago

Thanks for your reply. Since the translated images are based on the adaptation model with mIoU 47.2, does that mean we only need to train the adaptation model, and don't need to train the style transfer model (CycleGAN) again to get the final result (48.5)?

liyunsheng13 commented 5 years ago

Yes. I provide a simple way to reproduce the best result in my paper. But if you are interested in reproducing the whole training process, you can train CycleGAN with the code I uploaded.

zhyx12 commented 5 years ago

Got it. Thank you so much for your attention and patience.

zhyx12 commented 5 years ago

One more question: when training the SSL model (with pseudo labels), one argument is "--init-weights /path/to/inital_weights". Should we train from the pretrained DeepLab-V2, or fine-tune the segmentation model obtained from the previous adaptation step (without pseudo labels)? Item 3 in Issue 16 uses the word "finetune".

liyunsheng13 commented 5 years ago

You can use the pretrained DeepLab-V2. You can also train DeepLab-V2 on the synthetic data first and then use that as the initial model. The results make no difference.
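For reference, an SSL training invocation along the lines discussed above might look like the following. This is a hypothetical sketch: only the `--init-weights` flag is quoted from this thread, and the script name, data path, and snapshot flag are assumptions, not the repo's actual interface.

```shell
# Hypothetical SSL (pseudo-label) training step for BDL.
# Only --init-weights appears in this thread; the script name and the
# other flags are illustrative placeholders -- check the repo's README.
python train.py \
    --init-weights /path/to/DeepLab_resnet_pretrained.pth \
    --data-dir /path/to/translated_GTA5 \
    --snapshot-dir ./snapshots/ssl_step1
```

Per the reply above, pointing `--init-weights` at the ImageNet-pretrained DeepLab-V2 or at a DeepLab-V2 first trained on the synthetic data should give comparable final results.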

wj-zhang commented 4 years ago

> One more question, when training SSL model(with pseudo label), one argument is "--init-weights /path/to/inital_weights", should we train from pretrained DeepLab-V2 or finetune the segmentation model get from previous adaptation step(without pseudo label). Item 3 in Issue 16 use the word finetune.

Hi! Did you reproduce the best result in the paper? What initial weights did you use — the pretrained DeepLab-V2?

zhyx12 commented 4 years ago

I use the SSL_step2 model as initial weights, which itself achieves about 47.37 mIoU. I test the mIoU every 2k iterations, and the best result I can get is 48.3~48.5.
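The evaluation schedule described above (checkpointing every 2k iterations and keeping the best score) can be sketched generically. Everything below, including the `evaluate` stub and its fake mIoU curve, is purely illustrative and not the repo's actual evaluation code.

```python
import math

# Sketch of an "evaluate every 2k iterations, keep the best" loop.
EVAL_INTERVAL = 2000
TOTAL_ITERS = 120000

def evaluate(iteration):
    # Placeholder: in practice this would run the segmentation model on
    # the Cityscapes validation set and return mIoU. Here a fake curve
    # rising toward 48.5 stands in for real results.
    return 40.0 + 8.5 * math.sin(iteration / TOTAL_ITERS * math.pi / 2)

best_miou, best_iter = 0.0, 0
for it in range(EVAL_INTERVAL, TOTAL_ITERS + 1, EVAL_INTERVAL):
    miou = evaluate(it)
    if miou > best_miou:
        best_miou, best_iter = miou, it

print(f"best mIoU {best_miou:.2f} at iteration {best_iter}")
```

Tracking the running best this way matters because, as both commenters note, the peak mIoU (48.3~48.5) need not occur at the final iteration.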

wj-zhang commented 4 years ago

> I use the SSL_step2 model as initial weights which can achieve about 47.37 (mIoU). I test the mIoU every 2k iterations and the best result I can get is 48.3~48.5.

Thank you very much for your reply! I used the DeepLab-V2 model as initial weights and obtained 48.17 at the last iteration (120000). I will try your setting — thanks again!