parquets opened this issue 3 years ago · Open
Can you tell me which devices were used for running the code: 1 Tesla V100-32GB, or 2 Tesla T4s?
I have one 2080 Ti, and I use the config in syn2cityscapes_t4/run_task/warmup_at.
The latest config "syn2cityscapes_t4" is for running the code on 2 Tesla T4s. For a single 2080 Ti, I suggest you switch to commit 2eadc081c810b6780fd046d17401083a816e64f5 to run the code. We will check and rerun it on 1 Tesla T4 later. If you have any questions, please let us know.
Thank you. This is the current log file; it may help you.
Sorry to trouble you again. I used one V100 and ran the script 'run_syn2cityscapes_self_traing_v100.sh' for the whole project, and I got results similar to those in the IAST paper. But I find that in the warmup_at stage, the mIoU increases only slightly compared with the source-only result. The attached file is the training log. I want to know whether you remove the data augmentation before testing?
We test without data augmentation; please see: https://github.com/Raykoooo/IAST/blob/24db9912403a01faf34884d60c9778cd48651bde/code/sseg/workflow/eval.py#L57 https://github.com/Raykoooo/IAST/blob/24db9912403a01faf34884d60c9778cd48651bde/code/sseg/datasets/loader/dataset.py#L24
And your observation is similar to ours: running the code on 2 Tesla T4s with the latest config "syn2cityscapes_t4", the best mIoU (16 classes) is 0.3866 for source-only and 0.3884 for warmup.
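As a minimal illustration of the pattern behind the two files linked above (the class name and structure here are hypothetical, not the repo's actual API): augmentation is applied only when the dataset is constructed in training mode, while evaluation returns the raw image and mask.

```python
import numpy as np

class SegDataset:
    """Toy segmentation dataset: augmentation runs only in training mode."""

    def __init__(self, images, masks, train=True, rng=None):
        self.images, self.masks = images, masks
        self.train = train                        # False => no augmentation at eval
        self.rng = rng or np.random.default_rng(0)

    def __getitem__(self, i):
        img, mask = self.images[i], self.masks[i]
        if self.train and self.rng.random() < 0.5:
            img, mask = img[:, ::-1], mask[:, ::-1]  # random horizontal flip
        return img.copy(), mask.copy()

# The eval split is built with train=False, so the sample comes back untouched.
imgs = [np.arange(12).reshape(3, 4)]
masks = [np.arange(12).reshape(3, 4) % 3]
eval_ds = SegDataset(imgs, masks, train=False)
img, _ = eval_ds[0]
assert np.array_equal(img, imgs[0])
```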
I have tried to adapt the code to the DeepLabv3+ model, and I think the config file for SYNTHIA-to-Cityscapes needs to be modified. I checked the code of IAST and AdaptSegNet, performed the adversarial training at the beginning of training, and changed the discriminator weight to 0.01. Finally, I got 34 mIoU over 19 classes. The attached file is the training log. I think more parameter tuning is needed for the SYNTHIA dataset.
Sorry to trouble you again, and thank you for your excellent work. I have adapted IAST to another model, but I still have one problem. I adversarially trained the model and got an mIoU similar to yours. However, when I downloaded the warmup model you provide and tested it, I found it performs in a more balanced way than my adversarial-training result: some hard classes like "train" perform much better than in mine. My "train" class IoU is no more than 10. Do you have any tricks for the warmup stage?
Sorry for not replying to you in time. Please note that there are only 16 (or 13) categories in SYNTHIA-to-Cityscapes, while the mIoU in your log is the mean over 19 categories, so you should recalculate it. Also, the reported SYNTHIA-to-Cityscapes results were obtained without careful parameter tuning, so they may not be the best.
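For reference, a small sketch of that recalculation. The 16-class protocol, as I understand it, excludes the Cityscapes classes absent from SYNTHIA (terrain, truck, train); the per-class IoU values below are made-up numbers purely for illustration.

```python
import numpy as np

CITYSCAPES_19 = [
    "road", "sidewalk", "building", "wall", "fence", "pole",
    "traffic light", "traffic sign", "vegetation", "terrain", "sky",
    "person", "rider", "car", "truck", "bus", "train",
    "motorcycle", "bicycle",
]
# Classes absent from SYNTHIA under the usual 16-class evaluation protocol.
NOT_IN_SYNTHIA = {"terrain", "truck", "train"}

def miou(per_class_iou, names=CITYSCAPES_19, excluded=()):
    """Mean IoU over the classes not listed in `excluded`."""
    kept = [iou for n, iou in zip(names, per_class_iou) if n not in excluded]
    return float(np.mean(kept))

# Made-up IoUs: 0.40 for every valid class, 0.0 for the three missing ones.
ious = [0.0 if n in NOT_IN_SYNTHIA else 0.40 for n in CITYSCAPES_19]
print(round(miou(ious), 4))                           # 0.3368 (19-class mean)
print(round(miou(ious, excluded=NOT_IN_SYNTHIA), 4))  # 0.4    (16-class mean)
```

The missing classes score 0 IoU by construction, so averaging over 19 classes systematically understates the 16-class number.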
Actually, we cannot explain why the performance of the "train" class is so good during adversarial training. We apply augmentation with HorizontalFlip and RandomSizedCrop in all experiments, which is different from the augmentation used in other papers, and we guess that this affects the final results. We hope this is helpful; if you find something new, please let us know.
Hi, I meet some problems in the warmup_at stage (the second stage). The mIoU of the model does not increase compared to the source-only model: it decreases and increases repeatedly, and the best checkpoint reaches 32.63 mIoU, no better than the source-only result. I can't find what is wrong. In addition, I find the BN layers are not frozen correctly.
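On the last point, a common pitfall: freezing BatchNorm in PyTorch needs two things, putting the layers in eval mode (so the running statistics stop updating) and disabling gradients on their affine parameters. Crucially, `model.train()` flips every submodule back to training mode, so the freeze must be re-applied after each `model.train()` call. A minimal sketch (not the repo's actual code):

```python
import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def freeze_bn(model: nn.Module) -> None:
    """Freeze all BatchNorm layers: stop running-stat and weight updates."""
    for m in model.modules():
        if isinstance(m, BN_TYPES):
            m.eval()                      # freezes running_mean / running_var
            for p in m.parameters():
                p.requires_grad = False   # freezes the affine gamma / beta

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.train()
freeze_bn(model)  # NB: re-run after every model.train(), or the freeze is undone
```

If re-applying is error-prone, another option is to override the model's `train()` method so it skips BN layers.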