KaiiZhang opened this issue 3 years ago
Hey man... We are also trying to reproduce the results, but in GTA->Cityscapes the evaluated mIoU mostly comes out 1-2 percent lower than the reported value. Did you try to evaluate on the GTA setting? Also, how much difference did you get in your results? Thanks!
Yes, you are right. We faced the same problem in GTA->Cityscapes, and we overcame it with some additional tricks, such as a bigger input size, a bigger batch size, or a warm-up strategy. However, on SYNTHIA, even after using the tricks mentioned above, the results still did not meet expectations. Hope this helps you. If you make any progress, please feel free to discuss it. And I really hope the author could release the intermediate weights after stage 1.
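For reference, a minimal sketch of the kind of warm-up strategy mentioned above: a linear warm-up that ramps into the usual polynomial ("poly") decay. The constants here (base_lr, warm_iters, max_iters, power) are illustrative placeholders, not settings taken from the paper or this repo.

```python
# Minimal sketch: linear warm-up followed by polynomial decay.
# All constants below are illustrative, not the paper's settings.
def lr_at(step, base_lr=2.5e-4, warm_iters=1000, max_iters=150000, power=0.9):
    if step < warm_iters:
        # linear warm-up from 0 up to base_lr
        return base_lr * (step + 1) / warm_iters
    # standard "poly" decay used by most DeepLab-style trainers
    progress = (step - warm_iters) / max(1, max_iters - warm_iters)
    return base_lr * (1.0 - progress) ** power

# usage inside the training loop:
# for g in optimizer.param_groups:
#     g["lr"] = lr_at(global_step)
```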
@Zachzhang97 How did you set up the SYNTHIA training, given that the two datasets (SYNTHIA and Cityscapes) have different class sets?
@Zachzhang97 Were you able to reproduce the FDA single-scale performance? For the VGG-16 backbone, my FDA single-scale numbers are very low. Below is the performance for LB=0.01.
@YanchaoYang Could you release the intermediate weights for single-scale performance? That would be of great help. Or point me to any tricks you used, or tell us what the FDA single-scale performance was on the VGG backbone; that would be really helpful :D

road: 44.96
sidewalk: 19.51
building: 56.82
wall: 6.53
fence: 6.94
pole: 9.01
light: 5.36
sign: 4.42
vegetation: 64.69
terrain: 8.23
sky: 51.78
person: 32.89
rider: 1.32
car: 47.04
truck: 3.21
bus: 1.79
train: 0.01
motorcycle: 4.98
bicycle: 0.43
mIoU19: 19.47
mIoU16: 22.40
mIoU13: 25.85
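In case it helps with the SYNTHIA class-set question above: mIoU19, mIoU16, and mIoU13 are usually just the same per-class IoUs averaged over different Cityscapes class subsets. The sketch below follows the common SYNTHIA->Cityscapes convention (SYNTHIA has no terrain/truck/train; the 13-class score additionally drops wall/fence/pole); please double-check the class lists against the repo's own evaluation script.

```python
import numpy as np

# Common SYNTHIA->Cityscapes convention (assumption -- verify against the
# repo's evaluation code): 16 classes exclude terrain/truck/train,
# 13 classes additionally exclude wall/fence/pole.
CLASSES = ["road", "sidewalk", "building", "wall", "fence", "pole", "light",
           "sign", "vegetation", "terrain", "sky", "person", "rider", "car",
           "truck", "bus", "train", "motorcycle", "bicycle"]
EXCLUDE_16 = {"terrain", "truck", "train"}
EXCLUDE_13 = EXCLUDE_16 | {"wall", "fence", "pole"}

def subset_miou(iou_per_class, exclude=()):
    # average the per-class IoUs over the classes kept in this protocol
    vals = [iou for name, iou in zip(CLASSES, iou_per_class) if name not in exclude]
    return float(np.mean(vals))

# usage with the 19 numbers posted above, in the same order as CLASSES:
# ious = [44.96, 19.51, 56.82, ...]
# print(subset_miou(ious), subset_miou(ious, EXCLUDE_16), subset_miou(ious, EXCLUDE_13))
```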
I'm unable to reproduce good results on SYNTHIA->Cityscapes. With LB=0.01 the mIoU is 44.09 before SSL, and with LB=0.05 it is 43.30 before SSL. Could you please provide the hyperparameter settings for the SYNTHIA dataset or your intermediate weights? Thanks a lot.
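For readers landing on this thread: LB is the beta window size in FDA's low-frequency amplitude swap, i.e. how large a low-frequency block of the source amplitude spectrum gets replaced by the target's before reconstructing the image with the source phase. A rough numpy sketch of that step is below; it is simplified for illustration and is not the repo's exact implementation.

```python
import numpy as np

# Rough sketch of the step LB (beta) controls in FDA: swap the low-frequency
# part of the source amplitude spectrum with the target's, keep the source
# phase, and invert the FFT. Simplified; not the repo's exact code.
def fda_source_to_target(src, trg, beta=0.01):
    # src, trg: float arrays of shape (C, H, W)
    fft_src = np.fft.fft2(src, axes=(-2, -1))
    fft_trg = np.fft.fft2(trg, axes=(-2, -1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # center the spectra so the low frequencies sit in the middle
    amp_src = np.fft.fftshift(amp_src, axes=(-2, -1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(-2, -1))

    _, h, w = src.shape
    b = int(np.floor(min(h, w) * beta))   # half-size of the swapped window
    ch, cw = h // 2, w // 2
    amp_src[:, ch - b:ch + b, cw - b:cw + b] = amp_trg[:, ch - b:ch + b, cw - b:cw + b]

    amp_src = np.fft.ifftshift(amp_src, axes=(-2, -1))
    # recombine the swapped amplitude with the original source phase
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(-2, -1))
    return np.real(out)
```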