Thanks for your code.
We cannot reproduce the DeepLab-V3+ results by following the default configuration.
We tried using an ImageNet pre-trained ResNet-101 as the backbone for DeepLab-V3+ with the default configuration provided in this repo. With S4GAN + MLMT, we reach 74.9 mIoU on the full data, but only 68.5 mIoU on 1/8 of the data, which is far behind the results reported in the paper. Could you share more details about the training configuration for DeepLab-V3+ (e.g., learning rate, random seed, backbone, num_steps)?
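For reference, below is a minimal sketch of the learning-rate schedule we assumed in our runs. The base learning rate, decay power, and total number of steps here are our own guesses, not values taken from this repo; please correct them if your setup differs.

```python
# Sketch of the polynomial LR decay we assumed for DeepLab-V3+ training.
# base_lr (2.5e-4), power (0.9), and num_steps (40000) are our assumptions,
# not values from this repo.
def poly_lr(base_lr, step, num_steps, power=0.9):
    """Polynomial learning-rate decay commonly used for DeepLab training."""
    return base_lr * (1.0 - step / num_steps) ** power

base_lr = 2.5e-4   # assumed initial learning rate
num_steps = 40000  # assumed total training iterations

for step in range(num_steps):
    lr = poly_lr(base_lr, step, num_steps)
    # optimizer.param_groups[0]['lr'] = lr  # applied per iteration in our run
```

Could you confirm whether you used the same poly schedule, and if so, with which values?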
Thank you.