Closed wangyunnan closed 1 year ago
In Fig. 1c, we want to show the progress of UDA over time to give an impression of how much HRDA improves the SOTA when combined with the best UDA method "DAFormer". I agree that a comparison of older UDA methods such as DACS with a DeepLabV2 network and HRDA with a DAFormer network is not entirely fair. For that reason, we will add a legend to indicate the network architecture to make this clear.
Still, in Tab. 2, we have fair comparisons with different UDA methods, showing that HRDA improves over all of them by a significant margin. In particular, I want to highlight that HRDA improves the performance of DACS with a DeepLabV2 network by +5.5 mIoU and achieves 59.4 mIoU, which is still significantly better than ProDA, which achieves "only" 57.5 mIoU. HRDA with a DeepLabV2 network can even achieve about 63 mIoU when using the training strategies proposed in the DAFormer paper (without using the DAFormer network architecture). Further details will follow in an update on arxiv.
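For readers unfamiliar with the metric, the mIoU scores discussed above are per-class Intersection-over-Union values averaged over all classes. A minimal sketch of the computation from a confusion matrix (the function name and the toy numbers are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """Mean Intersection-over-Union from a class confusion matrix.

    conf[i, j] = number of pixels with ground-truth class i
    that were predicted as class j.
    """
    tp = np.diag(conf).astype(float)          # correctly classified pixels per class
    fp = conf.sum(axis=0) - tp                # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp                # belong to the class, but missed
    denom = tp + fp + fn                      # union = TP + FP + FN
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)
    return float(iou.mean() * 100)            # reported as a percentage, like 59.4 mIoU

# Toy 3-class example (hypothetical pixel counts):
conf = np.array([[50, 5, 0],
                 [3, 40, 2],
                 [0, 4, 46]])
print(round(mean_iou(conf), 1))  # → 82.9
```

Because the average weights every class equally, a gain such as the +5.5 mIoU mentioned above can come from large improvements on a few rare classes rather than uniform gains everywhere.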
We have updated Fig. 1c to differentiate the network architectures and additionally provide results of HRDA with DeepLabV2 in Fig. 1c and Tab. 2. The updated paper is now available on arXiv.
In Figure 1 (c), you compare your method to ProDA and SAC, but your method is not based on DeepLabV2. Is this really a meaningful, fair comparison?
In Table 2, we can see that the mIoU based on DeepLabV2 only reaches 59.4, which is a rather ordinary result.