Making24 opened this issue 3 years ago
Hi,
Similar to previous UDG works, we select a model for each target test dataset (e.g., Cityscapes), and, following previous UDA works, we directly use the validation set for this selection.
By the way, the image size in the Mapillary dataset is not fixed, so please consider following DADA (DADA: Depth-aware Domain Adaptation in Semantic Segmentation) when preparing the data loader.
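For example, something along these lines should work. This is a minimal sketch assuming a PyTorch-style `Dataset`; the class name, directory layout, list file, and target size are illustrative placeholders rather than the actual DADA implementation, and the remapping of Mapillary label IDs to the 19 Cityscapes evaluation classes is omitted.

```python
import os

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class MapillaryValSet(Dataset):
    """Minimal Mapillary validation loader that resizes variable-size images."""

    def __init__(self, root, list_path, target_size=(1024, 512)):
        self.root = root
        self.target_size = target_size  # (width, height)
        with open(list_path) as f:
            self.names = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        name = self.names[index]
        image = Image.open(os.path.join(self.root, 'images', name + '.jpg')).convert('RGB')
        label = Image.open(os.path.join(self.root, 'labels', name + '.png'))
        # Mapillary images come in varying resolutions, so resize every sample
        # to a common size: bicubic for the image, nearest for the label map.
        image = image.resize(self.target_size, Image.BICUBIC)
        label = label.resize(self.target_size, Image.NEAREST)
        image = np.asarray(image, np.float32).transpose(2, 0, 1)  # HWC -> CHW
        label = np.asarray(label, np.int64)
        # NOTE: remapping of Mapillary label IDs to the 19 evaluation classes
        # is omitted here for brevity.
        return image, label, name
```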
Thanks.
@jxhuang0508 @Dayan-Guan,
Thanks for your quick reply!
Given your evaluation approach, could you provide all the models used for testing? That is, for each setting (source + backbone), there should be three models, one each for Cityscapes, BDD, and Mapillary.
By the way, could you also provide the training code for IBN-Net, provided that does not violate your company policy?
Thanks!
Hi @jxhuang0508 @Dayan-Guan,
I tested the provided DeepLabV2 and FCN models and used the ADVENT code to evaluate them on the three datasets. I got the same results on Cityscapes, but considerably lower results on BDD and Mapillary (my evaluation loop is sketched below the numbers):
DeepLabV2: BDD = 39.37%, Mapillary = 40.66%, Cityscapes = 44.75%
FCN: BDD = 36.40% (reported 41.2), Mapillary = 38.53% (reported 43.4), Cityscapes = 44.86%
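For reference, this is roughly the evaluation loop I use: a minimal sketch of an ADVENT-style confusion-matrix mIoU over 19 classes, where `model`, `loader`, and `device` are placeholders for my actual script, and the model is assumed to return a single logit tensor of shape (B, C, H, W).

```python
import numpy as np
import torch
import torch.nn.functional as F


def fast_hist(label, pred, num_classes=19):
    # Accumulate a confusion matrix, ignoring pixels outside [0, num_classes).
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)


@torch.no_grad()
def evaluate(model, loader, device, num_classes=19):
    model.eval()
    hist = np.zeros((num_classes, num_classes))
    for image, label, _ in loader:
        output = model(image.to(device))
        # Upsample logits to the label resolution before taking the argmax.
        output = F.interpolate(output, size=label.shape[-2:],
                               mode='bilinear', align_corners=True)
        pred = output.argmax(dim=1).cpu().numpy()
        hist += fast_hist(label.numpy().flatten(), pred.flatten(), num_classes)
    iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    return np.nanmean(iou) * 100  # mIoU in percent
```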
Could you provide the testing code for BDD and Mapillary, as well as the training code?
Thanks!