BIT-DA / SePiCo

[TPAMI 2023 ESI Highly Cited Paper] SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation https://arxiv.org/abs/2204.08808

about performance #11

Closed Renp1ngs closed 1 year ago

Renp1ngs commented 1 year ago

Thanks for your great work; I just have a minor question. Are the results reported in your paper averaged over multiple random seeds? If not, is there an averaged result for experiment 4 (gta->cityscapes and synthia->cityscapes based on SegFormer) that I can refer to? Ideally this result would be a fair comparison with DAFormer, i.e., the model is evaluated 1/4 as often as in your released code and the training resolution is 512x512. I would like to cite this result in my paper for comparison.

BinhuiXie commented 1 year ago

Hi @Renp1ngs

Thanks for your interest in the paper. The results are the average of three random seeds (42, 76, and 2022) and the training resolution is 640x640.
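For reference, averaging over seeds simply means running the same experiment with each seed and reporting the mean final mIoU. A minimal sketch (the mIoU values below are placeholders, not the paper's numbers):

```python
# Hypothetical final mIoU per seed (placeholder values for illustration only)
results = {42: 70.1, 76: 70.4, 2022: 69.8}

# Reported number = mean over the three runs
mean_miou = sum(results.values()) / len(results)
print(f"mean mIoU over seeds {sorted(results)}: {mean_miou:.2f}")
```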

Pardon, I didn't quite catch the question about the evaluation count being 1/4 of that in the released code.

Renp1ngs commented 1 year ago

Thanks for your quick reply.

> I didn't quite catch the question about the evaluation count being 1/4 of that in the released code.

In DAFormer config:

evaluation=dict(
    interval=4000,
    metric="mIoU" )

But in SePiCo config:

evaluation=dict(
    interval=1000,
    metric="mIoU" )

That means your model is evaluated four times as often. This doesn't seem like a fair comparison.

Could you provide the training logs for your experiment 4 (syn->cs and gta->cs)? I would like to observe the loss and performance over the whole training process.

BinhuiXie commented 1 year ago

I see.

However, the interval does not matter, since we just keep the last checkpoint (last.pth) for evaluation :rofl:
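In other words, since only the final checkpoint is reported, the evaluation interval only changes how often intermediate mIoU is logged. For anyone who still wants to match DAFormer's schedule, a minimal config sketch in the mmseg/mmcv style (field names follow those conventions; this is an assumed override, not the repo's shipped config):

```python
# Align the evaluation schedule with DAFormer's (every 4000 iters
# instead of every 1000) so both models are evaluated equally often.
evaluation = dict(
    interval=4000,   # matches DAFormer's config
    metric="mIoU",
)

# Keep only the latest checkpoint; the reported number comes from it,
# so the eval interval does not affect the final result.
checkpoint_config = dict(
    interval=4000,
    max_keep_ckpts=1,
)
```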

I may need to find the training log, and I will contact you if there is any news.

Renp1ngs commented 1 year ago

Thanks! Have a good day!