Closed wj-zhang closed 4 years ago
Does a ~0.1 difference really matter?
I just want to confirm that the released pre-trained models are correct and that my evaluation is right. Also, may I report the test results I obtained from the released models in my manuscript?
I think the model I uploaded is correct. Since I repeated the same experiments several times, the one I uploaded to GitHub might be slightly different from the one I used to report the results in the paper. I still suggest using the numbers from the paper if the difference is only ~0.1. By the way, the results can also be influenced by the order of softmax and upsampling in the evaluation code. You can try swapping their order; I'm sure it will give you a slightly different result.
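The order sensitivity mentioned above comes from softmax being nonlinear: averaging logits (what bilinear upsampling does between neighboring pixels) and then applying softmax is not the same as applying softmax per pixel and then averaging the probabilities. A minimal NumPy sketch, with made-up logits for two neighboring pixels (the values `a` and `b` are hypothetical, not taken from this repo's evaluation code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Class logits for two neighboring pixels (hypothetical values).
a = np.array([2.0, 0.0, -1.0])
b = np.array([-1.0, 3.0, 0.5])

# Bilinear upsampling creates an in-between pixel by averaging neighbors.
# Order 1: upsample (average) the raw logits, then apply softmax.
p_upsample_first = softmax((a + b) / 2)
# Order 2: apply softmax per pixel, then upsample (average) the probabilities.
p_softmax_first = (softmax(a) + softmax(b)) / 2

print(p_upsample_first)
print(p_softmax_first)
print(np.allclose(p_upsample_first, p_softmax_first))  # False
```

The two orderings give genuinely different probability maps at object boundaries, which is enough to shift mIoU by a fraction of a point even with an identical model.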
Got it. Thank you so much for your reply and suggestions!
Hi, I am sorry to disturb you again. I was trying to evaluate the pre-trained models provided by this project, but I ran into some difficulties. Could you give me some suggestions? Thanks in advance!
You provide four pre-trained models in the README file: GTA5_deeplab, GTA5_VGG, SYNTHIA_deeplab, and SYNTHIA_VGG. In my understanding, I should be able to reproduce the paper's results by running evaluation.py on the test dataset. However, the results I got are as follows:

- GTA5→Cityscapes, DeepLab (48.5 in paper): exactly the same as the paper
- GTA5→Cityscapes, VGG (41.3 in paper): slightly lower than the paper
- SYNTHIA→Cityscapes, DeepLab (51.4 in paper): slightly lower than the paper
- SYNTHIA→Cityscapes, VGG (39.0 in paper): slightly lower than the paper