Closed ygjwd12345 closed 3 years ago
Hi, do you follow the same command suggested in the readme.md? If yes,
- check that the versions of the packages you use, e.g. torch and torchvision, are consistent with those we used
- check that your source-only results are consistent with ours in results.md
Generally, if you can reproduce the SHOT results for closed-set UDA, you should be able to obtain results similar to ours for the partial-set and open-set settings as well.
Best,
Tim
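The package-version check suggested above can be sketched with a small helper. This is a hypothetical illustration, not code from the SHOT repository, and the version strings below are placeholders rather than the versions actually pinned in readme.md:

```python
# Hypothetical helper: compare an installed package version against the
# version used in the reference environment, ignoring local build tags
# such as "+cu101" and (by default) the patch number.
def versions_match(installed: str, expected: str, level: int = 2) -> bool:
    """Return True if the first `level` components of the two version
    strings agree, e.g. '1.6.0+cu101' matches '1.6.1' at level=2."""
    strip = lambda v: v.split("+")[0].split(".")[:level]
    return strip(installed) == strip(expected)

# Illustrative version strings only (not the repo's pinned versions):
print(versions_match("1.6.0+cu101", "1.6.1"))  # True: same major.minor
print(versions_match("1.7.0", "1.6.0"))        # False: minor differs
```

In practice the installed versions would come from `torch.__version__` and `torchvision.__version__`, and a minor-version mismatch is the kind of discrepancy worth ruling out first.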
Yes, I followed the command in readme.md, and my closed-set results match yours. However, the open-set results are terrible.
Hi, it seems that the variance you reported for some tasks like A-R is quite large. I do not know what is wrong with your results. As far as I know, our results have been reproduced by other people on different machines. Maybe you should check the package versions again. Are your source-only results also similar to ours in results.md?
I tried all the seeds, but there is still a big gap between my results and the paper's. Is there any other trick?
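Since the discussion turns on seeds, the basic reproducibility property worth verifying is that two runs with the same seed produce identical draws. A minimal sketch using only the standard library (in the actual PyTorch code the same idea would apply via `torch.manual_seed` and `numpy.random.seed`, which are assumed here rather than shown):

```python
import random

def run(seed: int, n: int = 5) -> list:
    """Toy stand-in for a training run: draw n numbers from a seeded RNG.
    A real run would also need torch/numpy seeds and deterministic cuDNN
    settings to be fully repeatable."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(run(2020) == run(2020))  # True: same seed, identical sequence
print(run(2020) == run(2021))  # different seeds give different draws
```

If same-seed runs on the same machine already diverge, the gap is coming from nondeterminism (e.g. GPU ops) rather than from the seed choice itself.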