@micronet-challenge-submissions
Dear Organizers, we have just finalized our work!

Regarding reproducibility: we ran our pipeline multiple times and found that fixing the same random seeds and library versions is not enough when the platform differs. In some environments the results reproduce perfectly, but in others they do not. Unfortunately, our training also shows considerable variance once the pruning ratio exceeds 50%.

To show that our results are reproducible, we would like to offer verification in the following two ways:
1) Our README.md contains a link to checkpoints for all of our steps.
2) We can provide SSH access to our server environment if the challenge organizers need it.

Additionally, our PyTorch version is 1.1.0; we have already confirmed that a different version can produce different results even with the same seed.
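For reference, the seed-fixing described above can be sketched as follows. This is a minimal illustration, not our exact setup: `set_seed` is a hypothetical helper, and the PyTorch-specific calls are noted in its docstring rather than executed, since they depend on the installed version and hardware.

```python
import random
import numpy as np

def set_seed(seed: int = 42) -> None:
    """Fix the seeds of Python's and NumPy's random number generators.

    In a PyTorch run one would additionally call torch.manual_seed(seed)
    and torch.cuda.manual_seed_all(seed), and set
    torch.backends.cudnn.deterministic = True -- but, as noted above,
    even this does not guarantee identical results across platforms
    or across PyTorch versions.
    """
    random.seed(seed)
    np.random.seed(seed)

# Same seed on the same platform -> identical draws.
set_seed(42)
first = np.random.rand(3)
set_seed(42)
second = np.random.rand(3)
assert np.allclose(first, second)
```

This is why we also provide checkpoints for every step: the seed only pins down randomness within a single environment, not across different ones.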
Thank you! TEAM OSI-AI