Closed: hurricane2018 closed this issue 6 years ago
I mean, if we train for 10 epochs we get one precision number, and if we train for 20 epochs we get another. How did you handle this?
How many epochs did you train for when evaluating on a dataset?
Hi! To get exactly the same results from run to run, please set torch.backends.cudnn.deterministic = True and num_workers=0. We trained for 10 epochs and evaluated the results after the last one. Also, take a look at the adjust_learning_rate function, which determines the learning rate policy. If you have any other questions, don't hesitate to ask.
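For reference, a minimal sketch of the kind of reproducibility setup being described, assuming a standard PyTorch training script; the seed value and the commented DataLoader call are illustrative placeholders, not the repository's actual code:

```python
import random
import numpy as np
import torch

def set_deterministic(seed=0):
    # Fix every RNG so weight init, shuffling and augmentation repeat exactly.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force cuDNN to select deterministic kernels (can be slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_deterministic(seed=0)

# num_workers=0 keeps data loading in the main process, so the order of
# samples does not depend on worker-process scheduling.
# `train_dataset` below is a placeholder for whatever Dataset the script builds:
# train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1024,
#                                            shuffle=True, num_workers=0)
```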
Yeah, thank you for your reply.
But I get different results every time. I mean, I can get one result this morning and a slightly different result tomorrow; every time I run your code, the result changes.
So did you run your program several times and average the results?
We didn't average the results during evaluation. To get deterministic results from run to run, just add torch.backends.cudnn.deterministic = True to your code (check the last commit; I've added a setup that gives exactly the same results from run to run).
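One quick way to confirm that this actually makes runs reproducible is to run the same short training loop twice with the same seed and compare the weights. The snippet below is a toy sanity check, not the repository's training code; the Linear layer stands in for the real descriptor network:

```python
import torch
import torch.nn as nn

def run_once(seed=0):
    # Toy stand-in for a training run: with the seed fixed and cuDNN forced
    # into deterministic mode, the resulting weights come out identical.
    torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    model = nn.Linear(128, 128)          # placeholder for the real network
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(10):
        x = torch.randn(32, 128)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

state_a = run_once(seed=0)
state_b = run_once(seed=0)
print("runs identical:",
      all(torch.equal(state_a[k], state_b[k]) for k in state_a))
```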
The test accuracy still changes every time. Did you report the average precision over the test data?