Closed woshilaixuexide closed 6 years ago
This post might help: https://www.google.com/amp/s/www.researchgate.net/post/Whenever_i_run_my_neural_network_I_get_different_result/amp
If you don't set a seed for the random number generator, it gets a new seed every time you train, so each run starts from a different weight initialization and results differ from run to run. If you want numerically repeatable results (useful for debugging), then you need to fix the seed.
DNNs tend to be pretty sensitive to their parameter initialization, and some seeds just end up not converging for some models and datasets. It's a common issue in machine learning in general.
In practice, people often train a few models with different seeds and find one that converges well.
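To make the idea concrete, here is a minimal sketch in plain Python/NumPy of what "fixing the seed" means; the `set_seed` helper is just an illustration, and deep learning frameworks have their own seeding calls (e.g. `torch.manual_seed` in PyTorch) that you would also need to call for full repeatability:

```python
import random
import numpy as np

def set_seed(seed: int) -> None:
    """Seed Python's and NumPy's RNGs so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)

# Two runs with the same seed produce identical random draws,
# so weight initialization (and thus training) is repeatable.
set_seed(1000)
a = np.random.rand(3)
set_seed(1000)
b = np.random.rand(3)
assert np.allclose(a, b)
```

Without the `set_seed` calls, the two draws would almost surely differ, which is why unseeded training runs can converge on one launch and diverge on the next.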
Hi, can you explain why we need a random seed? I noticed the random seed for ImageNet classification is set to 34. I also trained a model for face verification; without a fixed random seed it sometimes doesn't converge, but when I set the random seed to 1000, it always converges. Can you explain how to choose a random seed for different training tasks?