Open max010515 opened 3 years ago
I forgot to mention it earlier, but of course a big thanks to anyone who can help :)
@max010515 Do you mind uploading a complete runnable demo?
Hi @Oceania2018, sorry for the late reply. I have recently run more tests using the latest version of the project, but I still cannot get reproducible results from successive model trainings. As requested, I have included a runnable demo in both .NET and Python. For the Python demo, calling the run() function in the .py file shows that setting a seed yields identical results across successive trainings. However, it doesn't work in .NET, which can be verified with the other demo.
Reproducible_results.zip Reproducible_results_python.zip
Thanks a lot in advance for any help. Let me know if you need more information.
Hi, I'm trying to get reproducible results from successive trainings of the same sequential model. My regression model is the following:
```csharp
var model = keras.Sequential();
model.add(layers.Dense(64, activation: "relu", input_shape: new TensorShape(8)));
model.add(layers.Dense(64, activation: "relu"));
model.add(layers.Dense(64, activation: "relu"));
model.add(layers.Dense(64, activation: "relu"));
model.add(layers.Dense(1));
```
I have tried to set a seed for both NumPy and TensorFlow as follows:
```csharp
np.random.seed(1);
tf.set_random_seed(1);
```
Each time I train the model I get different results. It seems that the tensor initialization differs between runs.
I have tried similar code in Python, and it works fine when the seeds are applied as below:

```python
from numpy.random import seed
seed(1)
import tensorflow
tensorflow.random.set_seed(2)
```
For each run I get the same results, which is what I expect for my use case.
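For illustration, the effect of seeding can be checked even without TensorFlow: re-seeding a NumPy generator replays identical random draws, which is the same principle `tensorflow.random.set_seed` applies to TensorFlow's weight initializers. This is only a minimal sketch of the idea (the `draw_weights` helper and its shapes are hypothetical, not part of either demo):

```python
import numpy as np

def draw_weights(seed):
    # Re-seeding before each "training run" makes the random
    # initial weights identical across runs.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((8, 64))  # e.g. a Dense(64) kernel for 8 inputs

w1 = draw_weights(1)
w2 = draw_weights(1)  # same seed -> identical initial weights
w3 = draw_weights(2)  # different seed -> different initial weights

print(np.array_equal(w1, w2))  # True
print(np.array_equal(w1, w3))  # False
```

If the .NET binding does not propagate the seed to the kernel initializers the same way, the initial weights (and hence the trained metrics) will differ on every run, which would match the behaviour described above.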
Is there anything I've missed in my model and/or in the initialization of the libraries? Has anyone already succeeded in getting reproducible results?