Closed JeremyLinky closed 3 years ago
Hi, unfortunately, I was not able to get complete reproducibility by setting the random seeds. There could be multiple reasons for this, either within Habitat or within our own code. However, in experiments we ran post-acceptance, we found that the trends across the different models did not change when we trained and evaluated with 3 random seeds.
Thanks for the reply!!
Hi, I noticed that you use the following code to seed the random number generators so that the results are deterministic:
```python
random.seed(config.TASK_CONFIG.SEED)
np.random.seed(config.TASK_CONFIG.SEED)
torch.manual_seed(config.TASK_CONFIG.SEED)
```
But in fact, when I run the same experimental setup twice, the results for the same checkpoint (e.g. `ckpt.22.pth`) are not identical and can even differ substantially. Could you tell me how to ensure that two runs with the same experimental setup produce consistent results? Thanks a lot!
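For what it's worth, seeding `random`, NumPy, and `torch.manual_seed` alone does not cover CUDA generators or cuDNN's non-deterministic kernel selection, which is a common source of run-to-run divergence in PyTorch training. Below is a minimal sketch of a fuller seeding routine; the helper name `seed_everything` is my own, and this does not guarantee the repository's training loop becomes fully deterministic (data-loader worker ordering and some CUDA ops can still vary):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed all common RNGs and request deterministic cuDNN kernels.

    Extends the snippet above with CUDA seeding and cuDNN settings,
    which the original three calls leave non-deterministic.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU generator (and default CUDA generator)
    torch.cuda.manual_seed_all(seed)  # all GPU generators, for multi-GPU runs
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning


# Two calls with the same seed now make each generator draw identical values.
seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True
```

Even with all of this, exact bitwise reproducibility across machines or driver versions is not guaranteed, which may explain why only the trends (rather than the exact numbers) were stable across seeds in the authors' experiments.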