facebookresearch / OccupancyAnticipation

This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020.

How to ensure that the model results of two runs are consistent when using the same experimental setup? #26

Closed JeremyLinky closed 3 years ago

JeremyLinky commented 3 years ago

Hi, I noticed that you use the following calls to seed the random number generators so that the results are deterministic: `random.seed(config.TASK_CONFIG.SEED)`, `np.random.seed(config.TASK_CONFIG.SEED)`, and `torch.manual_seed(config.TASK_CONFIG.SEED)`. However, when I run the same experimental setup twice, the results for the same checkpoint (e.g., ckpt.22.pth) are not the same and can even differ considerably. Could you tell me how to ensure that two runs with the same experimental setup produce consistent results? Thanks a lot!
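For reference, a minimal sketch of how these seeding calls are typically combined, assuming `config.TASK_CONFIG.SEED` is the seed field quoted above; the CUDA seeding line is a common addition for GPU runs and is not part of the snippet quoted from the repository:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs with a single value."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # Assumption: also seed all CUDA devices when training on GPU.
    torch.cuda.manual_seed_all(seed)


# e.g., seed_everything(config.TASK_CONFIG.SEED)
```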

srama2512 commented 3 years ago

Hi, unfortunately, I was not able to get complete replicability by setting the random seeds. There could be multiple reasons for this, either within habitat or within our own code. However, in experiments we ran post-acceptance, we found that the trends across the different models did not change when we trained and evaluated with 3 random seeds.
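As a sketch of what one might additionally try on the PyTorch side, the snippet below shows general determinism switches; these are not settings confirmed to be used or sufficient in this repository, and the simulator or parallel environment workers can still introduce run-to-run variance:

```python
import os

import torch

# Force cuDNN to pick deterministic kernels and disable autotuning,
# which can otherwise select different algorithms across runs.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# In newer PyTorch versions, raise an error on known nondeterministic ops.
# Some CUDA ops also require this cuBLAS workspace setting to be deterministic.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch.use_deterministic_algorithms(True)
```

Even with all of these set, multi-process rollout collection and the habitat simulator itself can remain sources of variance, which is consistent with the reply above.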

JeremyLinky commented 3 years ago

Thanks for the reply!!