To run the train.py examples, I found I needed to change the calls that passed cfg.seed so that a uniformly distributed random seed is drawn explicitly. Perhaps this was due to a failure on my part to properly align versions in my conda environment, since I also found I had to install the omegaconf module from GitHub source.
Here is the change I made:
diff --git a/pl_runner.py b/pl_runner.py
index 2979a89..5c84a63 100644
--- a/pl_runner.py
+++ b/pl_runner.py
@@ -3,10 +3,12 @@ import pytorch_lightning as pl
 def pl_train(cfg, pl_model_class):
+    from random import randint
     if cfg.seed is not None:
-        torch.manual_seed(cfg.seed)
+        seed = randint(cfg.seed[1], cfg.seed[2])
+        torch.manual_seed(seed)
         if torch.cuda.is_available():
-            torch.cuda.manual_seed(cfg.seed)
+            torch.cuda.manual_seed(seed)
     model = pl_model_class(cfg.model, cfg.dataset, cfg.train)
     if 'pl' in cfg and 'profile' in cfg.pl and cfg.pl.profile:
         # profiler=pl.profiler.AdvancedProfiler(output_filename=cfg.train.profiler),
diff --git a/requirements.txt b/requirements.txt
index f06f0ed..2882772 100644
--- a/requirements.txt
+++ b/requirements.txt
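For reference, here is a minimal, self-contained sketch of what the modified seeding logic does outside the diff. It assumes cfg.seed is configured as a list whose entries at indices 1 and 2 hold the lower and upper bounds of the seed range; that layout, and the example bounds, are my assumptions rather than anything defined by the repository.

from random import randint

import torch
from omegaconf import OmegaConf

# Hypothetical config layout: entries 1 and 2 of cfg.seed give the range
# from which the actual seed is drawn.
cfg = OmegaConf.create({"seed": [42, 0, 10000]})

if cfg.seed is not None:
    # Draw a uniformly distributed seed from the configured range
    # instead of seeding with a fixed cfg.seed value.
    seed = randint(cfg.seed[1], cfg.seed[2])
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)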