Previous experiments had errors due to incorrectly reading or setting hyperparameter values (e.g., typos). In addition, per Abel's recommendation, we should sort experiments by __date__/__experiment_id__/__run_id__/.
Here is what we plan to do to ensure hyperparameter reading is more robust:
[x] removing default hyperparameter values and dictionaries, and explicitly passing variables instead (see the first sketch after this list)
[x] manually testing that hyperparameters are read and set correctly within run.py
[ ] adding some testing to gym_examples/battery_env.py to ensure the hyperparameters are set properly (see the test sketch after this list)
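
As a rough illustration of the "no defaults" policy, the sketch below collects hyperparameters in a frozen dataclass with no default values, so a missing or misspelled key in a settings dictionary fails immediately instead of silently falling back to a default. The hyperparameter names (learning_rate, gamma, batch_size, num_steps) are placeholders, not the project's actual settings.

```python
# Minimal sketch of explicit hyperparameter passing (hypothetical names/values).
# There are no defaults to fall back on, so a typo in a settings file raises an
# error instead of silently using a default value.
from dataclasses import dataclass


@dataclass(frozen=True)
class Hyperparameters:
    learning_rate: float
    gamma: float
    batch_size: int
    num_steps: int


def load_hyperparameters(settings: dict) -> Hyperparameters:
    # Explicit keyword passing: a missing or misspelled key raises a KeyError here.
    return Hyperparameters(
        learning_rate=settings["learning_rate"],
        gamma=settings["gamma"],
        batch_size=settings["batch_size"],
        num_steps=settings["num_steps"],
    )
```

For the unchecked item, a test along the following lines could be added. The class name BatteryEnv and its constructor arguments (capacity, efficiency) are assumptions about the interface in gym_examples/battery_env.py, not the real one.

```python
# Hypothetical pytest-style check that the env stores the hyperparameters it is
# given; attribute names here are assumptions, not the actual BatteryEnv API.
import pytest

from gym_examples.battery_env import BatteryEnv  # assumed class name


@pytest.mark.parametrize("capacity, efficiency", [(10.0, 0.9), (5.0, 0.8)])
def test_battery_env_uses_passed_hyperparameters(capacity, efficiency):
    env = BatteryEnv(capacity=capacity, efficiency=efficiency)
    # Fail if the env falls back to defaults instead of the values we passed.
    assert env.capacity == capacity
    assert env.efficiency == efficiency
```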
For organizing experiments:
[x] Save experiments into the folder structure __date__/__experiment_id__/__run_id__/ and save the settings in the same layout (see the directory sketch after this list)
[x] Automate settings creation to remove human error (covered in the same sketch below)
[x] Create script files (by date and experiment_id) that run all experiments, with the option to break them into sub-experiments if we want to use multiple computers (see the script-generation sketch after this list)
[x] Create work, validate, and full modes: work runs experiments for a short period to ensure the code is correct, validate runs one seed to get a "feel" for the algorithm's performance, and full runs the remaining 9 seeds for a more statistically sound representation (see the mode sketch after this list)
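
The directory sketch below shows one way the __date__/__experiment_id__/__run_id__/ layout and the automated settings dump could fit together; the function names and the settings.json filename are illustrative, not the project's actual API.

```python
# Sketch of the __date__/__experiment_id__/__run_id__/ layout plus automated
# settings writing, so no hand-edited settings files are involved.
import json
from datetime import date
from pathlib import Path


def make_run_dir(base: Path, experiment_id: str, run_id: str) -> Path:
    run_dir = base / date.today().isoformat() / experiment_id / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    return run_dir


def save_settings(run_dir: Path, settings: dict) -> None:
    # Settings are generated programmatically and written next to the results,
    # removing the hand-editing step where typos can creep in.
    (run_dir / "settings.json").write_text(json.dumps(settings, indent=2, sort_keys=True))
```

The script-generation sketch below splits the generated settings files into chunks so different machines can each run a sub-experiment. The "python run.py --settings ..." command line is an assumption about how run.py is invoked.

```python
# Sketch of generating per-experiment shell scripts, optionally split into
# sub-experiments for multiple computers.
from pathlib import Path


def write_run_scripts(settings_files: list[Path], out_dir: Path, n_chunks: int = 1) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    chunk_size = -(-len(settings_files) // n_chunks)  # ceiling division
    for i in range(n_chunks):
        chunk = settings_files[i * chunk_size:(i + 1) * chunk_size]
        lines = [f"python run.py --settings {f}" for f in chunk]  # assumed CLI
        script = out_dir / f"run_part_{i}.sh"
        script.write_text("#!/bin/bash\n" + "\n".join(lines) + "\n")
        script.chmod(0o755)
```

Finally, the mode sketch maps work/validate/full to run budgets and seeds as described above (one validate seed plus the remaining 9 full seeds); the step counts are placeholders.

```python
# Sketch of how the work/validate/full modes could map to budgets and seeds.
MODES = {
    "work": {"num_steps": 1_000, "seeds": [0]},                   # quick correctness check
    "validate": {"num_steps": 100_000, "seeds": [0]},             # one seed to get a feel
    "full": {"num_steps": 100_000, "seeds": list(range(1, 10))},  # remaining 9 seeds
}


def runs_for_mode(mode: str):
    cfg = MODES[mode]
    return [(seed, cfg["num_steps"]) for seed in cfg["seeds"]]
```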