allenai / allenact

An open source framework for research in Embodied-AI from AI2.
https://www.allenact.org

Option to allow changing / overriding experiment config parameters from command line arguments #201

Closed prithv1 closed 3 years ago

prithv1 commented 4 years ago

Problem

In addition to worker_devices, it'd be great if there was an option to modify / override other parameters in the experiment configs without writing a new experiment config. Maintaining multiple configs for changes in learning rate, number of rollout steps, new datasets (new episode configs), or sensor specifications can become unwieldy when running multiple controlled sweeps, for instance.

Desired solution

Override experiment config parameters if they are specified as command line arguments, and ignore them if not. Monkey patching the experiment configs in the load_config() function in main.py is one possible solution, but I'm not sure that's the right way to go about this.
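For concreteness, here is a minimal sketch of what such monkey patching could look like: parse `key=value` pairs from the command line and set them as attributes on the loaded config class. The `apply_overrides` helper and the `ExperimentConfig` stand-in are hypothetical, not AllenAct's actual API.

```python
# Hypothetical sketch: override experiment config attributes from
# command-line-style "key=value" strings. Names here are illustrative.
import ast


def apply_overrides(config_class, overrides):
    """Set each `key=value` override as an attribute on `config_class`.

    Values are parsed with `ast.literal_eval` so numbers, lists, and
    booleans survive the trip from the command line; anything that fails
    to parse is kept as a raw string.
    """
    for item in overrides:
        key, _, raw = item.partition("=")
        try:
            value = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            value = raw  # fall back to the raw string
        setattr(config_class, key.strip(), value)
    return config_class


class ExperimentConfig:  # stand-in for a real experiment config class
    lr = 3e-4
    num_steps = 128


apply_overrides(ExperimentConfig, ["lr=1e-4", "num_steps=64"])
```

Unspecified parameters keep their class-level defaults, which matches the "ignore if not specified" behavior above.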

jordis-ai2 commented 4 years ago

I'll take a look at a previous implementation using Gin Config in Luca's branch and see how much I can bring back from there...

Lucaweihs commented 4 years ago

Thanks for the suggestion @prithv1. I definitely agree, we had a method for doing this with gin-config but this solution no longer works with the distributed engine.

Do you have any preferences on how these command line arguments should be given to the experiment config? E.g. should def training_pipeline take something like a cmd_line_kwargs: Dict[str, Any] parameter which provides any overrides?
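To make the suggested interface concrete, a rough sketch of a `training_pipeline` accepting a `cmd_line_kwargs` dict of overrides could look like the following. `TrainingPipeline` here is a simplified stand-in dataclass, not AllenAct's actual class.

```python
# Sketch of the proposed interface: training_pipeline takes an optional
# cmd_line_kwargs dict whose entries override the in-config defaults.
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class TrainingPipeline:  # simplified stand-in for the real pipeline class
    lr: float
    num_steps: int


class MyExperimentConfig:
    @classmethod
    def training_pipeline(
        cls, cmd_line_kwargs: Optional[Dict[str, Any]] = None
    ) -> TrainingPipeline:
        defaults: Dict[str, Any] = dict(lr=3e-4, num_steps=128)
        # Command line overrides win; unspecified keys keep their defaults.
        defaults.update(cmd_line_kwargs or {})
        return TrainingPipeline(**defaults)


pipeline = MyExperimentConfig.training_pipeline({"lr": 1e-4})
```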

Lucaweihs commented 4 years ago

@jordis-ai2 before doing any implementation we should probably have a discussion about the best way to do this. I liked the gin-config style strategy as it was very flexible, but it was also quite "magical" and required people to learn yet another set of tools. It might be worth slightly limiting flexibility for the sake of ease of use.

prithv1 commented 4 years ago

> Do you have any preferences on how these command line arguments should be given to the experiment config? E.g. should def training_pipeline take something like a cmd_line_kwargs: Dict[str, Any] parameter which provides any overrides?

I think a specification based on command line arguments like this should be fine. I personally have not used the gin-config setup, but from a quick glance that also seems like something that should do the job. At the very least, I think offering flexibility over worker_devices (I saw that this was already noted as an issue), datasets, sensor arguments, reward configs, and some key training-pipeline arguments (learning rate, rollout steps, etc.) via a setup like this would definitely be useful.
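One possible command line surface for collecting such overrides, sketched with argparse: a repeatable `--param key=value` flag gathered into a dict. The flag name and parsing are assumptions for illustration, not AllenAct's actual CLI.

```python
# Hypothetical CLI sketch: collect repeated --param key=value flags into
# a dict of overrides that could then be handed to the experiment config.
import argparse
from typing import Dict, List


def parse_override_args(argv: List[str]) -> Dict[str, str]:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--param",
        action="append",
        default=[],
        help="Override an experiment config parameter, e.g. --param lr=1e-4",
    )
    args = parser.parse_args(argv)
    overrides: Dict[str, str] = {}
    for item in args.param:
        key, _, value = item.partition("=")
        overrides[key] = value  # values left as strings; caller converts
    return overrides


overrides = parse_override_args(["--param", "lr=1e-4", "--param", "num_steps=64"])
```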

jordis-ai2 commented 4 years ago

Another option is to remove the current code that automatically starts distributed training on a single machine and make the main process the trainer, thus going back to the setup we had before the distributed redesign. I'm open to doing that too, if that's the best solution (it would probably also be the most homogeneous option if we ever plan to provide multi-node training).

Lucaweihs commented 3 years ago

This is now implemented by the (merged) PR #279.