A tool for estimating the future energy use, carbon emissions, and capital and operating cost impacts of energy efficiency and demand flexibility technologies in the U.S. residential and commercial building sectors.
The user would run the workflow pointing to a directory of yml files; Scout would run ecm_prep.py and run.py on each of those config files and store results in distinct folders at ./results/<custom_config_name>/....
Features:
One command to run n yml files: python scout/run_batch.py --yaml_dir <./my_ymls>, which would call ./scout/run_workflow.py; alternatively, bring a "batch" capability into run_workflow.py.
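As a starting point, the batch entry script would need to discover the config files under --yaml_dir. A minimal sketch of that discovery step, assuming a helper named find_configs (hypothetical, not an existing Scout function):

```python
from pathlib import Path


def find_configs(yaml_dir):
    """Return a sorted list of .yml config files in yaml_dir.

    Raises FileNotFoundError when the directory holds no configs, so the
    batch script fails fast instead of silently running nothing.
    """
    configs = sorted(Path(yaml_dir).glob("*.yml"))
    if not configs:
        raise FileNotFoundError(f"No .yml config files found in {yaml_dir}")
    return configs
```

The batch script would then loop over the returned paths, invoking the workflow once per config and routing outputs to ./results/<custom_config_name>/.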
Lump together yml files that share common ecm_prep arguments (excluding the ECM downselect fields) and run them as a single ecm_prep.py call. This will create a superset of ECMs in run_setup.json.
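The lumping step amounts to bucketing parsed configs by their non-downselect ecm_prep arguments. A sketch under stated assumptions: the field names in DOWNSELECT_FIELDS are placeholders (the real config schema may use different keys), and argument values are assumed hashable:

```python
from collections import defaultdict

# Hypothetical names for the ECM downselect filter fields, which must be
# excluded when deciding whether two configs can share one ecm_prep call.
DOWNSELECT_FIELDS = {"ecm_files", "ecm_files_regex", "ecm_packages"}


def group_by_prep_args(configs):
    """Group configs that share all ecm_prep arguments other than the
    downselect fields; each group needs only one ecm_prep.py call, which
    produces a superset of the ECMs any member config would downselect.

    `configs` maps config name -> dict of ecm_prep arguments.
    Returns a dict mapping a hashable key of shared args -> list of names.
    """
    groups = defaultdict(list)
    for name, args in configs.items():
        key = frozenset(
            (k, v) for k, v in args.items() if k not in DOWNSELECT_FIELDS
        )
        groups[key].append(name)
    return dict(groups)
```

Each group would then run ecm_prep.py once, with the per-config downselect applied later at the run.py stage.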
The above would require changes to run.py to reference the yml's ECMs or some yml-specific list of ECMs.
Maybe: add a new method for extracting the list of ECMs (regex + an ECM list argument), call it in the batch script, and pass the result to run.py or write it to run_setup.json.
What happens if some yml files hit an exception but keep running (https://github.com/trynthink/scout/issues/255) once we get to run.py? We may want an approach for storing exceptions and the ability to skip over the configs that fail, e.g., a run_setup_batch.json that stores active/inactive ECMs plus "exceptions".
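One possible shape for that bookkeeping, sketched below: a run_setup_batch.json with "active"/"inactive" lists plus an "exceptions" map. The layout and helper names are hypothetical, not the existing run_setup.json format:

```python
import json


def record_exception(state, config_name, err):
    """Log a failed config's error under "exceptions" and move the config
    from the "active" to the "inactive" list so run.py skips it."""
    state.setdefault("exceptions", {})[config_name] = str(err)
    if config_name in state.get("active", []):
        state["active"].remove(config_name)
        state.setdefault("inactive", []).append(config_name)
    return state


def write_state(state, path="run_setup_batch.json"):
    """Persist the batch state so a later run.py pass can consult it."""
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
```

The batch loop would wrap each per-config ecm_prep call in a try/except, call record_exception on failure, and continue to the next config rather than aborting the whole batch.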
Allow batch runs using > 1 yaml config file. This will likely depend on https://github.com/trynthink/scout/issues/365