This PR adds files used to perform single-objective optimization for the once-through transition to advanced reactors. The optimization is performed by coupling Cyclus with Dakota. New files include:
- Dakota input files in `input/haleu/optimization/once-through`; each file is named for the metric it minimizes.
- Driver files in `input/haleu/optimization/once-through` that create Cyclus input files and return results to Dakota; each is named for the problem it drives.
- A Cyclus input file template in `input/haleu/optimization/once-through`, used in both problems.
- `input/haleu/optimization/once-through/soga_tuning/Tuning_results.ipynb`: notebook to examine the performance of each set of hyperparameters considered while tuning.
- `input/haleu/optimization/once-through/soga_tuning/*.csv`: CSV files to track the hyperparameters and model parameters for the coarse and fine tuning sets.
- `input/haleu/optimization/once-through/soga_tuning/run_multiple_dakota.sh`: shell script to easily loop over the tuning cases to run.
- `input/haleu/optimization/once-through/soga_tuning/soga_tuning_driver.py`: driver file to create Cyclus input files and return information to Dakota.
- `input/haleu/optimization/once-through/soga_tuning/soga_tuning_input.xml.in`: Cyclus input file template for tuning.
- `input/haleu/optimization/once-through/soga_tuning/soga_tuning_template.in`: template for the Dakota input files, allowing the hyperparameters of interest to vary in each input.
- `input/haleu/optimization/once-through/soga_tuning/tuning.py`: Python script that loops over some hyperparameter values, randomly selects others, applies them to the Dakota input file template, and creates a CSV file to track the results.
- `scripts/output_metrics.py`: changed the `run_cyclus` function to run Cyclus only if the given output SQLite file doesn't already exist. This improves the parallelization of the Dakota runs by preventing multiple Cyclus runs from writing to the same output file at the same time, and prevents the output file from being deleted while another run is writing to it. Each Cyclus run is deterministic, with no randomness, so running the same input file with the same parameters always yields the same results. Reusing results from previous runs of an input file therefore does not affect the metrics returned to Dakota.