Closed: sbugosen closed this 7 months ago
To run the experiment for the high-reliability NN (but with higher objective error):
python run_nn_sweep.py --fname=nn_experiment_high_rel.csv --surrogate_fname=keras_surrogate_high_rel
To run the experiment for the low-reliability NN (with low objective error):
python run_nn_sweep.py --fname=nn_experiment_low_rel.csv --surrogate_fname=keras_surrogate_low_rel
To validate results:
python validate_sweep_results.py data/nn_experiment_high_rel.csv --baseline-fpath=data/implicit_experiment.csv
python validate_sweep_results.py data/nn_experiment_low_rel.csv --baseline-fpath=data/implicit_experiment.csv
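As a rough sketch of what this validation step amounts to (the column names, tolerance, and helper functions below are illustrative assumptions, not the actual schema of `validate_sweep_results.py`), comparing a sweep CSV against a baseline CSV might look like:

```python
# Hedged sketch: flag sweep instances whose objective deviates from the
# baseline beyond a relative tolerance. "instance" and "objective" are
# assumed column names, not the scripts' real CSV schema.
import csv
import io

baseline_csv = """instance,objective
0,100.0
1,200.0
"""
sweep_csv = """instance,objective
0,100.0000005
1,250.0
"""

def load(text):
    # Map each instance id to its objective value
    return {row["instance"]: float(row["objective"])
            for row in csv.DictReader(io.StringIO(text))}

def validate(sweep, baseline, reltol=1e-4):
    """Return instances whose objective deviates from the baseline."""
    return [k for k in sweep
            if abs(sweep[k] - baseline[k]) > reltol * max(1.0, abs(baseline[k]))]

suspect = validate(load(sweep_csv), load(baseline_csv))
print(suspect)  # only the instance with the large deviation is flagged
```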
@Robbybp
python run_nn_sweep.py --fname=nn_experiment.csv
python validate_sweep_results.py data/nn_experiment.csv --baseline-fpath=data/implicit_experiment.csv
Alternatively, fullspace_experiment.csv can be used as the baseline.
python analyze_results.py data/nn_experiment.csv --validation-fpath=data/nn_experiment-validation.csv --feastol=1e-7
python plot_convergence.py data/nn_experiment.csv --validation-fpath=data/nn_experiment-validation.csv --feastol=1e-7
With feastol = 1e-7, this will paint black some instances that reached an optimal termination condition in run_nn_sweep.py but showed too high an infeasibility in validate_sweep_results.py.
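The "painted black" rule described above can be sketched as a simple filter (the field names and sample values here are illustrative assumptions, not the scripts' actual CSV schema):

```python
# Hedged sketch: instances that terminated "optimal" in the sweep but
# whose validated infeasibility exceeds feastol are treated as failures.
feastol = 1e-7

results = [
    {"instance": 0, "status": "optimal", "infeasibility": 3e-9},
    {"instance": 1, "status": "optimal", "infeasibility": 2e-6},
    {"instance": 2, "status": "maxIterations", "infeasibility": 1e-3},
]

def painted_black(row, feastol):
    # Optimal per the solver, but validation found infeasibility above feastol
    return row["status"] == "optimal" and row["infeasibility"] > feastol

black = [r["instance"] for r in results if painted_black(r, feastol)]
print(black)  # only the "optimal but infeasible" instance is painted black
```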
Pending for me:
- Update validate_sweep_results.py to only validate instances with an optimal termination condition.
- nn_tuning_training.py
- R2_values.csv
@Robbybp - Question
To validate the results, we are using either fullspace_experiment.csv or implicit_experiment.csv as the baseline.
What if I run the full space optimization but with a very good initialization, and use that dataset (which presumably will contain 64 optimal instances) as a baseline? Let me know if you want me to do this.
This is a good idea. Only note that if we want to include these results in the paper, we will need to explain the two different initialization strategies.
I just pushed a dataset named fullspace-optimal64-baseline.csv to the data directory of this branch. The initialization I used was the following:
Finally, results provided by item 2 are passed to the make_optimization_model function.
This dataset would only be used in the validate_sweep_results.py script.
@Robbybp
Just updated this branch with the corrected validate_sweep_results.py and run_nn_sweep.py files. The new datasets are in the data directory.
Experiments can be run as:
python run_nn_sweep.py --fname=nn_experiment.csv
python validate_sweep_results.py data/nn_experiment.csv --baseline-fpath=data/fullspace-optimal64-baseline.csv
python analyze_results.py data/nn_experiment.csv --validation-fpath=data/nn_experiment-validation.csv --feastol=1e-7
python plot_convergence.py data/nn_experiment.csv --validation-fpath=data/nn_experiment-validation.csv --feastol=1e-7
Pending:
- nn_tuning_training.py
- R2_values.csv
When I run python nn_flowsheet.py, I get the following error:
ValueError: File format not supported: filepath=/Users/rbparker/research/collab/surrogate-vs-implicit/svi/auto_thermal_reformer/results/keras_surrogate_high_rel. Keras 3 only supports V3 `.keras` files and legacy H5 format files (`.h5` extension). Note that the legacy SavedModel format is not supported by `load_model()` in Keras 3. In order to reload a TensorFlow SavedModel as an inference-only layer in Keras 3, use `keras.layers.TFSMLayer(/Users/rbparker/research/collab/surrogate-vs-implicit/svi/auto_thermal_reformer/results/keras_surrogate_high_rel, call_endpoint='serving_default')` (note that your `call_endpoint` might have a different name).
I will look into this, but posting here in case it rings a bell for you.
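Following the workaround suggested in the error message itself, a legacy TensorFlow SavedModel directory can be wrapped as an inference-only layer in Keras 3 via `keras.layers.TFSMLayer`. A minimal sketch (the path and endpoint name are placeholders; the real `call_endpoint` may differ, as the message notes):

```python
def load_savedmodel_surrogate(path, endpoint="serving_default"):
    """Wrap a legacy TF SavedModel directory as an inference-only Keras 3 layer.

    `path` and `endpoint` are placeholders; inspect the model with
    `saved_model_cli show --dir <path> --all` to find the real endpoint name.
    Alternatively, pinning tensorflow<2.16 (Keras 2) keeps load_model()
    working with the legacy SavedModel format.
    """
    from keras.layers import TFSMLayer  # Keras 3 only
    return TFSMLayer(path, call_endpoint=endpoint)

# Hypothetical usage (requires a real SavedModel directory):
# surrogate = load_savedmodel_surrogate("results/keras_surrogate_high_rel")
```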
pip list | grep tensorflow gives me:
tensorflow 2.16.1
tensorflow-io-gcs-filesystem 0.36.0
My versions are:
tensorflow 2.15.0
tensorflow-estimator 2.15.0
tensorflow-io-gcs-filesystem 0.34.0
tensorflow-macos 2.15.0
keras 2.15.0
I'd like to merge this PR, but there are still too many CSV files for my taste. I've left comments on those I think should be removed. Let me know if you disagree.
I haven't reproduced the NN tuning/training processes yet, but I've produced all the results I need to with the resulting NN, and will make any tuning/training changes separately.
Thanks for the refactor, this was easy to follow and use.
Just removed those .csv files.
Testing out different NN to see if results improve. Will push soon.