Hello @leeyap, this PR makes some restructuring and clarity/robustness changes to the code; see #62 . More importantly, though, it adds a table summarizing the run failures, and a figure showing the CMIP CO2 data, the Hector runs, and the Mauna Loa observations:
| run_fails | N  | Percent | Cutoff |
|-----------|----|---------|--------|
| FALSE     | 42 | 60      | 0.33   |
| TRUE      | 28 | 40      | 0.33   |
Please tell me if I'm making a mistake here, but from what I can see: we're experiencing lots of 'failures' because we're comparing against CMIP6 historical CO2, but those models are too high relative to observations.
Observational CO2 is what we'd like to benchmark against, and the figure above makes it clear that CMIP6 is not a good metric by itself; those models are biased high.
But the observations are single points with no uncertainty attached, so what counts as 'too far' away from them?
I'd suggest using the CMIP6 across-model standard deviation for this: a Hector atmospheric CO2 value in a given year is declared a 'fail' if it falls outside obs ± cmip6_sd. That's roughly ±15 ppm, although it varies over time.
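For concreteness, here's a minimal sketch of that rule; the function name, arguments, and data layout are all my assumptions (aligned-by-year lists of CO2 values in ppm), not the actual code:

```python
# Sketch of the proposed pass/fail check: a Hector CO2 value 'fails'
# when it falls outside obs ± the across-model CMIP6 standard deviation
# for that year. All names here are hypothetical.
import statistics

def flag_failures(years, hector_co2, obs_co2, cmip6_runs):
    """Return {year: True/False} where True means the Hector value
    is farther from the observation than the CMIP6 spread allows."""
    fails = {}
    for i, yr in enumerate(years):
        # spread across CMIP6 models for this year
        cmip6_sd = statistics.stdev(run[i] for run in cmip6_runs)
        fails[yr] = abs(hector_co2[i] - obs_co2[i]) > cmip6_sd
    return fails

# Toy example: in 2001 Hector is 9 ppm above obs while the CMIP6
# spread is only 4 ppm, so that year is flagged.
result = flag_failures(
    years=[2000, 2001],
    hector_co2=[370, 380],
    obs_co2=[369, 371],
    cmip6_runs=[[368, 372], [372, 376], [376, 380]],
)
```

The nice property of this criterion is that the tolerance widens automatically in periods where the CMIP6 models disagree more, rather than using a fixed cutoff.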
Does this make sense to you? Are next steps clear?