JGCRI / trackingC

Where does fossil fuel C end up, and how does that change with changing parameters?

Clean up code, visualize data #63

Closed by bpbond 2 years ago

bpbond commented 2 years ago

Hello @leeyap this PR makes some restructuring and clarity/robustness changes to the code; see #62 . More importantly, though, it adds a table summarizing the run failures, and a figure showing the CMIP CO2 data, the Hector runs, and the Mauna Loa observations:

| run_fails | N  | Percent | Cutoff |
|-----------|----|---------|--------|
| FALSE     | 42 | 60      | 0.33   |
| TRUE      | 28 | 40      | 0.33   |

[Figure: Hector runs and CMIP6 CO2 compared with the Mauna Loa observations]

Please tell me if I'm making a mistake here, but from what I can see: we're experiencing lots of 'failures' because we're comparing against CMIP6 historical CO2, but those models are too high relative to observations.
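As a side note, here is a minimal sketch of how a failure summary like the table above could be tallied with dplyr. The data frame `runs` (one row per run, with a logical `run_fails` column) and the `cutoff` value are assumptions for illustration, not the code in this PR:

```r
library(dplyr)

cutoff <- 0.33  # assumed failure cutoff used for these runs

# Count failing vs. non-failing runs and express each as a percentage
fail_summary <- runs %>%
  count(run_fails, name = "N") %>%
  mutate(Percent = round(100 * N / sum(N)),
         Cutoff = cutoff)
```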

bpbond commented 2 years ago

Suggested next steps:

  1. Observational CO2 is what we'd like to benchmark against, and the figure above makes it clear that CMIP6 is not a good metric by itself; those models are biased high.
  2. But the observations are single points with no error. What counts as 'too far' away from them?
  3. I'd suggest using the CMIP6 standard deviation for this: a Hector atmospheric CO2 value in a given year is declared a 'fail' if it falls outside obs ± cmip6_sd. That's roughly ±15 ppm, although it changes over time (a sketch of this check is after the list).
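A rough sketch of that criterion, assuming data frames `hector_co2` (Hector CO2 by `year`, column `co2`), `obs_co2` (observations, column `obs`), and `cmip6_co2` (one row per model and year, column `co2`); names are placeholders, not the actual objects in the repo:

```r
library(dplyr)

# Per-year spread across the CMIP6 models
cmip6_sd <- cmip6_co2 %>%
  group_by(year) %>%
  summarise(co2_sd = sd(co2), .groups = "drop")

# Flag a Hector value as a failure if it falls outside obs +/- CMIP6 SD
fails <- hector_co2 %>%
  left_join(obs_co2, by = "year") %>%   # adds the `obs` column
  left_join(cmip6_sd, by = "year") %>%
  mutate(run_fails = abs(co2 - obs) > co2_sd)
```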

Does this make sense to you? Are next steps clear?