Closed lrennels closed 1 month ago
Hi, thank you for the question! For outputs that are in the save_list, I recommend looking into the model and checking the units in the documentation. In the save_list, the output is determined by a Tuple of (:Component, :Parameter/Variable). Some units may be harder to find than others due to the nested model structure; for example, the two you list are actually best documented in MimiFAIRv1_6_2 here, which is pulled in directly. Just to be explicit about those two:
(:temperature, :T) is here and is in units of "# Global surface temperature anomaly (K)" (from comments) -- this is from preindustrial -- note some components will normalize temperature to a specific normalization period.

(:co2_cycle, :co2) is here and is in units of "# Total atmospheric carbon dioxide concentrations (ppm)" (from comments).

In terms of the disaggregated values, I believe that the marginal damages are in 2005 USD. Which year were you pulling 1.16E-03 from? I'm happy to rerun your code locally and take a look though!
Thank you for the response and your answers!
Once I had the output, I took the summary statistics of all draws across all times. (In python, I used the describe() function).
Here is a link to the output from the model run Files
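The summary-statistics step described above can be sketched roughly as follows (the DataFrame here is a toy stand-in with hypothetical column names and values, not the actual MimiGIVE output file):

```python
import pandas as pd

# Toy stand-in for the model output; in practice this would be loaded
# from the MimiGIVE marginal-damages CSV (column names are assumptions).
df = pd.DataFrame({
    "trialnum": [1, 1, 2, 2],
    "time": [2020, 2030, 2020, 2030],
    "agriculture": [1.0e-3, 1.5e-3, 0.8e-3, 2.0e-3],
})

# Summary statistics across all draws and times, via pandas' describe().
stats = df["agriculture"].describe()
print(stats["50%"])  # the median across all rows
```

Slicing by `time` or region before calling `describe()` would give the dimension-by-dimension view suggested below.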
@museaoide that is across all draws, all times, and all regions? I'm not sure a single statistic is going to be enough to gut-check these results. There is considerable uncertainty here, and also expected trends of increases over time and higher magnitude in certain regions compared to others.
I would dig a little deeper into the dimensions over time and space; using a smaller number of trials, or a subset of your trials, will make such exploration more manageable. You can take a look at that, or even try to roughly recalculate the partial SCCs (exported with compute_sectoral_values = true) using the undiscounted marginal damages and the socioeconomic variables needed to recompute the Ramsey factor.
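That rough recalculation could be sketched like this (the ρ and η values, array names, and numbers are illustrative assumptions, not the calibrated GIVE parameters):

```python
import numpy as np

# Illustrative Ramsey parameters -- NOT the calibrated GIVE values.
rho, eta = 0.008, 1.45

# Hypothetical per-capita consumption path and undiscounted marginal
# damages (USD per ton) for a handful of years.
consumption_pc = np.array([50_000.0, 51_000, 52_100, 53_200, 54_400])
md = np.array([0.5, 0.6, 0.7, 0.8, 0.9])

# Per-period consumption growth rates.
g = np.diff(consumption_pc) / consumption_pc[:-1]

# Ramsey discount rates r_t = rho + eta * g_t, and cumulative
# discount factors back to the pulse year.
r = rho + eta * g
disc = np.concatenate(([1.0], np.cumprod(1.0 / (1.0 + r))))

# Rough partial SCC: discounted sum of marginal damages.
partial_scc = np.sum(disc * md)
```

This is only a gut-check sketch of the Ramsey-discounting logic; the model's own calculation handles the full trial-by-trial uncertainty.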
In my own checks, running the code with my own files (not your files), I don't see anything off with the units being 2005 USD, but let me know if they still seem off after further investigation, of course.
This is across all draws and all times for the United States. Not only did the median seem quite small, but the maximum did to us as well (2.1E3). This was for a 1 Gt pulse (in the context of roughly 2,400 Gt emitted since 1850). But it may just be that our sense of magnitudes is not where it should be.
Thank you for your independent checks, and the suggestion to recalculate partial SCCs!
I know that recalculation can be difficult; I was just musing about ways we could double-check things beyond the single number. Ah, I can add this to the README, but marginal damages are always renormalized to be in units of dollars per ton here, so your units are 2005 USD per ton, not per Gt.
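As a quick numeric illustration of the per-ton units (using the 1.16E-03 median quoted earlier; this is only a unit conversion, not an endorsement of scaling the result to a 1 Gt change):

```python
# Marginal damages are reported in 2005 USD per ton of CO2.
md_per_ton = 1.16e-3   # median agricultural marginal damage, USD/ton

# 1 Gt = 1e9 tons, so scaling to a full 1 Gt pulse multiplies by 1e9.
md_per_gt = md_per_ton * 1e9
print(md_per_gt)       # 1.16e6, i.e. about 1.16 million USD per Gt
```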
Got it! So to get the marginal damages for a 1 Gt pulse, I would have to multiply all marginal damages by 1e9.
Thank you, lrennels!
Also, I would just like to check that $ damages in the baseline run have not been normalized.
> Got it! So to get the marginal damages for a 1 Gt pulse, I would have to multiply all marginal damages by 1e9.
Numerically yes, but these models are meant to represent the impact due to marginal changes in emissions, and the results generally cannot be interpreted to hold for any magnitude of changes. There is some literature on this, but I would caution how far you extrapolate (at an extreme, for example, it is not appropriate to use SCC x all historic emissions to calculate total damages in history). It's important to be careful there.
In GIVE the pulse size is what is used to pulse the baseline emissions, which then feed in to derive the temperature impulse response from the FaIR climate model, and onward. For the SCC the general wisdom is to try to use a fairly small pulse while retaining numeric stability within the climate model, so as to stay as true to the "dollars per ton" metric that is the final result. I believe Rennert et al. (2022) uses 1e-4 for some step-function stability reasons in the Monte Carlo, though 1Gt is likely fine and I see remains the default in the model code.
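The renormalization that makes the reported output "dollars per ton" regardless of pulse size can be sketched as follows (purely illustrative numbers; the real calculation lives inside the model code):

```python
# Hypothetical global damages (2005 USD) from a baseline run and a
# pulsed run -- illustrative values only.
base_damages = 4.20e12
pulsed_damages = 4.20e12 + 3.0e9   # extra damages caused by the pulse

pulse_size_gt = 1.0                 # the default pulse size, 1 Gt
pulse_size_tons = pulse_size_gt * 1e9

# Marginal damages normalized to dollars per ton of the pulsed gas,
# so the reported units are independent of the chosen pulse size.
md_per_ton = (pulsed_damages - base_damages) / pulse_size_tons
print(md_per_ton)  # 3.0 USD per ton
```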
> Also, I would just like to check that $ damages in the baseline run have not been normalized.
Correct.
This was super helpful. Thanks, again!
I'm glad! I'll close this issue, but feel free to ask more questions and tag me @lrennels so it comes to my attention.
Reposting from the Mimi forum since the question is MimiGIVE-specific.

What are the exact data definitions and units for the MimiGIVE default and disaggregated outputs (e.g., damages, marginal damages, CO2, temperature)?
I am asking because when I ran 10,000 draws of the model with a 1Gt emissions pulse, the marginal damages for the US seem implausibly small under the data definition given in the output README.
In particular, the median of the agricultural marginal damages is 1.16E-03; the README seems to suggest that the units are 2005 USD.
The code I used to get my draws was the following: