Hi!

I have been thinking about how to propagate parameter uncertainty into the output maps. After careful consideration, I suggest the following workflow (already in progress, see 9c5553daf5b516bf6b674e1ed2d39260c6dbb9d7) for this enhancement (checked boxes indicate steps already implemented in the thermal_suitability_bounds() function):
[x] 1. Simulate 1,000 thermal performance curves (TPCs), assuming a normal distribution with mean = parameter estimate and sd = parameter standard error, and then taking the combinatorial grid of all parameters of the curve [thermal_suitability_bounds() function]. A minimal sketch of this step is given after the list.
[x] 2. Draw the 1,000 TPCs as uncertainty bands in the plots.
[x] 3. Obtain a pair of thermal boundaries for each simulated TPC rather than only for the central curve, as before.
[ ] 4. Add an argument uncertainty_map = TRUE/FALSE to the map_risk() function. If TRUE, 1,000 rasters would be generated with a for loop (one per simulated TPC), thus inheriting the parameter uncertainty and its assumed normal distribution. The function could then automatically calculate the mean across the 1,000 rasters to obtain a mean-estimate raster (i.e., the equivalent of the original function's output) and the standard deviation across them to obtain an uncertainty raster (see the second sketch after this list).
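
For step 1, here is a minimal sketch of the idea, not the actual thermal_suitability_bounds() code: the parameter names, example values, and the Briere-1 formula below are hypothetical and only illustrate the normal draws and the combinatorial grid.

```r
## Hypothetical three-parameter TPC: fitted estimates and standard errors
set.seed(123)
estimates  <- c(tmin = 8.2, tmax = 35.4, a = 1.8e-4)
std_errors <- c(tmin = 0.9, tmax = 1.1,  a = 2.5e-5)

# 10 normal draws per parameter; expand.grid of 3 parameters -> 10^3 = 1,000 combinations
param_draws <- mapply(function(m, s) rnorm(10, mean = m, sd = s),
                      estimates, std_errors, SIMPLIFY = FALSE)
sim_params  <- expand.grid(param_draws)

# Example TPC (Briere-1) evaluated for every simulated parameter set
briere1_tpc <- function(temp, tmin, tmax, a) {
  out <- a * temp * (temp - tmin) * sqrt(pmax(tmax - temp, 0))
  ifelse(temp < tmin | temp > tmax, 0, out)
}

temps <- seq(0, 45, by = 0.1)
sim_curves <- apply(sim_params, 1, function(p) {
  briere1_tpc(temps, p["tmin"], p["tmax"], p["a"])
})
# sim_curves is a length(temps) x 1,000 matrix: the basis for the uncertainty
# bands (step 2) and for one pair of thermal boundaries per curve (step 3)
```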
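And for step 4, a sketch of the mean/sd aggregation, assuming the per-TPC suitability rasters have already been computed with terra; `risk_rasters` is a hypothetical stand-in (here filled with random values) for the stack of 1,000 rasters that the for loop in map_risk() would produce.

```r
library(terra)

## Placeholder stack of 1,000 single-layer suitability rasters
set.seed(123)
risk_rasters <- rast(lapply(1:1000, function(i) {
  r <- rast(nrows = 10, ncols = 10)
  values(r) <- runif(ncell(r))  # random values standing in for suitability
  r
}))

# Mean across simulations: the "central" map, equivalent to the original output
mean_map <- app(risk_rasters, mean)

# Standard deviation across simulations: the uncertainty map
sd_map <- app(risk_rasters, sd)

# One possible return value when uncertainty_map = TRUE: both layers together
uncertainty_output <- c(mean_map, sd_map)
names(uncertainty_output) <- c("mean_suitability", "sd_suitability")
```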
Some questions and possible tasks to discuss in mid-October:
- Whether to use a log-normal or a normal distribution for the uncertainty simulations (see the sketch after this list).
- Pros and cons of the map suggested in step 4 of this issue for the map_risk() function.
- Whether plot_devmodels() should incorporate these uncertainty bands.
- The simplest approach (argument vs. default behaviour vs. extra function) for these uncertainty maps.
- Update the documentation.
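
On the normal vs. log-normal question, one consideration is that a log-normal keeps strictly positive parameters (e.g. rate constants) from going negative in the draws. The sketch below uses hypothetical values and moment-matching so the log-normal has the same mean and sd as the normal.

```r
estimate <- 1.8e-4   # hypothetical fitted parameter estimate (must be > 0 for a log-normal)
std_err  <- 2.5e-5   # its standard error

# Moment-matching: choose meanlog/sdlog so the log-normal has mean = estimate, sd = std_err
sdlog   <- sqrt(log(1 + (std_err / estimate)^2))
meanlog <- log(estimate) - sdlog^2 / 2

normal_draws    <- rnorm(1000, mean = estimate, sd = std_err)
lognormal_draws <- rlnorm(1000, meanlog = meanlog, sdlog = sdlog)

c(prop_negative_normal    = mean(normal_draws < 0),    # can exceed 0 when sd is large relative to the mean
  prop_negative_lognormal = mean(lognormal_draws < 0)) # always 0
```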
PS: I have been working on the main branch, as there was a lot of work to do there. I will continue working on a separate branch.
PS2: This uncertainty implementation is my personal suggestion for how to address this kind of question. I am completely open to discussing alternatives and the limitations of the approach. The first three steps are already implemented, but they can be reverted if you don't think it is a good idea to add them to the previous versions of the function.
Thank you!