Closed joewheaton closed 4 years ago
So, every DoD thresholding method (minLOD, propagated error and probabilistic) is based on a spatially continuous (can be spatially constant or varying) estimate of a "propagated error". In the case of minLOD, the user just specifies this value; in the case of propagated error and/or probabilistic thresholding, it comes from calculating the propagated error between the error surfaces used with each DEM in the DoD calculation. The % error calculations are volumes, simply calculated for all the thresholded cells by multiplying this +/- error thickness (e.g. +/- 10 cm) by the cell area. That results in a volume that notionally needs to be exceeded by the calculated volume of change in that cell before we can say the signal is greater than the noise. This is an admittedly conservative way to pose the problem; however, it allows a user to contextualize volumetric estimates. Just because calculated changes are less than this volume (or threshold thickness) doesn't mean they didn't happen. We just argue we cannot differentiate them from noise.
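The propagated error between two error surfaces is conventionally the quadrature sum of the per-cell DEM errors (standard error propagation for differencing two independent surfaces). A minimal sketch of that per-cell calculation (illustrative names, not GCD's actual code):

```python
import math

def propagated_error(sigma_dem1, sigma_dem2):
    """Per-cell propagated error for a DoD: the two DEM error
    estimates (in the same units, e.g. metres) combined in
    quadrature, assuming independent errors."""
    return math.sqrt(sigma_dem1 ** 2 + sigma_dem2 ** 2)

# e.g. two DEMs each with +/- 10 cm error -> ~14 cm propagated error
sigma_dod = propagated_error(0.10, 0.10)
```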
So % error is nothing more than the error volume divided by the change volume. This is done independently for surface raising volumes, surface lowering volumes, net difference volumes and total difference volumes.
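The bookkeeping above can be sketched for the raising and lowering cases. This is a toy illustration (flat lists standing in for rasters, an assumed cell area), not the GCD implementation:

```python
def percent_error(changes, errors, cell_area=1.0):
    """changes: signed elevation change (m) for cells that survived
    thresholding; errors: +/- propagated error thickness (m) per cell;
    cell_area: assumed cell footprint in m^2.
    Returns (% error on raising, % error on lowering)."""
    raise_vol = sum(dz * cell_area for dz in changes if dz > 0)
    lower_vol = sum(-dz * cell_area for dz in changes if dz < 0)
    raise_err = sum(e * cell_area for dz, e in zip(changes, errors) if dz > 0)
    lower_err = sum(e * cell_area for dz, e in zip(changes, errors) if dz < 0)
    pct_raise = 100.0 * raise_err / raise_vol if raise_vol else 0.0
    pct_lower = 100.0 * lower_err / lower_vol if lower_vol else 0.0
    return pct_raise, pct_lower
```

So a cell with 0.5 m of raising and a +/- 0.1 m error contributes 20% error on its own: the error volume is 0.1 × area against a change volume of 0.5 × area.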
Be aware that if you use probabilistic thresholding, a 68% confidence interval gives essentially the same result as using propagated errors directly. 90% or 95% confidence intervals are rather conservative (i.e. they throw away a lot of information, and are not necessarily better). We suggest an 80% CI as a useful rule-of-thumb default, and there is some justification in the literature for this as a balance between not discarding useful information and not being too liberal.
If you use a confidence interval above 68%, your % error surface lowering volumes and % error surface raising volumes will always be less than the respective erosion or deposition volumes (i.e. the % error volume is less than 100% of that volume). If you use thresholds below 68%, this does not have to hold, and you can get % error volumes > 100%. That doesn't mean the results are bad, just that you are accepting far more uncertainty when reporting values below a more liberal threshold.
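The 68% cutoff makes sense if you think of probabilistic thresholding as a per-cell z-test: a cell survives when its |change| exceeds the z-score for the chosen CI times its propagated error. A sketch under that assumption (names and the hard-coded z lookup are illustrative, not GCD's API):

```python
# Approximate two-tailed z-scores for common confidence intervals
Z_SCORES = {0.68: 1.0, 0.80: 1.28, 0.90: 1.64, 0.95: 1.96}

def survives(dz, sigma, ci=0.80):
    """Keep a cell if |elevation change| >= z * propagated error."""
    return abs(dz) >= Z_SCORES[ci] * sigma
```

At 68% (z = 1) every surviving cell has |dz| >= sigma, so its error volume (sigma × area) can never exceed its change volume (|dz| × area); below 68%, cells with |dz| < sigma survive, which is exactly how % error volumes climb past 100%.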
Another thing that throws people off is that % error volumes on net change and total change (i.e. the sum of the lowering and raising error volumes divided by the net or total volume quantity) can frequently be > 100% (even when using thresholds > 68%). This IS NOT a problem. In most fluvial settings, net changes are far more likely to be close to zero, or at least not that imbalanced. As such, when a net % volume error is > 100%, it just means the net signal is not clearly differentiated from noise, even if the individual erosion and deposition volumes and maps are clear. If you think about it, a big depositional wave or incision event will tend to have a much clearer net signal.
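A quick toy calculation (made-up numbers, not from any real survey) shows why near-balanced budgets blow up the net % error while the total stays reasonable:

```python
# Illustrative volumes only: near-balanced erosion and deposition
erosion_vol, deposition_vol = 100.0, 90.0   # m^3 of change
erosion_err, deposition_err = 15.0, 14.0    # m^3 of error volume

net_vol = abs(deposition_vol - erosion_vol)  # 10 m^3 net change
total_vol = deposition_vol + erosion_vol     # 190 m^3 total change
net_err = erosion_err + deposition_err       # error volumes sum: 29 m^3

pct_net = 100.0 * net_err / net_vol      # 290% -> net signal lost in noise
pct_total = 100.0 * net_err / total_vol  # ~15% -> total signal is clear
```

Both percentages come from the same 29 m^3 of error volume; only the denominator changes, which is why a tiny net change sends the net % error past 100% even though erosion and deposition are each well resolved.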
Clear as mud?
Closing inactive issue.
From a user: