sor16 / bdrc

Fits a discharge rating curve based on the power-law and the generalized power-law from data on paired water elevation and discharge measurements in a given river, using a Bayesian hierarchical model as described in Hrafnkelsson et al. (2022).
https://sor16.github.io/bdrc/

Consider adding measurement uncertainty #3

Open thodson-usgs opened 1 year ago

thodson-usgs commented 1 year ago

Very nice algorithm, and though we'd want to test it more, I consider it among the best candidates for operational use. To that end, one of the main requirements communicated to me was that the algorithm should incorporate information about measurement uncertainty when it is available. That's probably an easy thing to add, but I have no time to contribute at the moment.

I'll also point out that discharge uncertainty is typically reported as ± X%. IMO, it would be fine to convert those to geometric errors (*/), since your model fits in log space.
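A minimal sketch of that conversion (my own illustration, assuming the reported ± X% corresponds to two standard deviations):

```r
# Illustrative sketch (not part of bdrc): map a reported +/- X% discharge
# uncertainty to a standard deviation on the natural-log scale, where the
# rating curve is fit. Assumes the reported percentage is a two-sigma bound.
pct_to_log_sd <- function(pct, n_sigma = 2) {
  log1p(pct / 100) / n_sigma
}

pct_to_log_sd(5)  # ~0.024 on the log scale, i.e. a multiplicative error factor of ~exp(0.024)
```

For small percentages, log1p(pct / 100) is essentially pct / 100, so the exact form of the conversion matters very little.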

sor16 commented 1 year ago

Thank you for your kind words and for adding this issue. Incorporating measurement uncertainty is an interesting extension we could explore. Does the discharge measuring device report these uncertainties directly (as ± %)? Do you have any example data where measurement uncertainty is recorded? It would be useful to get a sense of its typical magnitude.

We'd be happy to assist with your testing of the method. Let us know if any other issues or questions arise. Cheers

thodson-usgs commented 12 months ago

https://github.com/thodson-usgs/ratingcurve/blob/main/ratingcurve/data/green_channel.csv

In that case, I believe q and q_sigma (one sigma) are in cfs. It's easy to convert back and forth between % and absolute units, and I haven't observed either one to be standard.
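As a rough sketch (using a local copy of the linked file; q and q_sigma are the columns mentioned above), converting the absolute one-sigma values to relative and log-scale uncertainties could look like:

```r
# Sketch: convert an absolute one-sigma discharge uncertainty (cfs) into a
# relative uncertainty and an approximate one-sigma on the natural-log scale.
# Assumes a local copy of green_channel.csv with columns q and q_sigma.
d <- read.csv("green_channel.csv")
rel_sigma <- d$q_sigma / d$q       # one sigma as a fraction of the measured discharge
log_sigma <- log1p(rel_sigma)      # approximate one-sigma on the log scale
summary(100 * rel_sigma)           # the same uncertainty expressed as +/- %
```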

I think 5-7% (two sigma) is typically cited for "good" measurements. However, I suspect the more important use case is allowing users to ascribe different uncertainties to particular measurements. For example, measurements at very high flows are particularly problematic: they are difficult to make, and the technician may not be able to collect a complete transect. Moreover, the most extreme events are infrequent, so the rating may rely on very old observations. Either way, the uncertainties can be quite large, yet these are some of the most critical data, and we need ways to prescribe appropriate uncertainties for them when developing a rating.

thodson-usgs commented 12 months ago

Another application would be to handle shifts in the rating over time. For example, we might inflate the uncertainty of older measurements so that they are given less weight when we develop a new rating.
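One simple way to express that (purely illustrative; the decay scale is arbitrary) would be to inflate each measurement's log-scale standard deviation with its age:

```r
# Illustrative sketch: down-weight old measurements by inflating their
# log-scale standard deviation with age. The 20-year e-folding scale is an
# arbitrary choice for illustration, not a recommendation.
inflate_with_age <- function(log_sigma, age_years, tau = 20) {
  log_sigma * exp(age_years / tau)
}

inflate_with_age(0.025, age_years = c(1, 10, 40))  # recent values barely change; 40-year-old ones get ~7x the sd
```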

thodson-usgs commented 12 months ago

Perhaps this is a better example of my first point, where the upper part of the rating is constrained by relatively few, highly uncertain data points: https://github.com/thodson-usgs/ratingcurve/blob/main/ratingcurve/data/provo_natural.csv

sor16 commented 12 months ago

Okay, that makes sense; thank you for clarifying and for providing some example data. We've been discussing this internally and throwing around some ideas on how best to include it in our modeling framework. It would be a great addition to the package. We'll let you know when we make progress on implementing it.
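One idea we have been throwing around (just a sketch of the general approach, not a committed design) is to add a known, observation-specific measurement variance to the model's log-scale error variance, so that each measurement's total standard deviation becomes:

```r
# Sketch: if sigma_eps is the model's log-scale error standard deviation and
# sigma_m[i] is the reported log-scale measurement standard deviation for
# observation i, the combined observation-level standard deviation could be
combined_sd <- function(sigma_eps, sigma_m) {
  sqrt(sigma_eps^2 + sigma_m^2)
}

combined_sd(0.05, c(0.02, 0.10))  # measurement error dominates only where it exceeds sigma_eps
```

Cheers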

RafaelVias commented 3 months ago

I will be working on this issue over the next few weeks.