Open samlbickley opened 6 years ago
See this issue + answer for a way to fix K to a single number. Basically, you should pick a distribution for K600_daily_meanlog that is very narrow (a tiny K600_daily_meanlog_sdlog) and centered (K600_daily_meanlog_meanlog, which is on the log scale, so log(66)) on your measured 66 d^-1. Definitions of K600_daily_meanlog_meanlog and K600_daily_meanlog_sdlog are available at ?specs
(online at https://rdrr.io/github/USGS-R/streamMetabolizer/man/specs.html)
A useful discussion of the relationships among various forms of k, k2, K, K600, etc. can be found in Raymond et al. 2012 (https://doi.org/10.1215/21573689-1597669).
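Putting those pieces together, a minimal sketch of a near-fixed-K specification (K600_daily_meanlog_meanlog and K600_daily_meanlog_sdlog are real specs() arguments for pool_K600='normal' models; the exact sdlog value here is an illustrative choice, not a recommendation):

```r
library(streamMetabolizer)

# Bayesian model with lognormal ('normal') pooling of daily K600
bayes_name <- mm_name(type = 'bayes', pool_K600 = 'normal',
                      err_obs_iid = TRUE, err_proc_iid = TRUE)

# Center the hyperprior on log(66) and make it very narrow,
# which effectively pins daily K600 near 66 d^-1
bayes_specs <- specs(bayes_name,
                     K600_daily_meanlog_meanlog = log(66),
                     K600_daily_meanlog_sdlog = 0.001)
```

You would then pass bayes_specs to metab() along with your prepared timeseries data.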
Thank you, Alison! That cleared things up for me.
What if I don't want to set K to a single number, but still want to use previously measured values to estimate K? Is that possible, or is that exactly what I've done?
Hi Sam, it's a bit trickier to set K to multiple numbers, but it can be done. If you have distinct Q on each day or are willing to spoof your Q inputs, you can set up a very strong prior with a multi-node K~Q relationship relating your measured K values to the daily mean Q values on those days. This would mean using a Kb model, i.e., pool_K600='binned'. Spoofing Q should be fine because Q, unlike depth, is only used as a predictor of K, if it's used at all.
Thank you, Alison. You've been very helpful.
I think I closed this issue prematurely.
I'm not familiar with what a multi-node relationship is. There doesn't appear to be any kind of relationship between my K and Q values, however. Without a strong relationship between the two, is it possible to develop this multi-node relationship? Is there any example code for setting up the K ~ Q relationship?
By multi-node relationship I meant a piecewise linear relationship between K and a variable named Q or discharge, which is what's provided by a b_Kb*_... model (pool_K600='binned*' in mm_name()). See https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JG004140 figure 5b for some examples.
Rather than passing in actual discharge values, you can pass in arbitrary values. For example, you can set Q to 1 for day 1, 2 for day 2, etc. in the data_daily that you're passing to metab().
You can specify as many values of K600_lnQ_nodes_centers as you want in specs(). Set one value for every value of Q - in the above example, this would be log(1:n) where n is the number of days you're modeling.
You can set a tight prior on the value of K600 at each lnQ_node_center. Set K600_lnQ_nodes_meanlog to a vector of values as long as K600_lnQ_nodes_centers, where the values are the logs of your measured/estimated values of K600 on each day. Set a very small value for K600_lnQ_nodes_sdlog to encourage the model to stick tight to the values of K600_lnQ_nodes_meanlog you set.
You might need to relax K600_lnQ_nodediffs_sdlog (make it a higher value); this could involve some experimentation to see if it matters, or you could just make it really high from the outset.
Use pool_K600='binned_sdzero' if you want to [pretty nearly] fix the daily K600 values at the values you provide.
See ?specs for more information on each of the above specifications/parameters.
I've never done this before (my focus has always been on estimating K rather than fixing it) and don't have time to work up example code, but if you can get the above instructions to work (I really think they will), please do share your code here so that others can benefit. Thanks, Alison
I've attached my code, lnK and daily_Q files, and model outputs.
First, thank you for your help. I was able to get the model to run, but the estimated DO is quite wonky and I'm getting negative GPP estimates.
I've got low-GPP sites, and the model gives negative GPP for nearly all of my streams. I was told by AJ Reisinger that negative GPP on all dates probably indicates ER estimates are not reliable and that there's just not enough GPP to model metabolism. We have measured K values from previous work, so I fixed K600 to the mean of measured K600 (thank you for your help!) and was able to get positive GPP and ER rates that are within the range previously measured. If I've correctly followed your instructions and am still getting negative GPP and basically unmodellable days, should I just stick with my fixed k models, which are giving me reasonable estimates?
library(streamMetabolizer)
setwd("C:/Users/sam/Google Drive/School/Working/METABOLISM/streamMetabolizer/INPUT/")
data<-read.csv("sb4_spring2018.csv", stringsAsFactors = FALSE)
data$solar.time<- as.POSIXct(data$solar.time, tz='America/New_York')
lubridate::tz(data$solar.time)
data$solar.time <- streamMetabolizer::calc_solar_time(data$solar.time, longitude=-85) #FBMI longitude
calc_light(data$solar.time, 32, -85)
library(unitted)
data$light <- calc_light(u(data$solar.time), u(32, 'degN'), u(-85, 'degE'), u(3004, 'umol m^-2 s^-1'), attach.units=TRUE)
Q_daily<-read.csv("Q_daily.csv", stringsAsFactors = FALSE)
#Rather than passing in actual discharge values, you can pass in arbitrary values.
#For example, you can set Q to 1 for day 1, 2 for day 2, etc. in your data_daily that you're passing to metab().
Q_daily$date <- as.Date(Q_daily$date,format="%Y-%m-%d")
lnk_input<-read.csv("sb4_lnK.csv", stringsAsFactors = FALSE)
#You can set a tight prior on the value of K600 at each lnQ_node_center.
#Set K600_lnQ_nodes_meanlog to a vector of values as long as K600_lnQ_nodes_centers
#where the values are the logs of your measured/estimated values of K600 on each day.
lnK<-lnk_input[,'HEADER']
#extract the lnK column of the dataframe as a vector (this is the lnK passed to specs() below)
bayes_name <- mm_name(type='bayes', pool_K600="binned", err_obs_iid=TRUE, err_proc_iid=TRUE)
bayes_specs <- specs(bayes_name, K600_lnQ_nodes_centers = log(1:10), K600_lnQ_nodes_meanlog = lnK, K600_lnQ_nodes_sdlog = 0.00001, K600_lnQ_nodediffs_sdlog = 1000)
mm <- metab(bayes_specs, data=data, data_daily=Q_daily)
Thanks for sharing this code, Sam.
I'm only seeing 9 values in sb4_lnK.csv, 8 of which get read by read.csv as non-header rows, but K600_lnQ_nodes_centers has length 10...is it possible that file is out of date relative to your code, or vice versa?
The K600.daily predictions don't look like a great match to exp(lnK), though maybe it's because I'm not looking at the right lnK values...is that what you found, too? That said, the match seems close enough that I'm surprised that you're getting negative GPP with this model but positive GPP (significantly above 0?) with the fixed-K model. Hmm.
If by "my fixed K models" you mean 2-parameter MLE models, I think it's plausible that those would work better for you. A major advantage of the Bayesian models is usually their ability to help you estimate K, so if you already have those numbers, the advantages relative to MLE models drop off substantially. (I do still expect some advantage because the Bayesian models can be state-space, including both process error and observation error, but K may well be the bigger issue for your data.) I would just caution against basing that decision entirely on whether your GPP values are coming out as you hope or not...that doesn't come across as especially sound science, and as AJ said, you need some signal (GPP relative to K) to pull truth out of a timeseries. If you have very high confidence in your outside estimates of K600, that's a better reason to use a fixed-K model than that the GPP is coming out positive.
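For comparison, a 2-parameter MLE run with externally supplied K might be sketched like this (assumptions: data is the prepared timeseries from earlier in the thread, the 3.477262235 value mirrors the fixed-K Bayesian run below, and supplying K600.daily through data_daily is the mechanism described in ?metab_mle; verify against your installed version):

```r
library(streamMetabolizer)

# 2-parameter MLE model: estimate GPP and ER, supply K600 yourself
mle_name <- mm_name(type = 'mle')
mle_specs <- specs(mle_name)

# a data_daily with a K600.daily column fixes K600 on each date
K600_daily <- data.frame(date = unique(as.Date(data$solar.time)),
                         K600.daily = 3.477262235)

mm_mle <- metab(mle_specs, data = data, data_daily = K600_daily)
```

With K fixed this way, the MLE model only has to fit GPP and ER, which is the sense in which the Bayesian models' main advantage drops off.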
Also, are you sure K is as high as you say, and in the right units? Those DO predictions are abysmal, and I'm surprised the model couldn't at least find a better fit to the data even with high uncertainty in estimated GPP and ER values...
Also, you may need to run these Bayesian models for a longer warmup period; it looks from your traceplots like they don't all converge until later in the saved-steps period.
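Lengthening the warmup is just a matter of raising burnin_steps and saved_steps in specs() (a sketch, assuming bayes_name from the earlier setup; the values here are illustrative doublings of the kind Sam describes below, not recommendations):

```r
# double (or more) the MCMC warmup and sampling relative to the defaults
bayes_specs <- specs(bayes_name,
                     burnin_steps = 2000,
                     saved_steps  = 2000)
```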
1) There are 9 measured K values and I'm attempting to model 10 days of metabolism, so I assume this is the reason for the discrepancy. I have adjusted the number of modeled days to reflect the number of measured K values.
2) Could you explain the importance of exp(lnK)? I don't understand your comment.
3) My GPP estimates are not significantly above zero. The average GPP when I fixed K600 to the mean of measured K600 was 0.120825. I agree with your statement about sound science and not basing my decision on positive GPP alone, especially with GPP values so low.
4) My "fixed k model" is a Bayesian model. Should I run the "fixed" model as an MLE model? The "fixed k" model I ran is the following code:
library(streamMetabolizer)
setwd("C:/Users/slb0035/Google Drive/School/Working/METABOLISM/streamMetabolizer/INPUT/")
data<-read.csv("sb4_spring2018.csv", stringsAsFactors = FALSE)
data$solar.time<- as.POSIXct(data$solar.time, tz='America/New_York')
lubridate::tz(data$solar.time)
data$solar.time <- streamMetabolizer::calc_solar_time(data$solar.time, longitude=-85) #FBMI longitude
calc_light(data$solar.time, 32, -85)
library(unitted)
data$light <- calc_light(u(data$solar.time), u(32, 'degN'), u(-85, 'degE'), u(3004, 'umol m^-2 s^-1'), attach.units=TRUE)
bayes_name <- mm_name(type='bayes', pool_K600="normal", err_obs_iid=TRUE, err_proc_iid=TRUE)
bayes_specs <- specs(bayes_name, K600_daily_meanlog_meanlog=log(3.477262235), K600_daily_meanlog_sdlog=0.00001)
#the above sets k600 to fixed value and sdlog keeps variation small
mm <- metab(bayes_specs, data=data)
5) I'm confident in my measured reaeration rates, but I haven't had anyone check my calculations yet. Here are my K to K600 calculations, based on Raymond et al. 2012 and the formulas referenced therein.
6) I doubled burn-in and saved-steps and got better MCMC convergence, but the DO predictions are just as terrible.
How would you recommend going forward with such low-productivity streams? With GPP so close to zero (both positive and negative), can I say anything about metabolism in these systems with any confidence?
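For anyone following along, the O2-to-K600 conversion discussed in this thread is typically done through Schmidt numbers. A base-R sketch, assuming a Schmidt-number exponent of -0.5 and the Wanninkhof (1992) freshwater Sc polynomial for O2 (both are common choices consistent with Raymond et al. 2012, but double-check them against your own source):

```r
# Schmidt number of O2 in fresh water at temperature temp.C (Wanninkhof 1992)
Sc_O2 <- function(temp.C) {
  1800.6 - 120.10 * temp.C + 3.7818 * temp.C^2 - 0.047608 * temp.C^3
}

# Convert a measured O2 reaeration rate (d^-1) at temp.C to K600 (d^-1),
# using K600 = K_O2 * (600 / Sc_O2)^(-0.5)
K_O2_to_K600 <- function(K_O2, temp.C) {
  K_O2 * (600 / Sc_O2(temp.C))^(-0.5)
}

K_O2_to_K600(66, temp.C = 20)  # slightly below 66, since Sc_O2(20 C) < 600
```

Note that K600 here is still a rate per day (1/d); multiplying by mean depth would give the gas transfer velocity form instead.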
I have previously measured reaeration rates that I would like to use as priors, but my problem is I don't know how to set that in the code. I have a k rate of 66/day. In issue #367, Bob Hall said to use the following specs:
K600_daily_meanlog_meanlog = 0, K600_daily_meanlog_sdlog = 1
I don't have a background in Bayesian models, so I don't quite understand what these specs mean. In issue #367, I believe the person had a very low k600, so I interpret the specs given by Bob as limiting the modeled k600 to between 0 and 1. Is that what's going on?
Furthermore, what is the best way to convert k to k600? I assume that my measured reaeration rate isn't the standardized gas exchange rate and must be converted somehow.
tldr: I have measured reaeration rates that I would like to use as priors for K600 but don't know how to specify that properly in the code.
Thanks!