GLEON / LakeMetabolizer

Collection of Lake Metabolism Functions

Should models agree more? #73

Open rBatt opened 9 years ago

rBatt commented 9 years ago

@lawinslow : we've talked about this a bit.

Using the simulated data at the end of ?metab produces the following:

[figure: boxplots of R, NEP, and GPP estimates by model, two panels: "No noise, no K" and "No noise, with K"]

And here is the associated R code:


# =============
# = Data No K =
# =============
library(LakeMetabolizer)
library(reshape2) # provides melt(), used below

# fake data (noise terms commented out)
    datetime <- seq(as.POSIXct("2014-06-16 00:00:00", tz="GMT"), as.POSIXct("2014-06-17 23:55:00", tz="GMT"), length.out=288*2)
    do.obs <- 2*sin(2*pi*(1/288)*(1:(288*2))+0.75*pi) + 8 #+ rnorm(288*2, 0, 0.5)
    wtr <- 3*sin(2*pi*(1/288)*(1:(288*2))+pi) + 17 #+ rnorm(288*2, 0, 0.15)
    do.sat <- LakeMetabolizer:::o2.at.sat.base(wtr, 960)
    irr <- (1500*sin(2*pi*(1/288)*(1:(288*2))+1.5*pi) +650) *ifelse(is.day(datetime, 42.3), 1, 0) #+ rnorm(288*2, 0, 0.25)) *ifelse(is.day(datetime, 42.3), 1, 0)
    k.gas <- 0
    z.mix <- 1

# put data in a data.frame
    data.noK <- data.frame(datetime=datetime, do.obs=do.obs, do.sat=do.sat, k.gas=k.gas,
     z.mix=z.mix, irr=irr, wtr=wtr)

# run each metabolism model
    m.bk.noK <- metab(data.noK, "bookkeep", lake.lat=42.6)
    m.ols.noK <- metab(data.noK, "ols", lake.lat=42.6)
    m.mle.noK <- metab(data.noK, "mle", lake.lat=42.6)
    m.kal.noK <- metab(data.noK, "kalman", lake.lat=42.6)
    m.bay.noK <- metab(data.noK, "bayesian", lake.lat=42.6)

    noK0 <- rbind(
        cbind(m.bk.noK, model="BK"),
        cbind(m.ols.noK, model="OLS"),
        cbind(m.mle.noK, model="MLE"),
        cbind(m.kal.noK, model="Kal"),
        cbind(m.bay.noK, model="Bay")
        )

    noK <- melt(noK0, id.vars=c("year","doy","model"), measure.vars=c("R","NEP","GPP"), value.name="metabolism")

# ===============
# = Data with K =
# ===============
# fake data: same as above, but with nonzero gas exchange
    k.gas <- 0.4

# put data in a data.frame
    data.withK <- data.frame(datetime=datetime, do.obs=do.obs, do.sat=do.sat, k.gas=k.gas,
     z.mix=z.mix, irr=irr, wtr=wtr)

# run each metabolism model
    m.bk.withK <- metab(data.withK, "bookkeep", lake.lat=42.6)
    m.ols.withK <- metab(data.withK, "ols", lake.lat=42.6)
    m.mle.withK <- metab(data.withK, "mle", lake.lat=42.6)
    m.kal.withK <- metab(data.withK, "kalman", lake.lat=42.6)
    m.bay.withK <- metab(data.withK, "bayesian", lake.lat=42.6)

    withK0 <- rbind(
        cbind(m.bk.withK, model="BK"),
        cbind(m.ols.withK, model="OLS"),
        cbind(m.mle.withK, model="MLE"),
        cbind(m.kal.withK, model="Kal"),
        cbind(m.bay.withK, model="Bay")
        )

    withK <- melt(withK0, id.vars=c("year","doy","model"), measure.vars=c("R","NEP","GPP"), value.name="metabolism")

# ========
# = Plot =
# ========
    bp.at <- c(1:5, 8:12, 15:19)
    bp.cols <- rep(rainbow(5), 3)

    dev.new(width=7, height=3.5)
    par(mfrow=c(1,2), mar=c(2,2,1,1), ps=9, cex=1, mgp=c(1.5, 0.25, 0), tcl=-0.25)

    # No Noise, No K
    boxplot(metabolism~model+variable, data=noK, at=bp.at, lwd=5, pars=list(boxfill=bp.cols, medcol=bp.cols, staplecol=bp.cols), xaxt="n", main="No noise, no K")
    legend("topleft", legend=c("BK","OLS","MLE","Kal","Bay"), pch=22, pt.bg=bp.cols, inset=c(-0.025, -0.05), ncol=1)
    axis(1, at=c(3, 10, 17), labels=c("R","NEP","GPP"))

    # No Noise, with K
    boxplot(metabolism~model+variable, data=withK, at=bp.at, lwd=5, pars=list(boxfill=bp.cols, medcol=bp.cols, staplecol=bp.cols), xaxt="n", main="No noise, with K")
    legend("topleft", legend=c("BK","OLS","MLE","Kal","Bay"), pch=22, pt.bg=bp.cols, inset=c(-0.025, -0.05), ncol=1)
    axis(1, at=c(3, 10, 17), labels=c("R","NEP","GPP"))
rBatt commented 9 years ago

@lawinslow Here is the relevant part of the email I sent you regarding this issue (raised by N Lottig, so thanks to him!):

**Testing/experimenting**

I started looking into this a bit, because I figured that if I eliminated stochasticity, then the models that differ only in how they handle errors should converge. Specifically, MLE is pretty similar to Kalman and Bayes if you ignore the handling of noise. To make OLS similar to MLE, Kalman, and Bayes, you could take out K, because without K the dynamics of DO are no longer state-dependent.

So I ran some experiments. I took the noise out of the simulated data, then ran 2 scenarios: one where K was 0, and one where K was 0.4.

**Expectations**

A priori, I would expect:

1. With no noise, Kalman == Bayes == MLE. I would expect OLS to be somewhat close to those, too. Then I would expect BK to be a bit different, because its model structure is so unique.

2. With no noise AND no K, I would expect Kalman == Bayes == MLE == OLS. This is because OLS has the same model structure as the dynamic models (Kalman, Bayes, and to some extent MLE, which update at every time step), but without K, the per-time-step updates on which the next prediction is based should matter less.
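
For reference, the model all of these methods share, written schematically (the exact discretization and irradiance scaling differ among the metab.* functions, so treat this as a sketch, not the package's exact equations):

$$DO_{t+1} = DO_t + GPP\,\frac{irr_t}{\sum_s irr_s} + \frac{R}{f} + \frac{k_t}{z_{mix}}\left(DO^{sat}_t - DO_t\right)$$

with $f$ samples per day. When $k_t = 0$ the last term vanishes, the increment $DO_{t+1} - DO_t$ no longer depends on the current state, and every method is effectively fitting the same regression of DO increments on irradiance.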

**Results: no noise, no K**

Focus on NEP as a reference; patterns in the other metabolic parameters are similar.

OLS == Kalman == Bayes. BK is close to these, but not quite the same. But MLE is bafflingly low. This is confusing to me.

**Results: no noise, with K**

Again, focus on NEP.

Kalman == OLS ~~ BK. MLE is still bafflingly low. But now Bayes is higher... why? I would have expected Kalman and Bayes to be the most similar pair of methods in this scenario.

It might be that this has something to do with permitting uncertainty on K. The Bayes model is written so that the variance of K is 10% of the mean K value that day, except when K is 0, in which case the variance around K is set to 1E-9, i.e., essentially zero. K is not a parameter, so its value isn't fitted at all. However, by putting uncertainty on K, you allow the estimates of DO at each time step to be more uncertain due to the influence of uncertain gas exchange. As a result, this should make the process variance higher, at least relative to the observation variance. In the case where K is 0, process variance should be more similar to the observation variance. Remember that the key in fitting the parameters isn't just the size of these variances, but their sizes relative to each other, because that ratio tells the model how much to trust the model prediction vs. the observations when deciding what the "true" DO is.

So, digging into the attributes of the Bayes model output, I see that with K, observation variance = 0.008 and process variance = 0.021; in the no-K scenario, observation variance = 0.012 and process variance = 0.029. There isn't much change, but both variances go up, and observation variance gets slightly closer to process variance. So maybe that's part of why Bayes differs from Kalman so much when K is turned on.
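
A toy calculation of what those ratios imply; this is just the textbook scalar Kalman gain with the carried-over state variance ignored, not anything the package actually computes:

    # fraction of each innovation credited to the observation (toy illustration)
    obs.weight <- function(proc.var, obs.var) proc.var/(proc.var + obs.var)
    obs.weight(0.021, 0.008) # with K: ~0.72
    obs.weight(0.029, 0.012) # no K:   ~0.71

So in both scenarios the filter trusts the observations to a similar degree, consistent with the "not much change" reading above.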

I cannot find any reason why MLE's R estimate is so far off. I don't think I mentioned this yet: it is specifically the R estimate that is off; I'm not just inferring that from the NEP.

| Model  | Resp    | GPP     |
|--------|---------|---------|
| OLS    | -0.0069 | 2.2e-5  |
| MLE    | -0.0147 | 3.7e-5  |
| Kalman | -0.0070 | 2.29e-5 |
| Bayes  | -0.0059 | 2.89e-5 |

So the MLE R parameter is ~ double what it is for the others, whereas the MLE GPP parameter is only marginally higher.

Looking at a quick histogram of do.obs - do.sat for the simulated data shows that most of the time the fake DO data is undersaturated. On average, the water DO is 1.18 mg/L O2 lower than atmospheric equilibrium.
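
(For reference, that check is just the following, using the simulated variables from the code block above; the exact mean depends on what o2.at.sat.base returns:)

    # distribution of the saturation deficit in the fake data
    hist(do.obs - do.sat, main="do.obs - do.sat", xlab="DO deficit (mg/L)")
    mean(do.obs - do.sat) # ~ -1.18 mg/L here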

In the no-K scenario, no net change in DO simply means that NEP should be 0 (all models except MLE basically get this answer). In the scenario where K is not 0, things are a bit more complicated, but over time it should be clear that no net change in DO that stays undersaturated must be maintained by negative NEP: the atmosphere is constantly adding oxygen to undersaturated water, so biology must be removing it. So Bayes (and to a lesser extent BK) gets this wrong. Kalman and OLS get this right, but MLE gets it a little too right.
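
Rough arithmetic for that expectation, assuming DO is near steady state and that k.gas is in m d^-1 and z.mix in m (plugging in the 1.18 mg/L mean deficit from above):

$$0 = NEP + \frac{k}{z_{mix}}\,\overline{\left(DO^{sat} - DO\right)} \;\Rightarrow\; NEP \approx -\frac{0.4 \times 1.18}{1} \approx -0.47\ \mathrm{mg\,O_2\,L^{-1}\,d^{-1}}$$

so the with-K fits should be converging on an NEP of roughly that magnitude, and negative.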

If MLE were only showing hugely negative R in the with-K scenario, I would say that something is wrong in the MLE model such that it overestimates the atmospheric exchange and counterbalances it with really high R. But the fact that the problem exists without K is really stumping me.

**Conclusion**

Something is wrong with MLE, I think. Bayes is also a little peculiar, both because it doesn't agree with Kalman and because it says that NEP is positive even though DO is undersaturated.

I'm tempted to retest these models with a simplified version of the exchange, i.e., one where we don't do the fancy calculus between time steps, because that would make it much easier to confirm that nothing is wrong in that part of the model.
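
Something like this forward-Euler step is what I have in mind; a sketch only, with made-up names (not LakeMetabolizer internals), where gpp and r are daily rates and freq is samples per day:

    # one Euler step of the DO model with a naive per-step exchange term
    do.step <- function(do.now, do.sat, k, z.mix, gpp, r, irr.frac, freq){
        flux.atm <- (k/freq) * (do.sat - do.now) / z.mix # no within-step integration
        do.now + gpp*irr.frac + r/freq + flux.atm
    }

If the models still disagree with this dead-simple exchange, the between-step calculus is off the hook.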

A final possible explanation is that the optimization is incomplete. BK and OLS are closed-form, and they consistently perform well. Kalman somehow produces reasonable answers, even though it uses optim() too. With Bayes, it may be that the chains aren't being run for long enough, and for MLE it could be getting stuck in a bad local minimum.
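
If we can pull the raw chains out of the Bayes fit (I'm not sure what metab.bayesian exposes, so fit.chains below is a hypothetical mcmc.list of the GPP/R chains), standard coda diagnostics would settle the chain-length question:

    library(coda)
    gelman.diag(fit.chains)   # shrink factors well above 1 => not converged
    effectiveSize(fit.chains) # tiny effective sample sizes => run longer
    traceplot(fit.chains)     # visual check for stuck or wandering chains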

So my two guesses are: 1) something is wrong with the models and I don't know what it is; or 2) the variability among models is the result of poor parameter fitting.

jzwart commented 9 years ago

Wow, that's a little concerning. Thanks for bringing this up, Ryan (and Noah).

> If MLE was only showing hugely negative R in the with-K scenario, I would say that something is wrong in the MLE model where it's saying that the atmospheric exchange is too high, and this is being counterbalanced by really high R. But the fact that the problem exists without K is really stumping me.

Yeah, that is really weird. Did you ever explore different initial guesses or different optimization algorithms? This is a stumper; I can only think that there may be some little typo buried in the code somewhere...
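
Something along these lines would be a cheap test (sketch only: negloglik and par0 are stand-ins for metab.mle's objective function and default start, which we'd need to dig out of the code):

    # multi-start check on the MLE optimization
    starts <- lapply(1:20, function(i) par0 * runif(length(par0), 0.5, 2))
    fits <- lapply(starts, function(p) optim(p, negloglik, method="Nelder-Mead"))
    best <- fits[[which.min(sapply(fits, function(f) f$value))]]
    best$par # do all starts land on the same (suspiciously large) R?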

> Something is wrong with MLE, I think. Bayes is also a little peculiar, both b/c it doesn't agree with Kalman, and because it says that NEP is positive even though DO is undersaturated.

Bayes is really positive! Maybe worth reexamining the Bayes code too.

> I'm tempted to retest these models with a simplified version of the exchange, i.e., one where we don't do the fancy calculus between time steps, because that would make it much easier to ensure that something isn't screwed up in that part of the model.

This is a good idea and a quick check.

> Kalman somehow produces reasonable answers, even though it uses optim() too. With Bayes, it may be that the chains aren't being run for long enough, and for MLE it could be an oddity with getting stuck in a crappy local minimum.

Or maybe something is wrong with the MLE code, typo, etc... I'll look around to see if I can spot anything obvious.
