lizzieinvancouver / ospree

Budbreak review paper database

Work on phenology-trait joint model #390

Closed DeirdreLoughnan closed 3 years ago

DeirdreLoughnan commented 3 years ago

@FaithJones, @dbuona, @legault, & @DarwinSodhi. We need to continue to develop the phenology-trait model, adding more realistic priors, and eventually running on real data.

I look forward to hearing the updates!

DeirdreLoughnan commented 3 years ago

I have been working on the traitors model, but could use some help!

@legault would you be able to take a look at the simulation code and the priors?

To summarize: after we looked over the model code together and agreed it looked good, I changed the priors to match the test data, tried tighter priors, and then increased the size of the test dataset. The test data also looked correct to me, but maybe I am missing something.

Some of these changes seemed to make the model worse. Currently it takes about 3.5 h to run and produces 62 divergent transitions.

legault commented 3 years ago

Yes, I had a look today and pushed some changes that led to better fits

lizzieinvancouver commented 3 years ago

@legault Thank you!

@DeirdreLoughnan -- you can see changes here. It looks like maybe your priors were so far off that they upset the model? Based on this, you might make plots comparing your prior with the posterior for each parameter (one on top of the other) part of your workflow for all your models. If your posterior is close to either edge of your prior, that suggests your prior is constraining your posterior -- that's fine if you have a strong prior, but we usually don't.

Remember that in most cases we don't have strong priors, so bigger is better in that way (but not crazy big).
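One way to make this check routine is to flag any parameter whose posterior draws pile up in the tails of its prior. This is a hypothetical Python sketch (the project code is in R, and the names and thresholds here are made up), but the logic carries over directly:

```python
import numpy as np

def prior_constrains_posterior(prior_draws, post_draws, tail=0.05, frac=0.5):
    """Flag a parameter whose posterior piles up in the prior's tails.

    If more than `frac` of the posterior draws fall outside the prior's
    central (1 - 2*tail) interval, the prior is likely constraining the fit.
    """
    lo, hi = np.quantile(prior_draws, [tail, 1 - tail])
    outside = np.mean((post_draws < lo) | (post_draws > hi))
    return outside > frac

rng = np.random.default_rng(1)
prior = rng.normal(0, 1, 10_000)         # a tight N(0, 1) prior
post_ok = rng.normal(0.3, 0.2, 4_000)    # posterior well inside the prior
post_bad = rng.normal(2.5, 0.2, 4_000)   # posterior pushed against the edge

print(prior_constrains_posterior(prior, post_ok))   # False
print(prior_constrains_posterior(prior, post_bad))  # True
```

The overlay plot is still worth making; a numeric flag like this just catches the worst cases automatically when a model has many parameters.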

legault commented 3 years ago

Yeah, I think tight priors are important for calibrating the modeling machinery (i.e., can it recapitulate known parameter distributions?), but ultimately for data we'd want to start with bigger priors (unless we have some expert knowledge that might tighten them).

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver @legault I ran the traitors model for a larger number of species (Nspp = 100) and made some of the variances smaller. The code to generate the data can be found here.

Some of the estimates are bad, but I ran this model with just 4000 iterations and no additional control settings, and it produced warnings associated with low ESS, etc. Running the model with so many species and extra settings on the tree depth will take a very long time.

Screen Shot 2021-05-10 at 12 17 59 PM

Is there anything that I am misunderstanding that might be an easier fix? Thanks for your help!

FaithJones commented 3 years ago

@DeirdreLoughnan I took a look at your code and have a few suggestions:

  1. Maybe try plotting your simulated data to make sure it looks sensible. This could help in case your simulation code has an error somewhere that is not obvious.
  2. Prior predictive checks should really help too. At the very least, you should get predicted trait and phenology y values from the two sub-models and overlay your predicted data. You are welcome to reach out to me again about this.
  3. In your simulation code, alpha.trtsp is centred around 0. This means the model asks how each species' deviation from a general trait trend affects its phenological response. See lines 38 and 39 of your R script. But in the Stan model you have mu_grand + muSp[isp] (line 93 of the Stan file). In this case the model is asking how a species' specific trait value, rather than its offset, affects its phenological response. We discussed both options in our meetings, so I can see why you got confused. For now, stick with the structure in your simulated code - it's the offsets we are interested in rather than absolute trait values.
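The prior predictive check in point 2 can be sketched roughly as follows (Python for illustration, though the project uses R/Stan; the prior means and the 0-200 day-of-year window are made-up placeholders): draw parameters from the priors, simulate phenology outcomes, and check that the simulated range is biologically plausible before ever fitting.

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws, n_obs = 200, 50
forcing = rng.uniform(0, 15, n_obs)  # hypothetical standardized forcing values

# Draw parameters from (assumed) priors, then simulate phenology dates.
sims = np.empty((n_draws, n_obs))
for d in range(n_draws):
    alpha = rng.normal(30, 10)      # species-level intercept prior
    beta = rng.normal(-2, 2)        # forcing slope prior
    sigma = abs(rng.normal(0, 5))   # residual scale prior (half-normal)
    sims[d] = rng.normal(alpha + beta * forcing, sigma)

# If the bulk of simulated budbreak days falls far outside a plausible
# 0-200 day-of-year window, the priors need rethinking.
print(np.quantile(sims, [0.05, 0.5, 0.95]))
```

Overlaying a density of these simulated y values on the observed (or test) y values makes prior problems visible at a glance.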
DeirdreLoughnan commented 3 years ago

Thanks @FaithJones I have set aside some time tomorrow to work on this and will follow up if I have any questions.

@lizzieinvancouver also suggested setting betatraitxpheno to zero and to try running the test data using linear models to see if it is the data that is the issue. Lizzie said she will work on this a bit next week too!

DeirdreLoughnan commented 3 years ago

@FaithJones to clarify your third point above, are you saying that line 93 should just be: `betaForceSp[isp] = alphaForceSp[isp] + betaTraitxForce * muSp[isp]`

FaithJones commented 3 years ago

@DeirdreLoughnan yes, remove the grand alpha for now

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver @legault @FaithJones I have given this my best, but need some help. I have tried running the two models using linear models on lines 236-256 of the simulation code. I also tried simulating the data and plotting the densities on lines 291-368.

@legault I also plotted simulations using the test data values against simulations using the generated model values (lines 291-460) as you suggested. The values of the predicted phenology and predicted traits are very different, but it might be an issue with my simulating code.

It would be great to have others look over what I have done and help me with the interpretation and figuring out where to go from here.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan First, a friendly reminder to make sure shared code runs for everyone. It could be my computer, but if I had to guess it's more likely a missing package:

> mu.grand.prior <- rtruncnorm(1000, a = 0, b = 100 ,mean = 20, sd = 1) # since heights can't be negative, I am making this truncated
Error in rtruncnorm(1000, a = 0, b = 100, mean = 20, sd = 1) : 
  could not find function "rtruncnorm"
lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Next .... wow! You have done so much work on this code. I made comments in the code, you can see them here if you look at what changed.

Your trait code looks fine to me, though I am not 100% committing to that as I didn't check it as fully as I would like (i.e., write a little Stan model and check the output). I think your next step should be to make up phenological data without any TraitXCue effects (you could just comment the whole chunk out) and make sure you can recover the slopes and intercepts in lmer before adding more complexity.

For big code like this you have to be very dull and work on each piece, testing as you go. I don't think you have enough of those checks for such big code, so I suggest we back up and build in a series of checks before we add in the traitxcue stuff.

I made a lot of little comments; many of them are probably NOT the code's actual issue, but are things I might do differently to avoid issues.

Happy to look at this again once you look at the comments! I am in the field most of next week but can make time to chat if you need.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan I had a couple more thoughts on your sim code ... one is that you could use a fixed-effects model to test your trait data (trait measure ~ species + study) as well.

The other is that you may be getting lost in the math of the 'joint model,' which would be best explained by some TeX code. Remember that when doing the joint trait model I may write it as two lines:

\begin{align}
y_{pheno, i} &= \alpha_{pheno, sp[i]} + \beta_{forcing, sp[i]} F_i + \epsilon_{pheno, i} \\
\beta_{forcing, sp} &= \alpha_{forcing, sp} + \beta_{trait \times pheno} \alpha_{trait, sp}
\end{align}

But you could just as easily write it without \beta_{forcing, sp} ...

y_{pheno, i} = \alpha_{pheno, sp[i]} + (\alpha_{forcing, sp[i]} + \beta_{trait \times pheno} \alpha_{trait, sp[i]}) F_i + \epsilon_{pheno, i}

lizzieinvancouver commented 3 years ago

I am wondering if you are trying to code \beta_{forcing, sp} as well as \alpha_{forcing, sp} and \beta_{trait \times pheno}, even though \beta_{forcing, sp} is just a combination of \alpha_{forcing, sp} and \beta_{trait \times pheno} ...?
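This distinction can be checked numerically. In the hypothetical Python sketch below (all parameter values made up), the two-line and one-line versions of the model produce identical predictions, which is why \beta_{forcing, sp} should be a derived quantity in the code rather than a separately sampled parameter:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sp, n_obs = 10, 200
sp = rng.integers(0, n_sp, n_obs)   # species index for each observation
F = rng.uniform(0, 15, n_obs)       # forcing values

alpha_pheno = rng.normal(30, 5, n_sp)    # species phenology intercepts
alpha_forcing = rng.normal(-2, 1, n_sp)  # species forcing intercepts
alpha_trait = rng.normal(0, 1, n_sp)     # species trait offsets
beta_trait_pheno = 0.5                   # trait-by-phenology effect

# Two-line version: build beta_forcing_sp first, then predict.
beta_forcing = alpha_forcing + beta_trait_pheno * alpha_trait
y_two_line = alpha_pheno[sp] + beta_forcing[sp] * F

# One-line version: substitute the definition of beta_forcing_sp directly.
y_one_line = alpha_pheno[sp] + (alpha_forcing[sp] + beta_trait_pheno * alpha_trait[sp]) * F

print(np.allclose(y_two_line, y_one_line))  # True
```

If a Stan model both samples betaForceSp as a free parameter and defines it from alphaForceSp and betaTraitxForce, the extra parameter is non-identifiable and the chains will struggle.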

DeirdreLoughnan commented 3 years ago

Thanks @lizzieinvancouver for looking over the code. While I am still working on getting the phenology-only model working, I have a few clarifying questions and responses to your comments:

  1. Your comment on line 129 asked about the difference between lines 103-118 and 120-144. In the first chunk of code I am simulating the cue values, i.e., the F_i, C_i, and P_i values. The second chunk calculates the species-level effect of forcing: the mu and sigma values for each cue are the mean and variance of the distribution used to simulate the \alpha_{forcing, sp}, which in turn is used to calculate the \beta_{forcing, sp}.

  2. I have definitely been lost in the model's math at times, but I don't think I am currently. In the calculation of y_{pheno} (line 170), I am only including the \beta_{forcing, sp} value, which I calculate above on line 154 as:

\beta_{forcing, sp} = \alpha_{forcing, sp} + \mu_{trait, sp} \beta_{trait \times forcing}

If this seems wrong, then perhaps we can meet to discuss it sometime this week.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Ah! Okay, I think I am getting lost in the terminology! I looked at your code again and -- as best I can tell on quick glance it looks okay there.

I see the fixed-effects model was not so helpful for getting species-level estimates for your traits part (to check the code quicker) ... did you treat species and study as numeric? The aim would be to treat them as factors (as that's how Stan will eventually treat them).
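The factors-versus-numeric point can be sketched as follows (hypothetical Python with made-up effect sizes; the real check would use lm/lmer in R): coding species and study as indicator columns recovers the simulated species contrasts, whereas treating the integer labels as a single numeric predictor would force a meaningless linear trend across species.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_sp, n_st = 400, 10, 5
species = rng.integers(0, n_sp, n)
study = rng.integers(0, n_st, n)
sp_eff = rng.normal(0, 3, n_sp)  # true species effects
st_eff = rng.normal(0, 1, n_st)  # true study effects
trait = 20 + sp_eff[species] + st_eff[study] + rng.normal(0, 0.5, n)

# Factor coding: one indicator column per species/study level (first dropped).
X = np.column_stack(
    [np.ones(n)]
    + [(species == s).astype(float) for s in range(1, n_sp)]
    + [(study == s).astype(float) for s in range(1, n_st)]
)
coef, *_ = np.linalg.lstsq(X, trait, rcond=None)

# Recovered species contrasts should match the simulated ones.
est_contrasts = coef[1:n_sp]
true_contrasts = sp_eff[1:] - sp_eff[0]
print(np.max(np.abs(est_contrasts - true_contrasts)))
```

This is exactly what R's `lm` does under the hood when species and study are factors; when they are numeric, the design matrix collapses to two columns and the species-level estimates are lost.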

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Your next step of looking at Faith's code sounds fine ... I think you can also work on:

I am trying to poke around today as I have time.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Can you please check the code you just pushed? It's generated new errors in several model spots (reminder, please take the time to RUN code before you push it).

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver are you getting an error from the Stan code? I had changed study and species to factors to test whether that changed the linear model with fixed effects. I didn't realize that would mess up the Stan code.

Sorry, I just commented out those lines and re-pushed the code.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan I was getting lmer and stan errors, but it looks to be running now. Thanks!

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan So, can you confirm if your fake data model is returning the parameters correctly, but slowly and with lots of model fit issues?

From looking at the chains, it looks like mu_grand is especially struggling ...

unhappy

Here's a better chain:

happy

Can you confirm in your trait-only Stan model that all the chains were well mixed?

lizzieinvancouver commented 3 years ago

Wait! I just fixed the images, they were reversed.

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver to clarify, you are referring to the full joint model here? If so, then I would say that the fake data is returning the values well for the traits portion, but not for the betacuexTrait factors, or really for the cue values. But yes, it runs pretty slowly and has some fitting issues. In some of the earlier iterations, the chains looked a lot better for mu_grand than in this plot.

I just re-ran the traits-only model with 8000 iterations and a max treedepth of 15; I still get the error message that bulk and tail effective sample sizes are low and that running more iterations would help. And yes, the chains are still not well mixed.

Does this suggest that there are issues with how we have defined the parameter space and the model is having trouble searching it?

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Yes, I was looking at the full joint model.

I think this means something is wrong. For a model like this (not super complicated, not non-linear) I would not be upping iterations and playing with control -- I really only do that when I am using real data, honestly. Do not use those tricks when testing your model code. So we should be troubleshooting a deeper issue. Here's the order of events I suggest (and I recommend these over comparing your code to Faith's as having other code to look at is not as helpful as learning to troubleshoot):

  1. Make a nice, compact version of your R file that runs just your traits model. Make sure you can get nice results back that match your parameters and that the chains look well mixed. Ideally this should not take >4,000 iterations.
  2. Assuming that works, make up a forcing-only phenology model and check via lmer, rstan or such that your code for phenology alone is correct (maybe make a separate R file for this also).
  3. Merge the two models only once 1 and 2 are done.
  4. If all goes well we could add in chilling and photoperiod. To do this, go back and do step 2 (adding chill and photo) then merge.

Feel free to post plots, summaries, code etc. here so I can see how things look!
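Step 2 of this list can be sketched as follows (a hypothetical Python stand-in for the lmer check, with assumed parameter values): simulate forcing-only phenology data and confirm a simple per-species regression recovers the slopes before trusting the Stan version.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sp, n_rep = 30, 40
true_alpha = rng.normal(30, 5, n_sp)  # species intercepts (day of budbreak)
true_beta = rng.normal(-2, 1, n_sp)   # species forcing slopes
sigma_y = 2.0                         # residual standard deviation

# Simulate: each species observed across a range of forcing values.
F = rng.uniform(0, 15, (n_sp, n_rep))
y = (true_alpha[:, None] + true_beta[:, None] * F
     + rng.normal(0, sigma_y, (n_sp, n_rep)))

# Recover slopes with per-species ordinary least squares.
est_beta = np.array([np.polyfit(F[s], y[s], 1)[0] for s in range(n_sp)])

# Estimates should sit close to the values we simulated from.
print(np.max(np.abs(est_beta - true_beta)))
```

If a plain regression on the simulated data cannot recover the slopes, the problem is in the simulation code, not the Stan model, and that's much cheaper to find this way.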

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver the new code for the trait-only model is here and the associated Stan code here. I have been working on the traits-only model and included a lot of notes to keep track of the changes I made. I think the current model is pretty good: it runs with only 4000 iterations, produces no warning messages, and the chains are mixing:

Screen Shot 2021-06-11 at 10 32 10 AM

The n_eff are a little low for muSp and muStudy, but I think they are at the low end of what is acceptable. And I think the model estimates are pretty good:

Screen Shot 2021-06-11 at 10 31 43 AM

I did try running the model with more species and more studies (30 of each) and then I did get back a warning message that the Bulk ESS is too low. I had thought that increasing the number of studies and species would have fixed this issue, not make it worse. What other issues might this suggest?

DeirdreLoughnan commented 3 years ago

I have also worked on a forcing-only model. The model runs with 4000 iterations, does not produce any warning messages, and the chains mix well. Initially I ran the model with 20 species; while the estimates were close, the estimate for mu_phenosp was slightly off:

Screen Shot 2021-06-14 at 1 22 14 PM

@lizzieinvancouver suggested increasing the number of species. This produced the following output:

Screen Shot 2021-06-14 at 2 22 04 PM

I also tried increasing the muPhenoSp prior variance to 20, since it seemed small. This resulted in worse estimates for the forcing parameters:

Screen Shot 2021-06-14 at 1 23 35 PM

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Thanks! Can you do a run with the change in priors with the old number of species? It's hard to compare model output when we change two things at once.

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver I updated the model comment above; just increasing the Nspp does bring the mu_phenosp estimate closer to the true value, and I don't think the rest of the values look too bad.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Thanks! Looks good to me also. Onward to the joint (forcing only) model I think!

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver I have been playing around with different values for mu.pheno.sp, specifically the more realistic prior of (30, 10) and a larger Nspp:

Screen Shot 2021-06-14 at 4 13 30 PM

Given that the mean is much smaller, a difference of ~2.6 seems worse.

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan I don't think you can scale it that directly ... the big test is to up spp number and/or reps and see if it gets better.

DeirdreLoughnan commented 3 years ago

Thanks @lizzieinvancouver! I have moved on to working on the forcing only model and associated Stan code.

The only change I made to the code was to increase the number of species in the trait model to 30 as well. The model runs fairly fast with 4000 iterations and does not produce any warnings. The chains are well mixed and the n_eff and Rhat look good. The model output also looks surprisingly great:

Screen Shot 2021-06-15 at 10 00 45 AM

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan That looks amazing. Way to go!

If I were you, my next step would be to add chill and then photo (one at a time) to this model. If at any point you see issues that you can't track down, you should backtrack and add chill and then photo to the phenology-only model (then build back up). Make sense?

DeirdreLoughnan commented 3 years ago

Thanks @lizzieinvancouver!

As a quick update, I have been working on adding chilling to the joint model. This did require me to go back to the phenology only model and work out the kinks, but I am now getting pretty decent estimates from a joint model with chilling: Screen Shot 2021-06-18 at 4 21 09 PM

Hopefully adding photoperiod will be easier!

DeirdreLoughnan commented 3 years ago

I added photoperiod to the test data and Stan code, and the estimates were pretty good. After experimenting with the priors for the betaTraitcues, these are the best estimates I could come up with: Screen Shot 2021-06-18 at 8 12 41 PM

The ones that I am concerned about are sigma_study, mu_forcesp, and sigma_photosp. But the chains are well mixed and the n_eff and Rhat look good.

lizzieinvancouver commented 3 years ago

> Thanks @lizzieinvancouver!
>
> As a quick update, I have been working on adding chilling to the joint model. This did require me to go back to the phenology only model and work out the kinks, but I am now getting pretty decent estimates from a joint model with chilling: Screen Shot 2021-06-18 at 4 21 09 PM
>
> Hopefully adding photoperiod will be easier!

@DeirdreLoughnan This model looks very good to me!

lizzieinvancouver commented 3 years ago

> I added photoperiod to the test data and Stan code, and estimates were pretty good. After experimenting with the priors for the betaTraitcues, these are the best estimates I could come up with: Screen Shot 2021-06-18 at 8 12 41 PM
>
> The ones that I am concerned about are sigma_study, mu_forcesp, and sigma_photosp. But the chains are well mixed and the n_eff and Rhat look good.

@DeirdreLoughnan Amazing work on this! It's not surprising to me that as we add parameters, the model even with real data struggles a little. Can you say why you are concerned about the sigma_study, mu_forcesp, and sigma_photosp parameters? Is it because they are not mixing super well in some way? (They look okay to me, but you are looking more deeply at the models.) I think the thing to check would be (again) upping some of the species numbers and seeing if things improve.

DeirdreLoughnan commented 3 years ago

@lizzieinvancouver I think I did it! I was just concerned that the estimated value for mu_forcesp was so small (-1) that being off by 0.24 was considerable.

I did increase the number of studies, and that did not really have an effect. But after increasing the number of species to 40, I can convince myself that the model is doing a pretty good job!

Screen Shot 2021-06-22 at 5 47 50 PM

I guess the next step is to get it running with real data!

lizzieinvancouver commented 3 years ago

> I guess the next step is to get it running with real data!

I agree!

It's great that we have a sense of the model's accuracy issues. Remember that if you run into issues you can also build new fake data more similar to the real data (in terms of parameter values, species, studies, and perhaps imbalance in rep numbers, etc.).

lizzieinvancouver commented 3 years ago

@DeirdreLoughnan Check out the species list you get running this code but set to:

`use.flags.for.allspp.utah <- TRUE`

And all the other flags around there should be false.

FaithJones commented 3 years ago

I'm closing this issue because we have moved on to new issues