Closed — DeirdreLoughnan closed this issue 2 years ago
The problem appears to be that positive betaTraitCue values lead to positive betaCueSp values unless alphaCueSp is very negative and balances them out.
Doesn't that just mean the model is non-identifiable? I am not sure we can fix it then ... can we? (Though I am not sure. BTW, are we talking about our data or the test data?)
I think there might be a problem with my simulation code, because when I ran the model for LNC with wide priors allowing for very negative muCueSp values, the model ran fine and allowed a positive betaTraitxCue value, as we were hoping.
Bulk effective sample sizes are too low, but otherwise SLA also runs: we see positive effects for chilling and forcing, but a slightly negative one for photoperiod.
Seed mass is ok, and the direction of the trait/cue response relationship is unchanged.
I tried to run the height data, but I can't find the input file height_subsampled.csv. I pushed the updated priors, though, in case someone with this file wants to run that model. I also haven't managed to put any model output on midge because of a permissions issue.
@FaithJones Annoying on the test data, but exciting we seem to be able to improve things for the real data... have a good weekend!
@FaithJones @legault @lizzieinvancouver I just pushed new figures! It is interesting to see that
The cue slopes are now in the direction that makes more sense to me.
And the figure of the slopes is also now more in line with what we predicted. Surprisingly, the effect of forcing on height is now positive though.
@DeirdreLoughnan @FaithJones Nice! Excellent catch and fix. Someday we might also re-make the ones that decompose the effect into the two parts and put them together (the ones from the retreat), though not critical.
We fixed this!
We realized we might be having some issues with the traitors model for our leaf traits. Unlike with height and seed mass, we actually predict SLA and LNC to have a positive slope, with high trait values being less responsive to cues. But our model seemed to struggle to load any of the SLA variation onto the slope; it was mostly loaded onto the intercept.
@FaithJones has been looking into it and might have found the issue. The problem appears to be that positive betaTraitCue values lead to positive betaCueSp values unless alphaCueSp is very negative and balances them out. However, our current priors do not allow betaCueSp values to be negative, since alphaCueSp is constrained to be close to 0.
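A minimal numeric sketch of the constraint described above, assuming the model centers each species' cue slope at alphaCueSp + betaTraitxCue * trait (the exact Stan parameterization may differ, and the numbers below are made up for illustration):

```python
# Hypothetical sketch, not the actual traitors Stan model.
# Assumed structure: each species' expected cue slope is
#   betaCueSp = alphaCueSp + betaTraitxCue * trait
# With positive trait values and a positive betaTraitxCue, the expected
# cue slope can only come out negative when alphaCueSp is negative
# enough to offset the trait contribution.

def expected_cue_slope(alpha_cue_sp, beta_trait_x_cue, trait):
    return alpha_cue_sp + beta_trait_x_cue * trait

# Illustrative values: trait (e.g. SLA) around 10, positive trait-cue effect.
beta_trait_x_cue = 1.2
trait = 10.0

print(expected_cue_slope(0.0, beta_trait_x_cue, trait))    # 12.0 (positive)
print(expected_cue_slope(-15.0, beta_trait_x_cue, trait))  # -3.0 (negative)
```

This is why a prior keeping alphaCueSp near 0 effectively forces betaCueSp positive whenever betaTraitxCue is positive.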
@FaithJones tried widening the priors on alphaCueSp down to -30, allowing the alphaCueSp value to balance a positive relationship between cue and trait. But this produced a lot of divergent transitions and multimodality.
After chatting, I tried running the model with a more moderate prior of -15. This model did not produce any divergent transitions, but 8000 iterations exceeded the max tree depth and the Rhat values were bad. I am also getting multimodality in the model output.