DeirdreLoughnan / Treetraits


Test data review #4

Open DeirdreLoughnan opened 10 months ago

DeirdreLoughnan commented 10 months ago

In light of recent issues with this trait model and the traitors model, I am redoing the test data for this project.

There are some key differences between the two modeling approaches, which arise from my use of my experimental data:

  1. The trait portion of my model will have only one intercept, for species; site will be included as dummy variables.
  2. The phenology model will include photoperiod as a dummy variable, and forcing and chilling as continuous variables.
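For concreteness, the two structures above can be sketched as linear predictors. This is an illustrative Python sketch, not the actual Stan model; parameter names follow the test-data tables below and all values are the simulated truths:

```python
# Illustrative sketch of the two model structures described above;
# the real model is written in Stan. Values are the test-data truths.
mu_grand = 10.0                                  # grand trait mean
pop_effect = {1: 0.0, 2: 2.0, 3: 3.0, 4: 4.0}    # site dummies, pop1 = baseline

def trait_mean(alpha_sp, site):
    # Trait part: a single species-level intercept plus site dummy variables.
    return mu_grand + alpha_sp + pop_effect[site]

mu_phenosp, mu_forcesp, mu_chillsp, mu_photosp = 80.0, -10.0, -14.0, -15.0

def pheno_mean(forcing, chilling, photo_long):
    # Phenology part: forcing and chilling continuous, photoperiod a 0/1 dummy.
    return (mu_phenosp + mu_forcesp * forcing
            + mu_chillsp * chilling + mu_photosp * photo_long)

print(trait_mean(0.0, 4))        # 14.0
print(pheno_mean(1.0, 1.0, 1))   # 41.0
```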

To start, I ran the trait-only part of the model alone to see whether we find the same issues as with the traitors trait-only model. Overall, I think this model runs much better than the traitors model:

     Parameter Test.data.values  Estimate      X2.5     X97.5
1     mu_grand               10 10.408733 9.0447193 12.083832
2     sigma_sp                5  5.077752 4.1277105  6.307747
3         pop2                2  2.052788 1.9632187  2.144657
4         pop3                3  2.978069 2.8921410  3.061552
5         pop4                4  4.075013 3.9830496  4.155720
6 sigma_traity                1  1.002797 0.9805855  1.025084
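The check behind this table can be sketched as follows (a Python sketch of the recovery check, with the rows copied from the table above):

```python
# Recovery check: compare each posterior summary to the known test value
# and flag whether the truth falls inside the 95% uncertainty interval.
rows = [
    # (parameter, truth, estimate, 2.5%, 97.5%) -- copied from the table above
    ("mu_grand",     10.0, 10.408733, 9.0447193, 12.083832),
    ("sigma_sp",      5.0,  5.077752, 4.1277105,  6.307747),
    ("pop2",          2.0,  2.052788, 1.9632187,  2.144657),
    ("pop3",          3.0,  2.978069, 2.8921410,  3.061552),
    ("pop4",          4.0,  4.075013, 3.9830496,  4.155720),
    ("sigma_traity",  1.0,  1.002797, 0.9805855,  1.025084),
]

for name, truth, est, lo, hi in rows:
    print(f"{name:>12}: bias = {est - truth:+.3f}, "
          f"truth in 95% UI: {lo <= truth <= hi}")
```

For this run every truth is inside its 95% UI and the largest bias is about 0.41 (mu_grand).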

The species-level intercepts for a sample size close to the real data look fairly good to me (Nrep = 8, Npop = 4, Nspp = 50):

[Screenshot, 2023-08-23]

Next I will combine it with the phenology model used in my growth chamber study, but without the phylogeny.

lizzieinvancouver commented 10 months ago

@DeirdreLoughnan Ah, how lovely! This looks good to me.

DeirdreLoughnan commented 10 months ago

@lizzieinvancouver the full joint model runs fairly well, but produces a few somewhat poor estimates; in particular, the estimate for sigma_sp is off by 1.1, mu_phenosp by 4.2, mu_photosp by 1, and sigma_forcesp by 0.6, but all are within the 95% UI:

       Parameter Test.data.values    Estimate        X2.5       X97.5
1       mu_grand             10.0  10.4441841   7.6460133  13.3216813
2       sigma_sp              5.0   6.1673466   4.4243863   8.7681425
3           pop2              2.0   1.9311370   1.6461068   2.2102440
4           pop3              3.0   2.9893503   2.7161513   3.2691316
5           pop4              4.0   3.9709713   3.6889872   4.2405484
6   sigma_traity              1.0   1.0210205   0.9500146   1.0989096
7     mu_forcesp            -10.0 -10.1009704 -11.6134198  -8.5305682
8     mu_chillsp            -14.0 -14.3923631 -15.2717498 -13.5422928
9     mu_photosp            -15.0 -14.0063419 -15.3118170 -12.6064038
10    mu_phenosp             80.0  75.8092541  64.3400733  87.7067775
11 sigma_forcesp              1.0   1.6469020   1.1691591   2.3764052
12 sigma_chillsp              1.0   0.9263713   0.6510505   1.3470959
13 sigma_photosp              1.0   0.7590249   0.1042704   1.6417310
14 sigma_phenosp             30.0  28.0666737  20.6209860  39.5719637
15  sigma_phenoy              3.0   3.0088378   2.9271198   3.0912800
16       beta_tf              0.3   0.3024894   0.1696252   0.4390084
17       beta_tc             -0.4  -0.3732745  -0.4433245  -0.2978961
18       beta_tp             -0.2  -0.2619117  -0.3773023  -0.1475535
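The "off by" values quoted above come from a simple absolute-error check, sketched here in Python with the flagged rows copied from the table:

```python
# Absolute error of the parameters flagged as poorly estimated above;
# (truth, estimate) pairs are copied from the joint-model table.
flagged = {
    "sigma_sp":      (5.0,    6.1673466),
    "mu_phenosp":    (80.0,  75.8092541),
    "mu_photosp":    (-15.0, -14.0063419),
    "sigma_forcesp": (1.0,    1.6469020),
}
for name, (truth, est) in flagged.items():
    print(f"{name}: off by {abs(est - truth):.2f}")
```

Rounded to two decimals these errors are 1.17, 4.19, 0.99, and 0.65, matching the values quoted above.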

The species-level estimates from the trait part of the model are slightly worse than the phenology model estimates:

[Screenshot, 2023-08-26]
lizzieinvancouver commented 10 months ago

@DeirdreLoughnan This looks promising to me! I don't think you can compare trait and sp so well because you have different values for them (by 10X). That's not a complaint -- just a reminder that when you look at the above plots you have to take the scale into consideration (and other differences you have in sample size would matter too). The only value that looks bad to me is mu_photosp... it's not great.

I suggest you up the reps and species number and check that estimates IMPROVE (post the above again). As long as that is happening and Faith doesn't catch anything, then I think it looks good!

DeirdreLoughnan commented 10 months ago

@lizzieinvancouver I agree, I just meant it as a sanity check that the species-level model estimates were not too far off from the simulated values, not as a comparison of the two parts of the model.

Increasing the number of species and replicates by a third does improve the model estimates; mu_photosp is now off by only 0.24.

       Parameter Test.data.values    Estimate        X2.5        X97.5
1       mu_grand             10.0  10.1605632   8.6710522  11.90982879
2       sigma_sp              5.0   4.6938167   3.5964846   6.24729560
3           pop2              2.0   2.0663416   1.9102283   2.22842689
4           pop3              3.0   3.0535842   2.8991202   3.21057612
5           pop4              4.0   4.1630543   4.0033034   4.32502282
6   sigma_traity              1.0   1.0022984   0.9644023   1.04198805
7     mu_forcesp            -10.0 -10.0310663 -11.0709772  -8.93302257
8     mu_chillsp            -14.0 -14.5954942 -15.7084225 -13.45009479
9     mu_photosp            -15.0 -15.2438077 -16.3171013 -14.27234427
10    mu_phenosp             80.0  87.2091821  75.7682370  98.99164329
11 sigma_forcesp              1.0   1.1657436   0.8815548   1.55382457
12 sigma_chillsp              1.0   1.0899414   0.7762494   1.49716057
13 sigma_photosp              1.0   1.0779306   0.8070464   1.44256121
14 sigma_phenosp             30.0  33.8818060  25.9555789  44.43130266
15  sigma_phenoy              3.0   3.0236932   2.9646841   3.08394045
16       beta_tf              0.3   0.2991830   0.2004204   0.39773297
17       beta_tc             -0.4  -0.3412474  -0.4467813  -0.24280313
18       beta_tp             -0.2  -0.1673943  -0.2543025  -0.07252499
[Screenshot, 2023-08-28]

Doubling the number of species and replicates also improves the estimates of the beta trait-by-cue parameters, but mu_photosp is off by a bit more:

       Parameter Test.data.values    Estimate        X2.5       X97.5
1       mu_grand             10.0   8.9267986   7.2861606  10.5361421
2       sigma_sp              5.0   5.1878103   4.1745135   6.5623880
3           pop2              2.0   2.1629220   2.0655901   2.2614118
4           pop3              3.0   3.0575121   2.9593887   3.1592143
5           pop4              4.0   4.0194503   3.9210913   4.1178029
6   sigma_traity              1.0   0.9992280   0.9754701   1.0243269
7     mu_forcesp            -10.0  -9.8318857 -10.7635660  -8.8773510
8     mu_chillsp            -14.0 -14.6297820 -15.3424317 -13.9343588
9     mu_photosp            -15.0 -14.6205509 -15.1389477 -14.0788621
10    mu_phenosp             80.0  80.0859711  71.8293629  88.5917402
11 sigma_forcesp              1.0   1.1481956   0.7420146   1.6217848
12 sigma_chillsp              1.0   1.0471821   0.8261831   1.3415233
13 sigma_photosp              1.0   0.8500683   0.6752768   1.0811460
14 sigma_phenosp             30.0  26.5404128  21.2623168  33.2160940
15  sigma_phenoy              3.0   2.9947200   2.9581781   3.0304403
16       beta_tf              0.3   0.2950475   0.1989360   0.3891516
17       beta_tc             -0.4  -0.3602279  -0.4285898  -0.2941492
18       beta_tp             -0.2  -0.2375688  -0.2886307  -0.1876610
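Pulling mu_photosp out of the three runs above shows the pattern under discussion (a Python sketch; the run labels are mine):

```python
# mu_photosp recovery across the three sample sizes above; truth is -15.0.
truth = -15.0
runs = {
    "base":       -14.0063419,  # original sample size
    "plus third": -15.2438077,  # species and reps increased by a third
    "doubled":    -14.6205509,  # species and reps doubled
}
for label, est in runs.items():
    print(f"{label}: off by {abs(est - truth):.2f}")
```

The errors are 0.99, 0.24, and 0.38, so the doubled run is still better than the base run even though it is slightly worse than the plus-a-third run.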
lizzieinvancouver commented 10 months ago

@DeirdreLoughnan That could just be natural variation in MCMC output -- on quick glance this looks good to me!