djhocking / Trout_GRF

Application of Gaussian Random Fields in a Dendritic Network

Unrealistic estimates with Poisson #2

Closed djhocking closed 9 years ago

djhocking commented 9 years ago

When using the Poisson (jnll = -dnorm(count_ip, N_i*detect_ip)) rather than a binomial/multinomial (e.g., count ~ bin(N, detect)), it's possible for the resulting abundance (N, which is lambda_ip in the code) to be lower than the sum of the removal captures. This isn't ideal because we have data with blocknets, so trespass should be essentially 0 and the population should be completely closed; we therefore know that the abundance is at least the sum of the captures (c_sum). In our case, 6 of the ~25 sites have estimated abundances lower than c_sum. The estimates are generally quite close (sometimes even rounding would alleviate the issue), so I'm not overly concerned, but for a methods paper it's worth thinking about some more.

Here are the results:

   c_sum         N_i problem pass_1 pass_2 pass_3
1      2  5.64692914   FALSE      2      0      0
2     19 16.59943823    TRUE     13      6      0
3      5 10.45871664   FALSE      4      1      0
4      4  5.87692710   FALSE      3      0      1
5     41 39.09965187    TRUE     36      5      0
6     60 67.33948789   FALSE     29     22      9
7     74 75.90026910   FALSE     54     16      4
8     29 30.81556512   FALSE     16      7      6
9      3  5.34360595   FALSE      2      1      0
10    21 20.97239012    TRUE     14      2      5
11    29 31.97828711   FALSE     16      5      8
12     1  3.13212761   FALSE      0      1      0
13    62 61.12344278    TRUE     48      9      5
14    20 21.74247106   FALSE     15      5      0
15     6  8.95806693   FALSE      4      2      0
16    23 23.47056684   FALSE     16      6      1
17    13 13.99877807   FALSE      9      3      1
18    16 16.59556384   FALSE      9      5      2
19     9 12.23351022   FALSE      4      1      4
20     7  3.77940218    TRUE      5      2      0
21     8  8.12085111   FALSE      7      1      0
22     1  1.00927762   FALSE      1      0      0
23     1  1.60131358   FALSE      0      1      0
24     1  0.03617079    TRUE      1      0      0
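To see mechanically how the Poisson version can land below c_sum: the spatial field shrinks log-abundance toward neighboring sites, and nothing in the Poisson likelihood forbids the shrunken estimate from dropping below the observed removals. A toy sketch of that effect (made-up counts, a fixed detection probability q, and a simple Gaussian penalty standing in for the GRF — this is an illustration, not the actual Trout_GRF model):

```python
import math

# Removal counts from one hypothetical site (made up for illustration):
c = [5, 2, 0]           # captures on passes 1-3
c_sum = sum(c)          # 7 fish physically removed
q = 0.5                 # per-pass detection probability (assumed known here)
d = [q * (1 - q) ** p for p in range(3)]  # removal detection per pass

def penalized_nll(N, prior_mean=2.0, prior_sd=0.3):
    # Poisson likelihood: c_p ~ Poisson(N * d_p), plus a Gaussian penalty
    # on log(N) standing in for the spatial (GRF) shrinkage toward neighbors.
    nll = sum(N * d_p - c_p * math.log(N * d_p) for c_p, d_p in zip(c, d))
    nll += (math.log(N) - math.log(prior_mean)) ** 2 / (2 * prior_sd ** 2)
    return nll

# Crude grid search for the penalized estimate of N; without the penalty the
# MLE would be c_sum / sum(d) = 8, but shrinkage pulls it below c_sum = 7.
N_hat = min((n / 100 for n in range(10, 2000)), key=penalized_nll)
print(N_hat, c_sum, N_hat < c_sum)
```

Nothing in the objective references the constraint N >= c_sum, which is why a continuous latent lambda can undershoot the removals.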
James-Thorson commented 9 years ago

So first I would try using the bias-correction tool for an unbiased estimate of N. Try running

SD = sdreport( obj, bias.correct=TRUE )
SD$unbiased$value

cheers, jim


James-Thorson commented 9 years ago

Also, if it's easier, you could just add a column with the standard error of log(N) for each row, and we could do a back-of-the-envelope delta-method correction to see if that fixes things.

jim
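For reference, the back-of-the-envelope correction mentioned here is presumably the usual lognormal/delta-method adjustment, exp(mu + sigma^2/2), applied to the log-scale estimate and its standard error. A minimal sketch with made-up numbers:

```python
import math

# Hypothetical values for one site: estimate and SE on the log scale.
log_N_hat = math.log(16.6)
se_log_N = 0.22

# Plug-in estimate (biased low as an estimate of the lognormal mean):
N_plugin = math.exp(log_N_hat)

# Back-of-the-envelope delta-method / lognormal mean correction:
N_corrected = math.exp(log_N_hat + se_log_N ** 2 / 2)

print(N_plugin, N_corrected)  # the correction inflates the estimate slightly
```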


djhocking commented 9 years ago

bias.correct isn’t an option in the version of the TMB package that I have (TMB_1_1 in R).


Daniel J. Hocking

http://danieljhocking.wordpress.com/



James-Thorson commented 9 years ago

Looks like v1.4.0 has it and is an official release. You could also list SE[ log(N) ] for each site and we could inspect by eye.

jim


djhocking commented 9 years ago

Where are you finding version 1.4.0? I don’t get anything to install from CRAN and the github page says version 1.1: https://github.com/kaskr/adcomp/blob/master/TMB/DESCRIPTION


Daniel J. Hocking

dhocking@unh.edu
http://danieljhocking.wordpress.com/



djhocking commented 9 years ago

Sorry, I’m also not sure how to get lambda_ip or log_lambda to show up in the sdreport output, or where to get them from.



James-Thorson commented 9 years ago

Check

https://github.com/kaskr/adcomp/releases



James-Thorson commented 9 years ago

At the end of the CPP put

ADREPORT( lambda_ip );

and then calling

SD = sdreport( obj )

in R will cause SD to include standard errors for lambda_ip.



djhocking commented 9 years ago

I got version 1.4 to install, although it still says version 1.1 in the documentation and sessionInfo(), but it has bias.correct now. Unfortunately, the unbiased estimates are only very slightly different, so there's still the same issue.

   c_sum        N_i    N_unbias       N_sd problem pass_1 pass_2 pass_3
1      2  5.6469292  5.87162583 2.19298061   FALSE      2      0      0
2     19 16.5994382 16.79989210 3.78024117    TRUE     13      6      0
3      5 10.4587167 10.79229583 3.38743776   FALSE      4      1      0
4      4  5.8769271  6.06846185 1.86439378   FALSE      3      0      1
5     41 39.0996518 39.20575963 6.21430365    TRUE     36      5      0
6     60 67.3394876 68.48559645 9.94258144   FALSE     29     22      9
7     74 75.9002692 76.37320369 8.90772487   FALSE     54     16      4
8     29 30.8155650 31.48318978 6.04574853   FALSE     16      7      6
9      3  5.3436058  5.51974701 2.24250630   FALSE      2      1      0
10    21 20.9723899 21.39876245 4.80685493    TRUE     14      2      5
11    29 31.9782870 32.73815055 6.49159468   FALSE     16      5      8
12     1  3.1321277  3.27771064 1.51858361   FALSE      0      1      0
13    62 61.1234427 61.42603711 7.88556533    TRUE     48      9      5
14    20 21.7424710 22.02205158 4.64236673   FALSE     15      5      0
15     6  8.9580670  9.26361172 2.83323563   FALSE      4      2      0
16    23 23.4705668 23.77956884 4.91065861   FALSE     16      6      1
17    13 13.9987782 14.31186194 3.76193189   FALSE      9      3      1
18    16 16.5955638 16.97258515 3.72675340   FALSE      9      5      2
19     9 12.2335102 12.61549269 3.21343877   FALSE      4      1      4
20     7  3.7794022  3.89420944 1.62002112    TRUE      5      2      0
21     8  8.1208510  8.30648900 2.89806700   FALSE      7      1      0
22     1  1.0092777  1.07598052 1.02537210   FALSE      1      0      0
23     1  1.6013136  1.72156252 1.13456841   FALSE      0      1      0
24     1  0.0361708  0.04084859 0.05999582    TRUE      1      0      0

P.S. Here I'm referring to lambda_ip as N.
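As a quick sanity check, the problem flag can be recomputed against the bias-corrected column (the flag printed in the table appears to track the uncorrected N_i; site 10 actually clears c_sum after correction):

```python
# (c_sum, N_unbias) pairs copied from the table above.
rows = [
    (2, 5.87162583), (19, 16.79989210), (5, 10.79229583), (4, 6.06846185),
    (41, 39.20575963), (60, 68.48559645), (74, 76.37320369), (29, 31.48318978),
    (3, 5.51974701), (21, 21.39876245), (29, 32.73815055), (1, 3.27771064),
    (62, 61.42603711), (20, 22.02205158), (6, 9.26361172), (23, 23.77956884),
    (13, 14.31186194), (16, 16.97258515), (9, 12.61549269), (7, 3.89420944),
    (8, 8.30648900), (1, 1.07598052), (1, 1.72156252), (1, 0.04084859),
]

# Sites where the bias-corrected estimate is still below the total removals:
still_low = [i + 1 for i, (c, n) in enumerate(rows) if n < c]
print(still_low)  # → [2, 5, 13, 20, 24]
```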

James-Thorson commented 9 years ago

Great, thanks for checking! I'll add a multinomial option tonight or tomorrow.
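For reference, the constraint Dan describes (abundance can't fall below the total removals) is automatic under the standard removal-sampling multinomial (Zippin-type) formulation, since any N < c_sum has zero likelihood. A minimal sketch, assuming per-pass detection q and cell probabilities q(1-q)^(j-1) — a generic illustration, not necessarily the exact option added to the repo:

```python
import math

def removal_loglik(N, q, c):
    """Multinomial removal log-likelihood: given integer abundance N and
    per-pass detection q, the c[j] fish caught on pass j+1 fall in a cell
    with probability q*(1-q)**j, and the remaining N - sum(c) fish are
    never captured. Returns -inf whenever N < sum(c), so the constraint
    N >= c_sum is built into the support (unlike the Poisson version)."""
    c_sum = sum(c)
    if N < c_sum:
        return float("-inf")
    p = [q * (1 - q) ** j for j in range(len(c))]
    p_miss = 1 - sum(p)  # probability a fish evades all passes
    ll = math.lgamma(N + 1) - math.lgamma(N - c_sum + 1)
    ll -= sum(math.lgamma(cj + 1) for cj in c)
    ll += sum(cj * math.log(pj) for cj, pj in zip(c, p))
    ll += (N - c_sum) * math.log(p_miss)
    return ll

c = [5, 2, 0]  # example pass counts (made up)
lls = {N: removal_loglik(N, 0.5, c) for N in range(sum(c), 30)}
N_hat = max(lls, key=lls.get)
print(N_hat)  # the MLE respects N >= sum(c) by construction
```

Integrating this over a latent lambda (or profiling N out per site) is where the finite-sampling bookkeeping comes in, but the support restriction is the key difference from the Poisson observation model.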


James-Thorson commented 9 years ago

OK, there's at least one computationally efficient solution that ensures the model interprets samples via finite-sampling techniques. But let's chat on Monday first about goals and our game plan. Does that sound OK?


djhocking commented 9 years ago

Sounds good. Talk to you on Monday.
