xhdong-umd closed this issue 6 years ago.
Yes, as you thin the data, the effective sample size of speed (which I don't have printed out in summary() yet) goes to zero, the confidence intervals widen out, and the point estimate becomes upwardly biased. The confidence intervals remain pretty good, though. As you can see here, the lower confidence interval has bottomed out at zero, while the upper confidence interval is 200x the point estimate. The effective sample size for speed here is probably less than 1. That estimate will be going into summary() very soon.
I'll leave this issue open until I update summary().
OK. Once summary() is updated, I need to check whether all of my model summary code needs to be adjusted.
We can also note this in the app help for using sampled data in app testing, so users will not mistake it for a bug.
summary.ctmm() now reports a DOF[speed].
On another note, ctmm.select() is now automating some multi-stage fitting, which should help when fitting with errors.
These are the DOF[speed] values for my test data:
> summary_dt$`DOF speed`
[1] 0.000000e+00 4.481233e-02 0.000000e+00 2.425004e-08 0.000000e+00 8.340052e+00 6.218175e+01
[8] 0.000000e+00 7.294972e+00 0.000000e+00 0.000000e+00 1.251285e+01 7.169899e+00
Usually I round the numbers in the model summary table to 3 digits after the decimal point. If I round these values too, some of the smaller values will appear to be exactly 0. Is that OK?
Rounding to zero is fine. These numbers are going to be compared to the number of sampled locations, so even rounding to the nearest tenth would be acceptable.
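As a quick sanity check in base R (the vector below is copied from the DOF[speed] output earlier in this thread), rounding to 3 decimal places does collapse the tiny values to 0 while leaving the larger ones readable:

```r
# DOF[speed] values from the test data above
dof_speed <- c(0.000000e+00, 4.481233e-02, 0.000000e+00, 2.425004e-08,
               0.000000e+00, 8.340052e+00, 6.218175e+01)

# Rounding to 3 digits: values like 2.425004e-08 become 0,
# which is acceptable since these are compared against the
# number of sampled locations.
round(dof_speed, 3)
```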
I found that the model fitting result can sometimes have very large speed values. Is this normal? I only tested the sampled data instead of the full data, so it could be that the small sample makes the fit more difficult.