bmoallemi closed this issue 4 years ago
No. See Wikipedia.
Ah thanks for clarifying!
Following the wiki (and the references cited therein), shouldn't we replace `est` in the formula above with the average of the jackknife estimates, i.e., `mean(est_jk)`?
Also, do you have any thoughts on applying the jackknife bias reduction?
> Following the wiki (and the references cited therein), shouldn't we replace `est` in the formula above with the average of the jackknife estimates, i.e., `mean(est_jk)`?
Our theoretical results don't justify one over the other. Use of the jackknife estimate of variance is justified by asymptotic linearity (Equation 4.8 in the paper), and asymptotic linearity also implies the asymptotic equivalence of using `est` and `mean(est_jk)` --- if the o_p(1) term in Equation 4.8 were not there, they would be identical.
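To see how small the difference is in practice, here is a quick simulation. It is a Python sketch (the thread's snippets are in R), and the statistic `stat` --- the squared sample mean --- is an arbitrary smooth nonlinear statistic chosen purely for illustration:

```python
import random

random.seed(1)
N = 200
x = [random.gauss(2.0, 1.0) for _ in range(N)]

# Illustrative smooth nonlinear statistic: the squared sample mean.
def stat(v):
    return (sum(v) / len(v)) ** 2

est = stat(x)
est_jk = [stat(x[:i] + x[i + 1:]) for i in range(N)]  # leave-one-out estimates
m_jk = sum(est_jk) / N

# Jackknife variance centered at est vs. centered at mean(est_jk)
V_est = (N - 1) * sum((e - est) ** 2 for e in est_jk) / N
V_mean = (N - 1) * sum((e - m_jk) ** 2 for e in est_jk) / N

print(V_est, V_mean, abs(V_est - V_mean) / V_mean)
```

The identity `V_est - V_mean = (N - 1) * (mean(est_jk) - est)^2` means the `est`-centered version is always at least as large, and for a smooth statistic `mean(est_jk) - est` shrinks fast enough that the relative difference is negligible.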
> Also, do you have any thoughts on applying the jackknife bias reduction?
I haven't had a chance to look into jackknife bias reduction. If you're interested in the topic, it might be worth doing some simulation. I'd be happy to meet and talk about it. Send me an email if you'd like to set something up.
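As a possible starting point for such a simulation, here is a Python sketch (the statistic and sample size are illustrative assumptions, not anything from the paper) of the standard jackknife bias correction, `N * est - (N - 1) * mean(est_jk)`, applied to the plug-in variance --- a classic case where the correction happens to be exact:

```python
import random

random.seed(7)
N = 50
x = [random.gauss(0.0, 3.0) for _ in range(N)]

# Plug-in (biased) variance: divides by N
def stat(v):
    m = sum(v) / len(v)
    return sum((xi - m) ** 2 for xi in v) / len(v)

est = stat(x)
est_jk = [stat(x[:i] + x[i + 1:]) for i in range(N)]
m_jk = sum(est_jk) / N

# Jackknife bias-corrected estimate
est_bc = N * est - (N - 1) * m_jk

# For this statistic the correction is exact: est_bc equals the
# unbiased sample variance (divide by N - 1).
xbar = sum(x) / N
s2 = sum((xi - xbar) ** 2 for xi in x) / (N - 1)
print(est_bc, s2)
```

Because the bias of the plug-in variance is exactly linear in 1/N, the jackknife removes it completely here; for general statistics it only removes the O(1/N) term.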
You have:

```r
V.hat <- (N - 1) * mean((est_jk - est)^2)
```

Shouldn't we be doing something like:

```r
V.hat <- (N / (N - 1)) * mean((est_jk - est)^2)
```

or:

```r
V.hat <- (1 / (N - 1)) * sum((est_jk - est)^2)
```

?
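One way to sanity-check the scaling is a case with a known answer: for the sample mean, the jackknife variance should equal s^2 / N exactly (the usual standard error squared). A Python sketch (translating the thread's R one-liners) comparing the three candidates:

```python
import random

random.seed(3)
N = 40
x = [random.gauss(0.0, 1.0) for _ in range(N)]

def stat(v):
    return sum(v) / len(v)  # the sample mean; for it, mean(est_jk) == est

est = stat(x)
est_jk = [stat(x[:i] + x[i + 1:]) for i in range(N)]

ss_jk = sum((e - est) ** 2 for e in est_jk)
V1 = (N - 1) * ss_jk / N        # (N - 1) * mean(...), the code's version
V2 = (N / (N - 1)) * ss_jk / N  # (N / (N - 1)) * mean(...)
V3 = ss_jk / (N - 1)            # (1 / (N - 1)) * sum(...)

# Known reference value: s^2 / N, with s^2 the unbiased sample variance
s2 = sum((xi - est) ** 2 for xi in x) / (N - 1)
print(V1, s2 / N)
```

Note the two proposed alternatives are algebraically the same estimator (`V2 == V3`), while `(N - 1) * mean(...)` is the one that reproduces s^2 / N here --- it matches the standard jackknife scaling ((N - 1) / N times the sum of squared deviations).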