We're currently using `stats::binom.test()` to generate uncertainty in estimates (i.e. a 95% CI using the Clopper-Pearson method). For most use cases the slowdown will be limited, but this could be made much faster for longer time series estimation (e.g. `cfr_time_varying()`).
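For reference, this is the pattern a faster method would replace (a minimal sketch; the actual call sites in the package may differ): `stats::binom.test()` returns an exact Clopper-Pearson interval, but each estimate needs its own call.

```r
# Current approach (sketch): one binom.test() call per estimate
ci <- stats::binom.test(x = 50, n = 100, conf.level = 0.95)$conf.int
ci
#> [1] 0.3983211 0.6016789
```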
For example, we could switch to the Wilson score method.
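Concretely, the Wilson score interval for $\hat{p} = x/n$ with normal quantile $z$ is

$$
\frac{\hat{p} + \frac{z^2}{2n} \pm z\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n} + \dfrac{z^2}{4n^2}}}{1 + \dfrac{z^2}{n}},
$$

which needs only one `qnorm()` call and elementwise arithmetic, so it vectorises over an entire time series: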
```r
binom_wilson <- function(x, n, conf.level = 0.95) {
  p_hat <- x / n
  # Two-sided normal quantile (1.96 for a 95% interval)
  z <- stats::qnorm(1 - (1 - conf.level) / 2)
  denom <- 1 + z^2 / n
  center <- (p_hat + z^2 / (2 * n)) / denom
  half_width <- z * sqrt((p_hat * (1 - p_hat) + z^2 / (4 * n)) / n) / denom
  # pmax()/pmin() keep the bounds within [0, 1] while staying vectorised,
  # unlike max()/min(), which would collapse vector inputs to a scalar
  lower <- pmax(0, center - half_width)
  upper <- pmin(1, center + half_width)
  list(estimate = p_hat, conf.int = c(lower, upper))
}

# Example usage
binom_wilson(50, 100)
#> $estimate
#> [1] 0.5
#>
#> $conf.int
#> [1] 0.4038315 0.5961685
```
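As a rough speed check, here's a minimal benchmark sketch (simulated data; timings are machine-dependent and shouldn't be read as package benchmarks):

```r
# Simulated series of 1000 time points (hypothetical sizes and counts)
set.seed(1)
n_trials <- sample(50:500, 1000, replace = TRUE)
x_events <- rbinom(1000, size = n_trials, prob = 0.1)

# Clopper-Pearson: one binom.test() call per time point
system.time(
  cp <- vapply(
    seq_along(x_events),
    function(i) as.numeric(stats::binom.test(x_events[i], n_trials[i])$conf.int),
    numeric(2)
  )
)

# Wilson: a single vectorised call over the whole series
system.time(wilson <- binom_wilson(x_events, n_trials))
```

The Wilson version does a constant amount of elementwise work, while `binom.test()` pays per-call overhead (input checks, exact p-value and quantile computations) at every time point.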