Could you please read this thread and try the development version of marginaleffects
from GitHub?
This sounds like a very similar issue:
https://github.com/vincentarelbundock/marginaleffects/issues/840#issuecomment-1634156285
The last comment in that thread shows how to use the new argument in the development version: https://github.com/vincentarelbundock/marginaleffects/issues/840#issuecomment-1637165327
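For reference, a sketch of the usual way to install the development version from GitHub (assuming the standard remotes workflow):

# install.packages("remotes")   # if remotes is not already installed
remotes::install_github("vincentarelbundock/marginaleffects")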
Thank you very much for such a quick response! The problem is solved. You nailed it perfectly!
> avg_slopes(mod, newdata = org_data_glm, conf_level = 0.9, numderiv = list("fdcenter", eps = 1e-10))$conf.low
[1] -0.04781518
> avg_slopes(mod, newdata = org_data_glm, conf_level = 0.9, numderiv = list("fdcenter", eps = 1e-10))$std.error
[1] 0.01801581
which is now very close to both margins and the "classic" procedure!
options(scipen = 99999)
# L_CI
> avg_slopes(mod, newdata = org_data_glm, conf_level = 0.9, numderiv = list("fdcenter", eps = 1e-10))$conf.low - wald2ci(0, 56, 1, 55, conf.level = 0.9, adjust="Wald")$conf.int[1]
[1] -0.00000005986086
which is a difference of about 0.00013% relative to Wald's lower bound.
# SE
> avg_slopes(mod, newdata = org_data_glm, conf_level = 0.9, numderiv = list("fdcenter", eps = 1e-10))$std.error - sqrt( (0 * (1 - 0))/56 + ((1/55) * (1 - (1/55)))/55)
[1] 0.00000003665455
which makes 0.0002% w.r.t. Wald's.
I'm sorry if I'm asking about an obvious thing, but is this stated somewhere in the documentation? If not, I think it may be worth adding; more people from the pharmaceutical industry may search for this issue.
Once again - huge thank you!
Great! Let's leave this issue open until the next release (when CRAN comes back from vacation in a couple weeks).
Version 0.14.0 was just submitted to CRAN with the better step size selection for numerical derivatives.
Dear Authors,
With the following data:
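A minimal reconstruction of the data implied by the calls below (0 events out of 56 in one group, 1 event out of 55 in the other; the names org_data_glm, group, and success are illustrative):

# Reconstructed example data: group 0 has 0 successes out of 56,
# group 1 has 1 success out of 55 (variable names are illustrative)
org_data_glm <- data.frame(
  group   = rep(c(0, 1), times = c(56, 55)),
  success = c(rep(0, 56), rep(0, 54), 1)
)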
I first run prop.test() to calculate the 90% two-sided CI for the difference in the proportion of successes:
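Something along these lines (a sketch; correct = FALSE gives the uncorrected Wald-type interval that matches the value quoted below):

# 90% two-sided CI for the difference in proportions, no continuity correction
# (prop.test may warn that the chi-squared approximation is inaccurate with a zero count)
prop.test(x = c(0, 1), n = c(56, 55), conf.level = 0.90, correct = FALSE)$conf.int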
Let's note the lower bound of the CI: L_CI = -0.04781512
Now, we will reproduce this with the classic 2-sample Wald's "z" procedure:
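A sketch of the unpooled Wald z computation done by hand (the wald2ci() call used further below computes the same interval):

p1 <- 0 / 56
p2 <- 1 / 55
se <- sqrt(p1 * (1 - p1) / 56 + p2 * (1 - p2) / 55)
z  <- qnorm(0.95)                                  # two-sided 90% interval
c(lower = (p1 - p2) - z * se, upper = (p1 - p2) + z * se)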
OK, these methods use the same formulas, and L_CI = -0.04781512
Now, let's get the average marginal effect from the logistic regression, which is its exact equivalent. First, let's try the margins package (reproducing Stata), sketched below:
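A sketch of that step, assuming the reconstructed data and model above (glm may warn about fitted probabilities numerically 0 because one group has no events):

mod <- glm(success ~ group, family = binomial(), data = org_data_glm)
library(margins)
summary(margins(mod), level = 0.90)   # average marginal effect of group with a 90% CI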
Good! The L_CI = -0.04781512
Now with marginaleffects:
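The equivalent call (as in the 0.13.0 run reported here; the numderiv argument used in the replies above only exists in the development version):

library(marginaleffects)
avg_slopes(mod, newdata = org_data_glm, conf_level = 0.9)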
Looks almost identical, but:
While the SE from the classic non-pooled Wald's z procedure is:
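That SE can be computed directly from the unpooled Wald formula used above (roughly 0.0180158):

sqrt((0 * (1 - 0)) / 56 + ((1 / 55) * (1 - 1 / 55)) / 55)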
and this agrees with the margins result:
So the difference comes from the standard error. I guess it stems from differences in how the variance-covariance matrix is computed via the delta method?
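For illustration only, a generic sketch of how a delta-method SE for an average marginal effect can be assembled for a logistic model (this is not marginaleffects' internal code; the names ame and delta_se are made up, and it assumes a numeric predictor named group). The finite-difference step size eps used for the Jacobian is one place where tiny discrepancies between implementations can arise:

# Generic delta-method SE for the AME of a numeric predictor `group`
# in a logistic model; purely illustrative, not package internals.
ame <- function(beta, X) {
  p <- plogis(drop(X %*% beta))
  mean(p * (1 - p) * beta["group"])                # average dP/d(group)
}
delta_se <- function(model, eps = 1e-7) {
  X    <- model.matrix(model)
  beta <- coef(model)
  V    <- vcov(model)
  J <- sapply(seq_along(beta), function(j) {
    b_hi <- b_lo <- beta
    b_hi[j] <- b_hi[j] + eps
    b_lo[j] <- b_lo[j] - eps
    (ame(b_hi, X) - ame(b_lo, X)) / (2 * eps)      # central finite difference
  })
  sqrt(drop(t(J) %*% V %*% J))
}
delta_se(mod)   # compare with the standard errors reported above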
I know this difference is microscopic, but I work in a controlled environment and I'm obliged to know at least the source of any discrepancy. It was caught by an independent validator and reported to me, so now I need to explain the potential causes.
I use version ‘0.13.0’ from CRAN.
Session info: