Closed: huftis closed this issue 1 year ago
@mattwarkentin, is this a distinction between one-sided and two-sided tests? At the very least, whatever flexsurv does here should be documented.
The code (https://github.com/chjackson/flexsurv-dev/blob/master/R/broom-funs.R#L67) currently says:

pvals <- pnorm(abs(stats), lower.tail = FALSE)

Note the abs() function. So it's not a one-sided test, just a buggy two-sided test: the output of pnorm(...) should be multiplied by 2.
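A minimal sketch of the fix described above (the variable name stats matches the linked source; the numeric values are hypothetical, for illustration only):

```r
# Hypothetical z-statistics (estimate / std.error)
stats <- c(-1.25, 0.5, 2.1)

# Buggy version: only the upper-tail probability, so p-values never exceed 0.5
p_buggy <- pnorm(abs(stats), lower.tail = FALSE)

# Fixed version: double the upper-tail probability for a two-sided test
p_fixed <- 2 * pnorm(abs(stats), lower.tail = FALSE)

# The correct two-sided p-values are exactly twice the buggy ones
stopifnot(isTRUE(all.equal(p_fixed, 2 * p_buggy)))
```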
Seems to just be a bug. Good catch, @huftis. I have pushed a PR with a fix and added a test to guard against such issues in the future.
See #160
The P-values from flexsurvreg::tidy() are wrong; they are always half the value of the correct P-values. Example: [code and output omitted] The P-value, 0.21, should be twice as large, 0.42. This can be checked by comparing with the same model fitted using the survival::survreg() function: [output omitted] (broom::tidy(l_sr) also returns the P-value 0.42.) I have checked many examples, and the P-values are always half as large as the P-values from survreg. In theory, it might of course be survreg that is in the wrong, but using simulations from null models, I have also checked that flexsurvreg::tidy() only returns P-values in the range 0–0.5, not 0–1.
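The null-model check described above can be sketched as follows (a self-contained illustration of the statistical argument, not the author's actual simulation code): under the null, correct two-sided P-values are uniform on (0, 1), whereas the un-doubled values can never exceed 0.5.

```r
set.seed(1)
z <- rnorm(1e5)  # z-statistics simulated under the null hypothesis

# Correct two-sided p-values: uniform on (0, 1)
p_correct <- 2 * pnorm(abs(z), lower.tail = FALSE)

# The buggy, un-doubled version: confined to (0, 0.5)
p_halved <- pnorm(abs(z), lower.tail = FALSE)

stopifnot(max(p_correct) > 0.9)   # correct p-values reach close to 1
stopifnot(max(p_halved) <= 0.5)   # buggy p-values never exceed 0.5
```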