Closed 11 months ago
Nope. The same method is used either way, as can be seen in just a few lines of code near the beginning of `joint_tests()`:
```r
if (!inherits(object, "emmGrid")) {
    args = .zap.args(object = object, cov.reduce = cov.reduce, ..., omit = "submodel")
    object = do.call(ref_grid, args)
}
...
```
That is, if `object` is a model, we replace it with its reference grid; the joint tests are then computed from that. So the extra time is just the time required to compute the reference grid. You get different results in this comparison because of the `regrid = "response"` argument in your `emmeans()` call, which puts things on a different scale.
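A minimal sketch of that point, using a hypothetical fitted binomial model `fit` with the factors `Arm` and `Visit_nOrd` named in the question (none of this is the poster's actual code):

```r
library(emmeans)

# joint_tests() on the model first builds ref_grid(fit) internally,
# so these two calls test the same thing, on the link scale:
joint_tests(fit)
joint_tests(ref_grid(fit))

# An emmGrid that was re-gridded to the response scale tests something
# different, because the estimates (and their covariance) now live on
# the probability scale:
emm <- emmeans(fit, ~ Arm | Visit_nOrd, regrid = "response")
joint_tests(emm)
```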
Dear @rvlenth ,
I found a little discrepancy and am searching for an explanation of what causes it:
Quick information: it's about comparing percentages of successes in some experimental research, using logistic regression via a GEE GLM.
The data are a MICE-imputed dataset with 20 imputed instances. Arm is an unordered factor, and Visit_nOrd is an unordered factor.
The model:
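(The original model code is not shown in this extract. A hypothetical sketch of such a fit, assuming `geepack::geeglm` run across the imputations via `mice`'s `with()` method, with invented variable names `Success` and `PatientID`:)

```r
library(mice)     # 'imp' is assumed to be a mids object with 20 imputations
library(geepack)

# Hypothetical fit: a logistic GEE on each imputed dataset, yielding a
# 'mira' object that emmeans can work with. Variable names are invented.
fit <- with(imp, geeglm(Success ~ Arm * Visit_nOrd,
                        id = PatientID, family = binomial,
                        corstr = "exchangeable"))
```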
Now let's do per-visit comparisons between the treatment arms. Nothing weird here, just comparing the two arms visit by visit:
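(The comparison code itself is not shown here; a hedged sketch of what such a call could look like, assuming a fitted model object `fit` and the `regrid = "response"` argument mentioned elsewhere in the thread:)

```r
library(emmeans)

# Hypothetical per-visit comparison of the arms on the probability
# scale; 'fit' stands in for the poster's (unshown) fitted model.
emm <- emmeans(fit, ~ Arm | Visit_nOrd, regrid = "response")
pairs(emm)
```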
And now, just out of curiosity, `joint_tests()`:
On the multiply-imputed model itself:
This takes a longer time, so I guess it iterates through the imputed instances of the dataset, does the joint tests, and then pools the results?
On the emmeans:
And this one is immediate, so I guess it uses the already "processed" LS-means and does the quick calculation.
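(A sketch of the two calls being compared, with hypothetical object names `fit` and `emm`; the timing difference will of course depend on the data and model:)

```r
# On the model itself: slower, because a reference grid must first be
# rebuilt (and, for a mira object, the imputations combined).
system.time(jt_model <- joint_tests(fit))

# On the already-computed emmGrid: near-immediate, since the grid and
# its covariance matrix already exist.
system.time(jt_emm <- joint_tests(emm))
```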
Now I'm wondering if I guessed correctly which one to choose :)
PS: It will be hard to provide the data; it's a long dataset and confidential, so if needed I will anonymize it and provide it as a long-stacked data.frame plus the code to convert it back to a mids object.