Closed BrianLang closed 1 year ago
Hey, completely agree. Maybe it would be best to discuss that in a vignette? As in, "here is an example, that's why it does not work in lme4 and here is how you do it in mmrm."
@kkmann maybe you want to take this on :-) ?
@BrianLang @kkmann So I had a chat with Mike Stackhouse from the PhUSE working group on mixed models on Friday, and they actually put something very similar together already here: https://phuse-org.github.io/CSRMLW_bookdown/mixed-models.html
I would suggest that we add to that, i.e. make PR(s) towards https://github.com/phuse-org/CSRMLW_bookdown/blob/main/ch_mixed-models.Rmd and thereby integrate mmrm as well as glmmTMB. We can also add topics of our interest, e.g. degrees of freedom calculation, speed, etc.
Cool, yes, that would add visibility. Doesn't hurt to have a vignette on the topic under our direct control either though.
Sure but I would not want to duplicate anything there... we can e.g. just reference this from the introduction
Do you think we'd be able to also do some re-structuring of their content in the mixed-models section? It's not immediately obvious to me how we could put our desired content into their current outline and a book-like structure. We could align with @mstackhouse ahead of time to ask about this.
Can we think of a way to retain the information in our repo, but perhaps not host it on our site? Simply to make sure that it remains easy to preserve, rerun, and update in, as @kkmann has suggested, a location under our direct control?
I guess there is an opportunity to do two things effectively.
If we add to PhUSE directly, we get more visibility and solve 2. However, https://phuse-org.github.io/CSRMLW_bookdown/mixed-models.html is not detailed enough to even explain why mmrm is necessary in the first place. So before adding there, we should clarify with the maintainer whether it would be ok for us to maintain this page and expand it substantially. I guess there should be 1) a quick intro, also making it clear that mmrm covers linear MMRM for now, and what the difference between random slope/intercept models and "typical" MMRM is (this might also be a great basis if we think about a classical article at some point); 2) capabilities (current and planned) for different packages; 3) common examples that work in all packages; 4) cases that only work in specific packages.
I am happy to contribute to this, also as an mmrm learning experience for me. I still find the terminology quite confusing from time to time (rename to mixr ;) ?). Maybe @chstock would be interested as well.
@danielinteractive what is currently the most comprehensive list of mmrm capabilities (both implemented and planned)? I would probably keep track of that somewhere in roxygen or even the README.md (-> shows in pkgdown) to avoid maintaining anything extra.
Thanks @kkmann - yeah, that is what I confirmed with Mike Stackhouse from the PhUSE wg in a meeting, he is open to letting us change this section and then also focus on the new mmrm package.
Capabilities of this package as an overview can be added to README (short) and to introduction vignette (long).
It would be cool if you could drive this, thanks in advance!
x) The truth is, I would need that overview myself to get sorted. I am still struggling with inferring it from the code. So, happy to join the long-form doc, but I am just lacking the info atm.
@kkmann ok, let's talk about it in our chat this afternoon, that will be easiest :-)
I am happy to contribute to this sort of vignette too. I would find a worked example with a little bit of textbook-like background helpful, which then contrasts the mmrm package implementation with other implementations (in R, as far as possible) and perhaps a random slope/intercept and/or GEE model. I assume this will be a new vignette, right? I hope I have not missed anything, you may have discussed this further in the meantime...
Thanks @chstock that is great. We aligned that we want to contribute/modify/extend phuse-org.github.io/CSRMLW_bookdown/mixed-models.html
ah, ok. thanks!
@chstock, would be glad to work with you on this to imagine a re-working of that mixed-models page into something that more effectively demonstrates the strengths and weaknesses of the methods. Probably something involving some benchmarking and statistics rather than inundating the reader with figures. What do you think?
@danielinteractive, @chstock and I have met and are interested in moving this forward.
As a first step we'd be interested in iterating it here in the MMRM github as in-house materials and then if it still seems appropriate, providing a PR to the phuse mixed models chapter.
Thanks @chstock and @BrianLang , ok so let's do that first as a vignette here, but then still aim for moving it to the other repo eventually - as we will have benefits from that and overall avoid duplication in the community.
+1 for inhouse first. Happy to chip in once the PR is open :)
One of the first things my group did was use microbenchmark to compare mmrm to nlme::gls, and this was a great motivator for us to use it. This was on Windows; a table of OS vs. package would probably be really useful for potential users:
```r
library(nlme)
library(mmrm)
library(microbenchmark)
library(ggplot2)  # for autoplot()

mb <- microbenchmark(
  fit_gls = gls(
    FEV1 ~ RACE + ARMCD * AVISIT,
    data = fev_data,
    na.action = na.omit,
    correlation = corSymm(form = ~ VISITN | USUBJID),
    weights = varIdent(form = ~ 1 | VISITN)
  ),
  fit_mmrm = mmrm(
    formula = FEV1 ~ RACE + ARMCD * AVISIT + us(AVISIT | USUBJID),
    data = fev_data
  )
)
mb
# Unit: milliseconds
#      expr      min       lq     mean   median       uq       max neval
#   fit_gls 813.5879 860.9534 943.8092 886.5561 938.9165 3160.3840   100
#  fit_mmrm  92.1933 102.4918 118.5049 109.4956 123.2475  387.3922   100
autoplot(mb)
```
Super, will be glad to have your insight to pull this together.
Great to see this - 10x isn't nothing :) especially when it comes to simulations...
I still feel that the key info we need to provide is the capability comparison with other approaches/packages. Even if nlme is 10x slower people might prefer it if it is an established core package. If fits are unstable in some instances, we should have an example for that etc.
Hi @BrianLang any help needed to progress with this?
Hi @PhilBoileau could you start working on this already? Would be good to prioritize this, thanks!
Hi @danielinteractive, already have. I'm just waiting for my simulation study to finish running before compiling the results. It'll take another two weeks; sasr can't be parallelized, so generating the PROC MIXED results takes a while. The code is available in the missing-data-benchmarks branch if you'd like to check it out.
Super, thanks a lot @PhilBoileau !
For marketing/usability, we should provide users with a description of how/why mmrm would be more appropriate than the other available packages, in particular lme4, nlme, and glmmTMB. This documentation could fit well in a separate vignette, "Comparison with other packages", so that potential users can immediately understand where/when we think our development can help them compared to packages with higher levels of adoption.