strengejacke opened 5 years ago
I guess that would make sense, as models' performance indices are very often used to compare models... I wonder about the syntax though: do we need something implicit like `r2(..., named_args)`, with which we could do `r2(model1, model2, model3, named_arg=FALSE)`, or is it better to keep the current behaviour and accept lists of models as the first argument, `r2(c(model1, model2, model3), named_arg=FALSE)`?
This could later be extended to `model_performance()` (or a new `compare_performance()`? That would open up a new type of function, `compare_*`), which would compute and compare all possible indices (i.e., all indices that are compatible with all the models).
Currently, `r2()` is defined as `r2 <- function(model, ...)`. I would say we just make it `r2(...)` (or probably `r2(x, ...)`) and capture the models with `list(...)` or so inside the function.
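A minimal sketch of that idea; `.r2_single()` is a hypothetical stand-in for the existing single-model computation, not the actual internals:

```r
# Stand-in for the existing single-model computation (works for lm here)
.r2_single <- function(model) summary(model)$r.squared

# Sketch: accept any number of models via `...`
r2 <- function(...) {
  models <- list(...)
  # single-model case: behave as before
  if (length(models) == 1) return(.r2_single(models[[1]]))
  # multi-model case: R2 for each model, named after how they were passed
  out <- lapply(models, .r2_single)
  names(out) <- vapply(match.call(expand.dots = FALSE)$`...`, deparse, character(1))
  out
}

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
r2(m1, m2)  # named list with one R2 per model
```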
Agreed.
For the sake of flexibility, we might want to check the provided arguments to see if they are (compatible) models (i.e. statistical models). Maybe we could add a small `is_model()` to insight, and run this check on the provided arguments in `r2(...)` (`models_to_compare <- allargsinellipsis[is_model(allargsinellipsis)]`) to increase stability?
Sounds good! `is_model()` would be a long list of `inherits()` commands, I guess ;-)
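Something like the following; the class list is purely illustrative, and the real check would live in insight:

```r
# Sketch: is_model() as a series of inherits() checks (class list illustrative only)
is_model <- function(x) {
  inherits(x, c("lm", "glm", "merMod", "glmmTMB", "coxph", "brmsfit", "stanreg"))
}

# Filtering step, assuming `dots <- list(...)` was captured inside r2():
dots <- list(lm(mpg ~ wt, data = mtcars), "not a model", TRUE)
models_to_compare <- dots[vapply(dots, is_model, logical(1))]
length(models_to_compare)  # 1 -- only the lm object survives the check
```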
For comparison, should we also check if all models are of the same type? (i.e. no mixing of lm, glm, coxph, etc.)
I think we should check the input with `all_models_equal()`, because it makes no sense to compare R-squared values from different distributional families.
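As a rough stand-in for that check (a simplistic sketch, not insight's actual implementation), something along these lines would catch mixed model types:

```r
# Sketch: a simplistic same-class check before comparing R2 values
all_models_equal <- function(...) {
  classes <- vapply(list(...), function(m) class(m)[1], character(1))
  length(unique(classes)) == 1
}

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- glm(am ~ wt, data = mtcars, family = binomial())
all_models_equal(m1, m2)  # FALSE -- lm and glm should not be compared on R2
```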
Might be something for later than the initial release... It requires some work, especially for more complex R2 measures like Bayes or Nakagawa.
Agree, this can be improved later on
I suggest implementing this in `compare_performance()` as `R2_delta`, for linear models only, the only ones for which this really makes sense (as for GLMs, the total "variance" on the latent scale increases with model complexity... which is weird...). We might then also add `Cohens_f2`:
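For reference, a minimal sketch of both quantities for nested linear models (model names purely illustrative, not a proposed implementation):

```r
# Sketch: delta R2 and Cohen's f2 for two nested linear models
m_small <- lm(mpg ~ wt, data = mtcars)
m_large <- lm(mpg ~ wt + hp, data = mtcars)

r2_small <- summary(m_small)$r.squared
r2_large <- summary(m_large)$r.squared

R2_delta  <- r2_large - r2_small        # incremental variance explained
Cohens_f2 <- R2_delta / (1 - r2_large)  # Cohen's f2 for the added predictor(s)
```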
The R2 diff could nicely fit in `test_performance()`, especially if there is any CI/significance test we could derive from it 😏
I wonder if, in general, we should have a `difference_performance()` utility function, or a `difference=TRUE` argument in `compare_performance()` that just displays the differences instead of the raw indices? (which is basically `sapply(compare_performance, diff)`)
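A rough sketch of that idea, assuming the `compare_performance()` output keeps model names in character columns and the indices in numeric columns:

```r
library(performance)

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)

cp <- compare_performance(m1, m2)
# pairwise differences of the numeric indices only
num_cols <- vapply(cp, is.numeric, logical(1))
sapply(cp[num_cols], diff)
```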
> we should have a `difference_performance()`

No.

> or a `difference=TRUE` arg in `compare_performance()` that just displays the difference instead of the raw indices?

Difference to what? Models as entered, in their order? I'm not sure this is informative for most indices, is it?
Olkin, Alf, and Graf have each developed CIs of various flavors for R2 differences.
I was teaching yesterday and got called out by students: "You don't have delta-R2 available??" This is embarrassing, guys...
Haha. Tell your students that it's honestly not a clear problem to solve when you aren't incorporating the incremental validity into your model (à la some specific SEM models) or via bootstrapping 😜
We should definitely have some difference-related capabilities, and R2 seems like the best place to start.
ΔR², ΔR, and √ΔR are all good statistics to that end. As a start, bootstrapping would be a great method for intervals/p-values. (Honestly, they are often the best estimators compared to the delta method; proper analytic solutions are a pain.)
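To make the bootstrap idea concrete, a minimal sketch of a percentile interval on ΔR² (models and data purely illustrative):

```r
# Sketch: percentile bootstrap interval for delta R2 between nested models
set.seed(123)

delta_r2 <- function(d) {
  summary(lm(mpg ~ wt + hp, data = d))$r.squared -
    summary(lm(mpg ~ wt, data = d))$r.squared
}

boot_deltas <- replicate(
  2000,
  delta_r2(mtcars[sample(nrow(mtcars), replace = TRUE), ])
)
quantile(boot_deltas, c(0.025, 0.975))  # rough 95% interval for delta R2
```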
Is this a valid or useful measure at all? https://twitter.com/brodriguesco/status/1461604815759417344?s=20
That tweet is just referencing the distinction between R2 and adjusted/cross-validated R2
cross-validated R2?
Out-of-sample R2 (either actually computed in a hold-out sample, or via leave-one-out, or via an analytic approximation)
Not sure why this is out-of-sample / cross-validated, since predictors are added, but no additional or different data sets are used?
I mean that his tweet is lamenting that in-sample R2 is positively biased. It is absolutely meaningful to compare models on R2; the solution to his concern is to use an unbiased R2 estimator.
Ah, ok. I was a bit confused, because we were not discussing cross-validated R2 here.
Saw a recent tweet where @bwiernik mentioned R2 differences. I'd suggest implementing it first in a function called `test_r2()`, and then perhaps incorporating it in `test_performance()`.
And then `compare_models()`
`compare_performance()` ;-)
`compare_models()` is an alias for `compare_parameters()` (and hence located in parameters).
Maybe `compare_models()` would be better located in report, and could include both parameters and performance indices.
That would be less confusing
Moved from https://github.com/strengejacke/sjstats/issues/67 over here...
@hauselin due to the re-organization of packages, all "model-performance" related stuff will now be implemented in the performance package.
@DominiqueMakowski What do you think, can we make `r2()` accept multiple model objects, and when the user passes multiple models, produce an "anova"-like output? I.e., the R-squared values for all models, plus an extra column indicating the difference(s)?
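One way such an output could look, as a plain sketch rather than an actual implementation:

```r
# Sketch: "anova-like" R2 table with a difference column
m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
m3 <- lm(mpg ~ wt + hp + am, data = mtcars)

r2s <- sapply(list(m1, m2, m3), function(m) summary(m)$r.squared)
data.frame(
  Model   = c("m1", "m2", "m3"),
  R2      = r2s,
  R2_diff = c(NA, diff(r2s))  # difference to the previous model
)
```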