easystats / performance

:muscle: Models' quality and performance metrics (R2, ICC, LOO, AIC, BF, ...)
https://easystats.github.io/performance/
GNU General Public License v3.0

r2-differences #28

Open strengejacke opened 5 years ago

strengejacke commented 5 years ago

Moved from https://github.com/strengejacke/sjstats/issues/67 over here...

@hauselin due to the re-organization of packages, all "model-performance" related stuff will now be implemented in the performance package.

@DominiqueMakowski What do you think, can we let r2() accept multiple model objects, and when the user passes multiple models, produce an "anova"-like output? I.e. the r-squared values for all models, and an extra column indicating the difference(s)?

DominiqueMakowski commented 5 years ago

I guess that would make sense, as models' performance indices are very often used to compare models... I wonder about the syntax tho: do we need something implicit like r2(..., named_args), with which we could do r2(model1, model2, model3, named_arg=FALSE), or is it better to leave the current behaviour and accept lists of models as the first argument, r2(c(model1, model2, model3), named_arg=FALSE)?

This could later be extended to model_performance() (or a new compare_performance()? that would open a new type of function, compare_*), which would compute and compare all possible indices (i.e., all indices that are compatible with all the models).

strengejacke commented 5 years ago

Currently, r2() is defined as r2 <- function(model, ...) {. I would say we just make it r2(...) (or probably r2(x, ...)) and capture the models with list(...) or so inside the function.
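A minimal sketch of that idea, assuming plain lm models for simplicity (r2_compare() is only an illustrative name, not the final API):

```r
# Sketch: collect all models passed via `...` and return an "anova"-like
# table of R2 values plus their successive differences.
r2_compare <- function(...) {
  models <- list(...)
  r2s <- vapply(models, function(m) summary(m)$r.squared, numeric(1))
  data.frame(
    Model = paste0("Model ", seq_along(models)),
    R2 = r2s,
    R2_difference = c(NA, diff(r2s))
  )
}

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
r2_compare(m1, m2)
```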

DominiqueMakowski commented 5 years ago

Agreed.

For the sake of flexibility, we might want to check the provided arguments, to see if they are (compatible) models (i.e. statistical models). Maybe we could add a small is_model() in insight, and run this check on the provided arguments in r2(...) (something like models_to_compare <- Filter(is_model, list(...))) to increase stability?

strengejacke commented 5 years ago

Sounds good! is_model() would be a long list of inherits()-commands, I guess ;-) For comparison, should we also check if all models are of the same type? (i.e. no mixing of lm, glm, coxph etc.)
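Roughly along these lines; the class list is only a small illustrative subset, not a full implementation:

```r
# Sketch of an is_model() check based on inherits().
is_model <- function(x) {
  inherits(x, c("lm", "glm", "lmerMod", "glmmTMB", "coxph", "brmsfit", "stanreg"))
}

# Filtering the ellipsis so that only model objects are compared:
dots <- list(lm(mpg ~ wt, data = mtcars), "not a model", 42)
models_to_compare <- Filter(is_model, dots)
length(models_to_compare)  # 1
```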

strengejacke commented 5 years ago

I think we should check the input with all_models_equal() because it makes no sense to compare r-squared values from different distributional families.
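Something like the following, where all_models_equal() is the proposed helper, sketched here as a simple class comparison:

```r
# Sketch: refuse to compare R2 values across different model classes/families.
all_models_equal <- function(models) {
  classes <- vapply(models, function(m) class(m)[1], character(1))
  length(unique(classes)) == 1
}

models <- list(
  lm(mpg ~ wt, data = mtcars),
  glm(am ~ wt, data = mtcars, family = binomial)
)
all_models_equal(models)  # FALSE -> comparison should be refused
```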

strengejacke commented 5 years ago

Might be something for later, after the initial release... It requires some work, especially for more complex R2 measures like the Bayesian or Nakagawa R2.

DominiqueMakowski commented 5 years ago

Agree, this can be improved later on

mattansb commented 4 years ago

I suggest implementing this in compare_performance() as R2_delta, for linear models only (the only ones for which this really makes sense, as for GLMs the total "variance" on the latent scale increases with model complexity... which is weird).

We might then also add Cohens_f2:

[image: Cohen's f² formula]
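For nested linear models, the incremental Cohen's f² could be computed along these lines (a sketch only, assuming plain lm models; cohens_f2_delta() is an illustrative name):

```r
# Sketch: Cohen's f2 for the increment of model B over nested model A,
# f2 = (R2_AB - R2_A) / (1 - R2_AB).
cohens_f2_delta <- function(model_a, model_ab) {
  r2_a  <- summary(model_a)$r.squared
  r2_ab <- summary(model_ab)$r.squared
  (r2_ab - r2_a) / (1 - r2_ab)
}

m_a  <- lm(mpg ~ wt, data = mtcars)
m_ab <- lm(mpg ~ wt + hp, data = mtcars)
cohens_f2_delta(m_a, m_ab)
```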

DominiqueMakowski commented 3 years ago

The R2 difference could nicely fit in test_performance(), especially if there is any CI/significance test that we could derive from it 😏

I wonder if, in general, we should have a difference_performance() utility function or a difference=TRUE argument in compare_performance() that just displays the differences instead of the raw indices? (which would basically be sapply(compare_performance, diff))
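The sapply idea from above, sketched for two models (purely illustrative; the columns of compare_performance() output may differ across model types):

```r
library(performance)

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)

cp <- compare_performance(m1, m2)

# Differences of the numeric indices between successively entered models
sapply(Filter(is.numeric, cp), diff)
```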

strengejacke commented 3 years ago

we should have a difference_performance()

No.

or a difference=TRUE arg in compare_performance() that just displays the difference instead of the raw indices?

Difference to what? The models in the order they were entered? I'm not sure this is informative for most indices, is it?

bwiernik commented 3 years ago

Olkin, Alf, and Graf have each developed CIs of various flavors for R2 differences.

mattansb commented 3 years ago

I was teaching yesterday and was called out by students: "You don't have delta-R2 available??" This is embarrassing, guys...

bwiernik commented 3 years ago

Haha. Tell your students that it's honestly not a clear problem to solve when you aren't incorporating the incremental validity into your model (à la some specific SEM models) or via bootstrapping 😜

DominiqueMakowski commented 3 years ago

We should definitely have some difference-related capabilities, and R2 seems like the best place to start

bwiernik commented 3 years ago

ΔR², ΔR, and √ΔR are all good statistics to that end. As a start, bootstrapping would be a great method for intervals/p-values. (Honestly, bootstrap estimators are often the best compared to the delta method; proper analytic ones are a pain.)
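For instance, a percentile bootstrap interval for ΔR² of two nested lm models could be sketched like this (using the boot package; boot_delta_r2() is an illustrative name, not an existing function):

```r
library(boot)

# Sketch: percentile bootstrap CI for the R2 difference of two nested lm formulas.
boot_delta_r2 <- function(data, f_reduced, f_full, R = 999) {
  stat <- function(d, idx) {
    d <- d[idx, , drop = FALSE]
    summary(lm(f_full, data = d))$r.squared -
      summary(lm(f_reduced, data = d))$r.squared
  }
  b <- boot(data, stat, R = R)
  boot.ci(b, type = "perc")
}

boot_delta_r2(mtcars, mpg ~ wt, mpg ~ wt + hp)
```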

strengejacke commented 2 years ago

Is this a valid or useful measure at all? https://twitter.com/brodriguesco/status/1461604815759417344?s=20

bwiernik commented 2 years ago

That tweet is just referencing the distinction between R2 and adjusted/cross-validated R2

strengejacke commented 2 years ago

cross-validated R2?

bwiernik commented 2 years ago

Out-of-sample R2 (either actually computed in a hold-out sample, or via leave-one-out, or via an analytic approximation).
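For illustration, a simple hold-out version of out-of-sample R² (a sketch under the assumption of a plain lm model; not what the tweet or the paper used):

```r
set.seed(1)

# Sketch: out-of-sample R2 = 1 - SS_res / SS_tot, evaluated on data the model
# never saw during fitting.
idx   <- sample(nrow(mtcars), size = 22)
train <- mtcars[idx, ]
test  <- mtcars[-idx, ]

fit  <- lm(mpg ~ wt + hp, data = train)
pred <- predict(fit, newdata = test)

1 - sum((test$mpg - pred)^2) / sum((test$mpg - mean(test$mpg))^2)
```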

bwiernik commented 2 years ago

https://journals.sagepub.com/doi/abs/10.1177/1094428106292901?casa_token=QnJ3HAUoBFEAAAAA:Un99_4wYO9dp8i7uM5Pkdwh3surUpUS9pLV294PciaCe8r2AWTfY14KHiLr5yxwJnve3HGEI92SM

strengejacke commented 2 years ago

I'm not sure why this is out-of-sample / cross-validated, since predictors are added, but no additional or different data sets are used?

bwiernik commented 2 years ago

I mean that his tweet is lamenting that in-sample R2 is positively biased. It is absolutely meaningful to compare models on R2; the solution to his concern is to use an unbiased R2 estimator.

strengejacke commented 2 years ago

Ah, ok. Was a bit confused, because we were not discussing cross validated R2 here.

DominiqueMakowski commented 2 years ago

Saw a recent tweet where @bwiernik mentioned R2 differences. I'd suggest implementing it first in a function called test_r2(), and then perhaps incorporating it into test_performance().

bwiernik commented 2 years ago

And then compare_models()

strengejacke commented 2 years ago

compare_performance() ;-) compare_models() is an alias for compare_parameters() (and hence, located in parameters)

strengejacke commented 2 years ago

Maybe compare_models() is better located in report, and could include both parameters and performance indices.

bwiernik commented 2 years ago

That would be less confusing