Hi
I am happy the package is useful!
Bootstrapping after model averaging is non-trivial and not something I have done myself, and it is beyond my level of statistical expertise! As far as I know, there remain "no generally reliable analytical methods to calculate frequentist confidence intervals (or P-values) on model-averaged predictions", which may limit what you can do here.
If you have several equally good models, I would probably recommend looking at where they differ and choosing a single model to go forward with if you want to bootstrap, or doing the model-averaging approach and accepting that there is no bootstrap. If your best models correlate strongly in terms of their fit, that probably hints that choosing a single "best" model from that set will not change the uncertainty predictions very much, as they all predict the curve in much the same way.
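To make that check concrete, here is a rough sketch using the chlorella_tpc example data that ships with rTPC and two arbitrary candidate models (the models, iteration counts, and start-value ranges are illustrative only, not a recommendation for your spider climbing-speed data):

```r
library(rTPC)
library(nls.multstart)

# example curve from the data bundled with rTPC; model choices are arbitrary
d <- subset(chlorella_tpc, curve_id == 1)
start_ss  <- get_start_vals(d$temp, d$rate, model_name = 'sharpeschoolhigh_1981')
start_gau <- get_start_vals(d$temp, d$rate, model_name = 'gaussian_1987')

fit_ss <- nls_multstart(
  rate ~ sharpeschoolhigh_1981(temp = temp, r_tref, e, eh, th, tref = 15),
  data = d, iter = 500,
  start_lower = start_ss - 10, start_upper = start_ss + 10,
  lower = get_lower_lims(d$temp, d$rate, model_name = 'sharpeschoolhigh_1981'),
  upper = get_upper_lims(d$temp, d$rate, model_name = 'sharpeschoolhigh_1981'),
  supp_errors = 'Y')

fit_gau <- nls_multstart(
  rate ~ gaussian_1987(temp = temp, rmax, topt, a),
  data = d, iter = 500,
  start_lower = start_gau - 10, start_upper = start_gau + 10,
  lower = get_lower_lims(d$temp, d$rate, model_name = 'gaussian_1987'),
  upper = get_upper_lims(d$temp, d$rate, model_name = 'gaussian_1987'),
  supp_errors = 'Y')

# how different are the fitted curves, really?
new_temps <- data.frame(temp = seq(min(d$temp), max(d$temp), length.out = 100))
pred_ss  <- predict(fit_ss,  newdata = new_temps)
pred_gau <- predict(fit_gau, newdata = new_temps)
cor(pred_ss, pred_gau)   # close to 1 = the models tell much the same story
AIC(fit_ss, fit_gau)     # information-criterion comparison of the two fits
```

If the predictions correlate strongly and the information criteria are close, bootstrapping whichever single model you pick is unlikely to change the uncertainty bands much.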
If you want to read more, here are some links that might be helpful:
Sorry I cannot be more helpful.
Yeah, agree with @padpadpadpad here.
There is no standard way to propagate non-parametric error estimates (e.g., from bootstrapping) through to model averaging. You can do some creative simulations, and the effort might be worth it.
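If you do want to attempt something along those lines, one possible "creative simulation" (very much a sketch of an assumed workflow, not an rTPC feature or an established method) is to case-bootstrap the data, refit every candidate model to each resample, recompute the AICc weights, and pool the weighted-average predictions into a percentile interval:

```r
# sketch of a case-bootstrap + AICc-weight pooling workflow (illustrative only)
library(rTPC)
library(nls.multstart)
library(MuMIn)   # for AICc()

d <- subset(chlorella_tpc, curve_id == 1)   # example data shipped with rTPC
new_temps <- seq(min(d$temp), max(d$temp), length.out = 100)

# candidate models (arbitrary choices for illustration)
models <- list(
  sharpeschoolhigh_1981 = rate ~ sharpeschoolhigh_1981(temp = temp, r_tref, e, eh, th, tref = 15),
  gaussian_1987         = rate ~ gaussian_1987(temp = temp, rmax, topt, a)
)

# helper: fit one candidate model to one (possibly resampled) dataset
fit_one <- function(data, formula, model_name) {
  sv <- get_start_vals(data$temp, data$rate, model_name = model_name)
  nls_multstart(formula, data = data, iter = 250,
                start_lower = sv - 10, start_upper = sv + 10,
                lower = get_lower_lims(data$temp, data$rate, model_name = model_name),
                upper = get_upper_lims(data$temp, data$rate, model_name = model_name),
                supp_errors = 'Y')
}

# case bootstrap: resample rows, refit all models, recompute AICc weights on
# the resample (so model-selection uncertainty is propagated along with
# sampling uncertainty), and store the weighted-average predicted curve
n_boot <- 200
boot_preds <- replicate(n_boot, {
  db <- d[sample(nrow(d), replace = TRUE), ]
  refits <- lapply(names(models), function(m) try(fit_one(db, models[[m]], m), silent = TRUE))
  ok <- !sapply(refits, function(f) inherits(f, 'try-error') || is.null(f))
  if (!any(ok)) return(rep(NA_real_, length(new_temps)))
  aicc <- sapply(refits[ok], AICc)
  w <- exp(-0.5 * (aicc - min(aicc))); w <- w / sum(w)
  preds <- sapply(refits[ok], predict, newdata = data.frame(temp = new_temps))
  drop(preds %*% w)
})

# 95% percentile interval around the model-averaged curve
ci <- apply(boot_preds, 1, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
```

Whether an interval built this way has good frequentist coverage is exactly the open question flagged above, so treat it as exploratory.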
The three papers listed above should point you in some useful directions!
Indeed, sorry we cannot be more useful!
Thank you both for your answers!
Congratulations on your hard work!
Hello! First of all congratulations on this excellent code and thank you so much for sharing it with the community. rTPC is making this kind of analysis better for everyone, so it is an incredible contribution to science.
I am currently working with spider thermal performance curves using climbing speed as a proxy. I usually have between 5 and 6 different experimental temperatures, including CTmax and CTmin.
I ran into a problem trying to combine some concepts. I wanted to do model averaging and bootstrapping. Is this useful in your opinion? If so, would it be better to bootstrap all models before averaging, or to build the averaged model first and then bootstrap it? Each model also contains several curves, since we are comparing among species with several replicates within each one.
Thank you