AlineTalhouk / splendid

Supervised Learning Ensemble for Diagnostic Identification
https://alinetalhouk.github.io/splendid/

Calibration #19

Closed AlineTalhouk closed 7 years ago

AlineTalhouk commented 7 years ago

Implementation of http://onlinelibrary.wiley.com/doi/10.1002/sim.7179/epdf and https://diagnprognres.biomedcentral.com/articles/10.1186/s41512-016-0002-x

AlineTalhouk commented 7 years ago

- Model 0: no change, use the exact same model as fit on the training data
- Model 1: calibration in the large (intercept only)
- Model 2: full calibration (slope and intercept)
- Model 3: full calibration with shrinkage
- Model 4: re-fitting
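
A minimal sketch of how models 1 through 4 could be set up, using a binary outcome and `stats::glm` purely for illustration (the splendid setting is multinomial, where `nnet::multinom` would play the same role); all data and variable names here are hypothetical.

```r
set.seed(1)

# Simulated "training" and "test" data (hypothetical, for illustration only)
n <- 200
train <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
train$y <- rbinom(n, 1, plogis(-0.5 + train$x1 + 0.5 * train$x2))
test <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
test$y <- rbinom(n, 1, plogis(-1 + 1.5 * test$x1 + 0.5 * test$x2))

# Original model fit on the training data
fit_train <- glm(y ~ x1 + x2, family = binomial, data = train)

# Linear predictor of the original model evaluated on the test data
test$lp <- predict(fit_train, newdata = test, type = "link")

# Model 0: no change -- keep the original model exactly as trained

# Model 1: calibration in the large -- re-estimate the intercept only,
#          holding the slope at 1 via an offset
m1 <- glm(y ~ 1 + offset(lp), family = binomial, data = test)

# Model 2: full recalibration -- re-estimate intercept and calibration slope
m2 <- glm(y ~ lp, family = binomial, data = test)

# Model 3: full calibration with shrinkage -- one common choice is to multiply
#          the original coefficients by the calibration slope from model 2
shrunk <- coef(fit_train)[-1] * coef(m2)["lp"]

# Model 4: re-fitting -- estimate all coefficients anew on the test data
m4 <- glm(y ~ x1 + x2, family = binomial, data = test)
```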

AlineTalhouk commented 7 years ago

- Test 1: Model 0 vs. Model 4
- Test 2: Model 1 vs. Model 4
- Test 3: Model 2 vs. Model 4
- Test 4: Model 3 vs. Model 4
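
Since models 0 through 3 are nested within the re-fitted model 4, each comparison could be carried out as a likelihood ratio test; a sketch continuing from the hypothetical objects in the previous block (`test`, `m1`, `m2`, `m4`) is below.

```r
# Test 1: model 0 (no re-estimated parameters) vs. model 4.
# Model 0's log-likelihood comes from its fixed predicted probabilities.
p0  <- plogis(test$lp)
ll0 <- sum(dbinom(test$y, size = 1, prob = p0, log = TRUE))
lr1 <- as.numeric(2 * (logLik(m4) - ll0))
pchisq(lr1, df = attr(logLik(m4), "df"), lower.tail = FALSE)

# Test 2: model 1 (intercept only) vs. model 4
lr2 <- as.numeric(2 * (logLik(m4) - logLik(m1)))
pchisq(lr2, df = attr(logLik(m4), "df") - attr(logLik(m1), "df"), lower.tail = FALSE)

# Test 3: model 2 (intercept and slope) vs. model 4, done the same way
lr3 <- as.numeric(2 * (logLik(m4) - logLik(m2)))
pchisq(lr3, df = attr(logLik(m4), "df") - attr(logLik(m2), "df"), lower.tail = FALSE)

# Test 4 (model 3 vs. model 4) would use the shrunken model's log-likelihood,
# computed from its fixed predicted probabilities just like model 0's above.
```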

Dustin21 commented 7 years ago

@AlineTalhouk @dchiu911 I'm trying to put together a likelihood ratio for the tests between two methods, but some of these packages seem limited and don't return their unconditional likelihood; I only see the deviance and %dev (working with glmnet and nnet). If anyone has an idea of how I can obtain these without writing the likelihood functions myself, it would be greatly appreciated!

AlineTalhouk commented 7 years ago

@Dustin21 I am not sure I follow you. You only need it for the multinomial model, and that is given by the function nnet::multinom.
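
For what it's worth, `logLik()` does have a method for `multinom` fits, so a likelihood ratio between two nested multinomial fits would not need a hand-written likelihood function; a small self-contained sketch with simulated data (purely illustrative) follows.

```r
library(nnet)

# Simulated 3-class data (hypothetical, for illustration only)
set.seed(2)
d <- data.frame(x = rnorm(300))
d$y <- factor(sample(c("A", "B", "C"), 300, replace = TRUE))

# Nested multinomial fits; logLik() returns the maximized log-likelihood
# together with its degrees of freedom
fit_full <- multinom(y ~ x, data = d, trace = FALSE)
fit_null <- multinom(y ~ 1, data = d, trace = FALSE)

lr <- as.numeric(2 * (logLik(fit_full) - logLik(fit_null)))
df <- attr(logLik(fit_full), "df") - attr(logLik(fit_null), "df")
pchisq(lr, df = df, lower.tail = FALSE)
```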

AlineTalhouk commented 7 years ago

Remember, you get the probabilities out of each method, but you apply multinom or glmnet for the calibration only.
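
One way to read this, as a rough sketch: the predicted class probabilities from any base classifier become the only covariates in the calibration fit, which is where multinom (or glmnet) comes in. The matrix `probs` and the outcome `y_test` below are hypothetical stand-ins for the classifier output and the true labels.

```r
library(nnet)

# Hypothetical stand-in for a base classifier's predicted class probabilities
set.seed(3)
n <- 300
probs <- matrix(rgamma(n * 3, shape = 1), ncol = 3)
probs <- probs / rowSums(probs)
y_test <- factor(sample(c("A", "B", "C"), n, replace = TRUE))

# Recalibration: the log-odds of the predicted probabilities (relative to a
# reference class) are the only covariates; the base classifier is not refit
logit_scores <- log(probs[, -1] / probs[, 1])
cal_fit <- multinom(y_test ~ logit_scores, trace = FALSE)
calibrated <- predict(cal_fit, type = "probs")
```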

dchiu911 commented 7 years ago

Duplicate of #15