Closed: banderlog closed this issue 3 years ago
Thanks for your comments!
I agree that there are many other benchmarks for MIMIC-III based on different feature engineering, algorithms, and metrics. Unlike the paper you mentioned, our work looks at medical codes (diagnoses/labs/procedures), which lets the model also apply to outpatients (as in the Alzheimer's Disease prediction task). Therefore, we didn't include scores specific to ICU patients, like SAPS, SAPS II, etc. Also, in the paper you mentioned, the AUPRC of XGBoost is 0.665 (despite differences in feature choice).
Hope it addresses your question!
> Also, in the paper you mentioned, the AUPRC of XGBoost is 0.665 (despite differences in feature choice).

Yeah, sounds like another reason to include it in the final table :)
Hi, why didn't you include XGBoost results in your model scores table? It's the de facto ML standard for tabular data.
As far as I understand, you tried to predict mortality using data from the first 24h after admission.
Did you take all MIMIC-III patients or some cohort, e.g. patients with sepsis?
XGBoost results for all-patient mortality prediction using data from the first 24h after admission: Johnson, A. E. W. & Mark, R. G. Real-time mortality prediction in the Intensive Care Unit. AMIA Annu. Symp. Proc. 2017, 994–1003 (2018).
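For concreteness, here is a minimal sketch (not the paper's pipeline) of the kind of XGBoost baseline being discussed, evaluated with AUPRC as in Johnson & Mark. `X` and `y` are hypothetical placeholders for a tabular 24h-from-admission feature matrix and binary in-hospital mortality labels:

```python
# Hypothetical XGBoost mortality baseline evaluated with AUPRC.
# X and y below are random placeholders, not real MIMIC-III data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # placeholder: 24h tabular features per admission
y = rng.integers(0, 2, size=1000)  # placeholder: binary mortality labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.05,
    eval_metric="aucpr",  # XGBoost's built-in area-under-PR-curve metric
)
model.fit(X_tr, y_tr)

# AUPRC ("average precision") on held-out admissions
probs = model.predict_proba(X_te)[:, 1]
print("AUPRC:", average_precision_score(y_te, probs))
```

With real features the reported 0.665 AUPRC would be the number to compare against, keeping in mind the differences in feature choice noted above.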