Per the codebook at https://nliulab.github.io/AutoScore/04-autoscore.html for binary outcomes, my understanding was that if the seed is set when splitting the datasets, the results for a given model should be replicable. However, it is not producing the same results.
Are there other settings that need to be set in code to ensure that rerunning the same code with the same settings produces the same AutoScore model? My AUC on the validation and test sets differs on every run, even when I run exactly the same code on the same data.
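For reference, this is a minimal sketch of the seeding I am doing, assuming the standard AutoScore workflow from the codebook (`my_data` is a placeholder for my dataset; the argument names follow my reading of the package documentation). My suspicion is that seeding only the split is not enough, because `AutoScore_rank()` fits a random forest, which also consumes random numbers:

```r
library(AutoScore)

# Seed the RNG immediately before splitting, so the
# train/validation/test partition is reproducible.
set.seed(4)
out_split <- split_data(data = my_data, ratio = c(0.7, 0.1, 0.2))
train_set <- out_split$train_set
validation_set <- out_split$validation_set
test_set <- out_split$test_set

# The variable ranking uses a random forest, which draws from the RNG
# as well. Does the seed need to be reset right before this call too?
set.seed(4)
ranking <- AutoScore_rank(train_set = train_set, ntree = 100)
```

Is resetting the seed before every stochastic step like this the intended way to get identical models across runs, or is there a package-level option I am missing?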