Klepac-Ceraj-Lab / Resonance


Revisit hyperparameter optimization in light of overfitting #108

Closed Hugemiler closed 1 year ago

Hugemiler commented 1 year ago

Safekeeping this old paragraph for the manuscript:

"Heavy constraints were placed in the maximum depth of each tree, the number of factors considered at each split and the amount of samples in each leaf, in order to privilege generalizability and perplexity of the trained models, especially considering the very limited amount of samples. For each stochastic split, ~6k different model hyperparameter combinations were benchmarked from a predetermined optimization grid..."