nf-core / deepmodeloptim

Stochastic Testing and Input Manipulation for Unbiased Learning Systems
https://nf-co.re/deepmodeloptim
MIT License

[feat] add an early stopper for ray tune #142

Closed — suzannejin closed this issue 1 month ago

alessiovignoli commented 4 months ago

ASHAScheduler already early-stops underperforming trials, so badly performing models will be cut short. However, the overall run will still tune up to max_t even when no trial is improving. To stop the whole run early, the RunConfig stop criterion (which can be a function) could be explored.
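
A minimal sketch of what this could look like, not the project's implementation: Ray Tune's RunConfig stop argument also accepts a Stopper subclass, whose stop_all method can end the whole experiment while ASHAScheduler keeps handling per-trial early stopping. The metric name "val_loss", the patience value, and the trainable are placeholders; import paths assume a Ray 2.x API.

```python
from ray import train, tune
from ray.train import RunConfig
from ray.tune.schedulers import ASHAScheduler
from ray.tune.stopper import Stopper


class NoImprovementStopper(Stopper):
    """Stop the whole run once the best reported metric has not improved
    for `patience` consecutive results across all trials."""

    def __init__(self, metric: str = "val_loss", patience: int = 20):
        self.metric = metric
        self.patience = patience
        self.best = float("inf")
        self.stale = 0

    def __call__(self, trial_id: str, result: dict) -> bool:
        # Per-trial decision: leave individual trials to ASHAScheduler.
        value = result.get(self.metric)
        if value is not None:
            if value < self.best:
                self.best = value
                self.stale = 0
            else:
                self.stale += 1
        return False

    def stop_all(self) -> bool:
        # Experiment-level decision: end the run when nothing is improving.
        return self.stale >= self.patience


def trainable(config):
    # Placeholder trainable; a real one would train a model and report val_loss.
    for step in range(100):
        train.report({"val_loss": config["lr"] / (step + 1)})


tuner = tune.Tuner(
    trainable,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(
        metric="val_loss",
        mode="min",
        num_samples=20,
        scheduler=ASHAScheduler(max_t=100, grace_period=5),
    ),
    run_config=RunConfig(stop=NoImprovementStopper()),
)
results = tuner.fit()
```

The split of responsibilities would be: ASHAScheduler cuts underperforming trials, while the Stopper's stop_all halts the entire run once no trial has improved the best metric for a while, so the experiment does not grind on to max_t for nothing.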

mathysgrapotte commented 1 month ago

cf. stimulus-py