Inspecting the output of a task or the state of the config inst at a certain point of the analysis workflow is often rather tedious. To simplify this, this PR adds three tasks:
hbw.ColumnsBaseTask provides a base implementation for requiring columns from reduction, production, and ML evaluation combined. It is only a base task, so it cannot be run itself, but other tasks should inherit from it (a minimal subclass sketch is shown after this list).
hbw.CheckColumns provides a simple implementation that reads these outputs and checks which columns are present in each file.
hbw.CheckConfig only loads the config inst after all the CSP+ML inits have been run and prints some information. It requires nothing and produces nothing.
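As a rough illustration of how another task might build on the base class, here is a minimal sketch; the import path, the structure of `self.input()`, and the file format are assumptions for illustration, not the actual implementation in this PR:

```python
# Rough sketch of a task inheriting from hbw.ColumnsBaseTask; the import path,
# the shape of self.input(), and the parquet file format are assumptions.
import awkward as ak

from hbw.tasks.base import ColumnsBaseTask  # assumed import location


class PrintColumns(ColumnsBaseTask):
    """Hypothetical task that prints the column names found in each required file."""

    def run(self):
        # the base task is assumed to require the reduction, production, and
        # ML evaluation outputs, so they show up in self.input()
        for key, target in self.input().items():
            events = ak.from_parquet(target.path)
            print(f"{key}: {sorted(events.fields)}")
```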
All three tasks expose the typical parameters (selector, producers, ml_models, dataset, ...) and should also resolve defaults and groups as usual.
The hbw.Check* tasks also provide the --debugger parameter, which starts an IPython session at the end of the task.
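For orientation, running one of the check tasks could look roughly as follows; --debugger is the new parameter from this PR, while the other parameter names and values are assumed placeholders and may differ in the actual tasks:

```shell
# hypothetical invocation; dataset/selector/producer/model values are illustrative only
law run hbw.CheckColumns \
    --version v1 \
    --dataset tt_sl_powheg \
    --selector default \
    --producers features \
    --ml-models default \
    --debugger
```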