Without having looked through all the PRs, it would be good to have unit tests
covering the following criteria:
Test that results from functions are as expected:
Do not just test whether the class fits, but also that the content makes sense, for example:
- task$data() returns the expected variable types / column names / row order
- predictions match the data, i.e. no row shuffling occurs
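A sketch of what such content checks could look like with mlr3 and testthat, using the built-in iris task and rpart learner as stand-ins for the tasks and learners from the PRs:

```r
library(mlr3)
library(testthat)

task <- tsk("iris")   # stand-in for the task under test
dat  <- task$data()

# Not just the class: also check column names and variable types
expect_s3_class(dat, "data.table")
expect_setequal(colnames(dat),
                c("Species", "Petal.Length", "Petal.Width",
                  "Sepal.Length", "Sepal.Width"))
expect_true(is.factor(dat$Species))

# Predictions must line up with the task rows: no shuffling
learner <- lrn("classif.rpart")   # stand-in learner
learner$train(task)
pred <- learner$predict(task)
expect_equal(pred$row_ids, task$row_ids)
```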
Test all allowed and not-allowed edge cases:
- What happens for one-row / two-row / one-column / multi-column tasks, with or
  without exogenous variables?
- Do I get a meaningful error in cases where a learner cannot deal with some feature type?
- Do the resampling methods produce meaningful results or errors for one-row tasks etc.?
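One possible shape for these edge-case tests, again sketched with mlr3 and testthat; the concrete learners, datasets, and expected behaviors (error vs. graceful degradation) are assumptions to be adapted per PR:

```r
library(mlr3)
library(testthat)

# Edge-case tasks built from iris; any dataset with mixed types works
one_row    <- as_task_classif(iris[1, ], target = "Species")
two_rows   <- as_task_classif(iris[1:2, ], target = "Species")
one_column <- as_task_classif(iris[, c("Sepal.Length", "Species")],
                              target = "Species")

# Unsupported feature types: training should fail with an informative
# message rather than a cryptic downstream error. The learner here is a
# placeholder for whichever learner the PR adds.
task_factors <- tsk("german_credit")   # contains factor features
learner      <- lrn("classif.rpart")   # placeholder learner
if (!all(task_factors$feature_types$type %in% learner$feature_types)) {
  expect_error(learner$train(task_factors))
}

# Resampling on a one-row task: the test should pin down whether this
# errors meaningfully or degenerates gracefully (assumed here to error)
expect_error(rsmp("cv", folds = 3)$instantiate(one_row))
```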