Testing slices are asked for in the `train_model` function, but they are never actually used there (`RandomForestClassifier` does not take them).
Maybe this is a remnant from an older version of the code, from when confusion matrices were actually produced (e.g. the functions that are no longer used, here and here). Let's check whether they are actually used anywhere; if not, let's remove them.
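For reference, if the testing slices were still used, the old confusion-matrix step would have looked roughly like this (a minimal sketch with scikit-learn; the feature arrays here are hypothetical stand-ins, not the project's actual `train_model` inputs):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical stand-ins for labelled training and testing slices:
# each row is one voxel's feature vector, y holds its class label.
rng = np.random.default_rng(0)
X_train = rng.random((200, 5))
y_train = rng.integers(0, 2, 200)
X_test = rng.random((50, 5))
y_test = rng.integers(0, 2, 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # the testing slices play no role in fitting
cm = confusion_matrix(y_test, clf.predict(X_test))
print(cm)  # 2x2 matrix: rows = true labels, columns = predictions
```

This illustrates the point of the issue: the fit itself only ever touches the training slices, so without an evaluation step like this the testing slices are dead weight.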
Still, that does not mean we shouldn't ask users for extra slices to test/cross-validate the predictions. Personally, I would rather suggest drawing a lot of slices for one scan and testing on that scan to see how many training slices are needed (incl. cross-validation on the labelled slices not used for training, as we did in the methods paper). That way, users can focus on doing one very thorough test on a single stack, and then apply those settings to the other stacks (without drawing slices that are not really of use).
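The "how many training slices are enough" test above could be sketched like this (hedged: the slice/label arrays are synthetic placeholders, and the real workflow would use the project's feature extraction, not random data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical labelled slices drawn from one scan: slices[i] is the
# feature matrix for slice i, labels[i] the matching label vector.
rng = np.random.default_rng(1)
slices = [rng.random((100, 5)) for _ in range(10)]
labels = [rng.integers(0, 2, 100) for _ in range(10)]

# Grow the training set one slice at a time and cross-validate each
# time, to see where accuracy stops improving.
for n in range(2, len(slices) + 1):
    X = np.vstack(slices[:n])
    y = np.concatenate(labels[:n])
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{n} slices: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```

On real data the accuracy curve would typically plateau at some slice count, which is the number of training slices worth drawing on the remaining stacks.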
I have stopped using them for the segmentations done in our group. They should be removed in a future major update of the code, maybe with the integration of the ubiquitousTraits branch.