Closed ClimbsRocks closed 8 years ago
finished. I chose to keep the behavior of not training an algo in the ensemble round if it failed to do well in the earlier round. The reasoning is pretty simple: if it didn't do well on this exact dataset earlier, it probably won't do well now. Plus, it saves quite a bit of time.
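The skip logic above could look something like this minimal sketch. None of these names (`select_algos_for_ensemble`, `earlier_scores`, `score_cutoff`) are the actual auto_ml API; they're just illustrative:

```python
# Hypothetical sketch: before the ensemble round, drop any algo whose
# earlier-round score fell below a cutoff, so we never retrain a model
# that already did poorly on this exact dataset.

def select_algos_for_ensemble(earlier_scores, score_cutoff):
    """Keep only the algos that scored at or above the cutoff earlier."""
    return [name for name, score in earlier_scores.items()
            if score >= score_cutoff]

earlier_scores = {
    "GradientBoosting": 0.91,
    "LinearSVC": 0.62,
    "RandomForest": 0.88,
}

# With a cutoff of 0.8, only the two strong performers get retrained
# for the ensemble; LinearSVC is skipped entirely.
print(select_algos_for_ensemble(earlier_scores, 0.8))
```

This is also where the time savings come from: every skipped algo is a full training run we never pay for in the ensemble round.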
Right now, though, it seems that when an algo is skipped it doesn't start training another algo in its place; it just kind of permanently blocks the cores it was allocated.
It also seems to skip training that algo for the ensembler entirely if we didn't have any instances of it training for the stage 0 predictions.