The multi-view axis-aligned splitter reused the inheritance scheme of the oblique splitters to implement sampling over multiple feature sets. However, this comes with two downsides: it does not track constant features, and it unnecessarily samples a projection vector within every split node.
Tracking constant features can improve runtime because, during DFS/BFS tree building, subtrees are not explored once all features are constant. Moreover, not sampling a projection vector may mitigate the segfault issue caused by RAM spikes from repeatedly initializing and pushing the indices/weights associated with each projection.
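To illustrate why inheriting constant-feature information pays off, here is a minimal Python sketch of a split search that skips columns already marked constant by an ancestor node. The function name, the toy variance-reduction score, and the midpoint threshold are all hypothetical simplifications; the actual splitter lives in Cython.

```python
import numpy as np

def best_split_skipping_constants(X, y, constant_mask):
    """Toy split search that skips features already known to be constant.

    ``constant_mask`` is carried down from the parent node, so columns that
    were constant higher up in the tree are never re-examined here.
    (Hypothetical helper for illustration only.)
    """
    best = (None, None)  # (feature, threshold)
    best_score = -np.inf
    for j in range(X.shape[1]):
        if constant_mask[j]:
            continue  # inherited constant: skip the sort/scan entirely
        col = X[:, j]
        if col.max() == col.min():
            constant_mask[j] = True  # newly discovered constant at this node
            continue
        thr = (col.max() + col.min()) / 2.0
        left, right = y[col <= thr], y[col > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        # toy score: weighted reduction in variance
        score = y.var() - (left.var() * len(left) + right.var() * len(right)) / len(y)
        if score > best_score:
            best_score, best = score, (j, thr)
    return best, constant_mask
```

The updated `constant_mask` would then be handed to both children, so a column discovered constant here is never scanned again anywhere in the subtree.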
Changes proposed in this pull request:
- Adds an extra loop to scikit-learn's `node_split` implementation and tracks a vector of `n_total_constant_features` per split node, which is passed to the tree and then passed back down to the node's children. This is a bit hacky.
- Adds a minor change in the submodule that removes the `n_constant_features` argument from `splitter.node_split(...)` and assigns it into the split record instead. We probably want unit tests to ensure this works properly, similar to scikit-learn's existing constant-feature tree tests.
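The second change above can be sketched as follows: instead of threading a separate in/out argument through `node_split`, the constant-feature count rides along in the split record. This is a hedged Python analogue (dataclass stand-in for the Cython struct; `node_split` body is a placeholder), not the actual submodule code.

```python
from dataclasses import dataclass

@dataclass
class SplitRecord:
    feature: int
    threshold: float
    # Carried in the record itself, rather than as a separate
    # n_constant_features argument to node_split(...).
    n_constant_features: int

def node_split(parent_record: SplitRecord) -> SplitRecord:
    """Hypothetical sketch: read the parent's constant count from the
    record and write the (possibly updated) count into the child record,
    so the tree builder no longer threads an extra argument through."""
    n_const = parent_record.n_constant_features
    # ... split search would go here, possibly discovering new constants ...
    return SplitRecord(feature=0, threshold=0.5, n_constant_features=n_const)
```

With this shape, the builder only ever passes records between nodes, which is what lets the count propagate to children without extra plumbing.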
Before submitting
[ ] I've read and followed all steps in the Making a pull request section of the CONTRIBUTING docs.
[ ] I've updated or added any relevant docstrings following the syntax described in the Writing docstrings section of the CONTRIBUTING docs.
[ ] If this PR fixes a bug, I've added a test that will fail without my fix.
[ ] If this PR adds a new feature, I've added tests that sufficiently cover my new functionality.
After submitting
[ ] All GitHub Actions jobs for my pull request have passed.
Related to #226 and #235