vks closed this issue 2 years ago.
Hi! Yes, you are right! Those two parameters control the size of the model.

The size of a gradient boosted decision tree model is determined by the total number of nodes across all of the trees. You can control how many nodes are in the model by limiting the number of nodes in each tree and the total number of trees trained. The most straightforward way to control the model size is with the `max_depth`, `max_rounds`, and `max_leaf_nodes` parameters.
These parameters control the size of the model:

- `max_rounds`: controls the total number of trees in the model.
- `max_depth`: limits the number of nodes in each individual tree, because trees are only allowed to grow to that depth.
- `max_leaf_nodes`: decreasing this also limits the total number of nodes in your trees, because once this many leaf nodes have been added, the tree stops growing.
- `min_examples_per_node`: increasing this limits the number of nodes in the tree as well, because during training a node is not added to the tree if it would have fewer than `min_examples_per_node` examples.
- `min_sum_hessians_per_node`: increasing this prevents overfitting as well as limiting the number of nodes added to an individual tree.
- `min_gain_to_split`: increasing this also prevents overfitting and limits the number of nodes added to an individual tree.

If model size is a really big concern, you can also train linear models, which will be smaller.
Currently, the `.tangram` file contains both the report that we display in the tangram app and the model that you will use to make predictions. We plan on adding the ability to produce an optimized model used just for predictions that strips all of the reporting information. Here is the issue I just created to track that: https://github.com/tangramdotdev/tangram/issues/49.
I'm going to keep this issue open until we add documentation to our website explaining this!
Also, does your dataset contain text columns?
Thanks for the very detailed answer, this helps and clarifies the effect of the hyperparameters! It would be great to have this information added to the docs.
The dataset I'm looking at contains 30 float columns and a binary enum column as the target.
Great! I'll make sure to add it to the docs :)
The reason I asked about text columns is that, by default, we create a large number of features for them, which could greatly increase model size. We are adding support to customize that now.
It seems like a tree `Node` has a size of 72 bytes (as determined by the patch below). So the binary classifier should have a size of approximately `<72 bytes> * <average number of nodes per tree> * <number of trees>`, which should be less than `<72 bytes> * max_leaf_nodes * max_rounds`, right? (This neglects branch nodes, but as far as I can see their number is not directly limited?)
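(The patch itself is not shown here. For reference, the in-memory size of a type can be checked with `std::mem::size_of`; the `Node` layout below is purely illustrative and does not match Tangram's actual definition, so the printed size will differ from the 72 bytes measured above.)

```rust
// Illustrative layout only; Tangram's real Node type is different.
#[allow(dead_code)]
enum Node {
    Branch {
        left: usize,
        right: usize,
        feature_index: usize,
        split_value: f32,
        examples_fraction: f32,
    },
    Leaf {
        value: f32,
        examples_fraction: f32,
    },
}

fn main() {
    // Prints the in-memory size of the enum, including the
    // discriminant and any alignment padding.
    println!("{}", std::mem::size_of::<Node>());
}
```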
`<number of branch nodes> = <number of leaf nodes> - 1`

So, the total number of nodes in any given tree is `2 * <number of leaf nodes> - 1 <= 2 * max_leaf_nodes - 1`, which means the total number of nodes (leaf nodes and branch nodes) in all of the trees is less than `2 * max_leaf_nodes * max_rounds`.
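Putting the two observations together (72 bytes per node, at most `2 * max_leaf_nodes - 1` nodes per tree, at most `max_rounds` trees), the size bound can be sketched as:

```rust
// Upper bound on in-memory model size from the numbers in this thread.
const NODE_SIZE_BYTES: u64 = 72; // measured size of a tree Node, per above

fn max_model_size_bytes(max_leaf_nodes: u64, max_rounds: u64) -> u64 {
    // Each tree has at most 2 * max_leaf_nodes - 1 nodes
    // (leaf nodes plus branch nodes), and there are at most
    // max_rounds trees in the model.
    let max_nodes_per_tree = 2 * max_leaf_nodes - 1;
    NODE_SIZE_BYTES * max_nodes_per_tree * max_rounds
}

fn main() {
    // Illustrative values, not Tangram's defaults:
    // 509 nodes/tree * 100 trees * 72 bytes = 3_664_800 bytes (~3.5 MiB).
    println!("{}", max_model_size_bytes(255, 100));
}
```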
The serialized size of the `Branch` and `Leaf` nodes is different from the in-memory size. I can look into this and get back to you on the exact sizes of each of those nodes.
So `max_leaf_nodes` also limits the number of branch nodes. Thanks for clarifying!
> The serialized size of the `Branch` and `Leaf` nodes is different from the in-memory size. I can look into this and get back to you on the exact sizes of each of those nodes.
Tangram seems to be using a binary serialization format, so I would expect the serialized size to be similar to the in-memory size (maybe minus the padding, and plus the data for the report). I was just trying to estimate what model sizes I should expect, so the exact sizes are not necessary, thank you!
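One concrete reason the two sizes can differ is alignment padding: `std::mem::size_of` reports the padded in-memory layout, while a compact binary encoding can write only the field data. A minimal demonstration (the struct is made up for the example):

```rust
// Padding can make the in-memory size larger than the raw field data,
// so a compact serialization can be smaller than size_of suggests.
#[allow(dead_code)]
struct Example {
    flag: u8,   // 1 byte of data
    value: u64, // 8 bytes of data
}

fn main() {
    // 9 bytes of field data, but alignment to 8 bytes pads the
    // struct out to 16 bytes in memory.
    assert_eq!(std::mem::size_of::<Example>(), 16);
}
```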
Which hyperparameters are the most important ones for minimizing the size of a gradient boosted tree model? From my experiments so far, it seems like `min_examples_per_node` and `max_rounds` have the biggest effect.