modelfoxdotdev / modelfox

ModelFox makes it easy to train, deploy, and monitor machine learning models.

Optimizing the size of the model #48

Closed vks closed 2 years ago

vks commented 2 years ago

Which hyperparameters are the most important ones for minimizing the size of a Gradient Boosted Tree model? From my experiments so far, it seems like min_examples_per_node and max_rounds have the biggest effect.

isabella commented 2 years ago

Hi! Yes, you are right! Those two parameters control the size of the model.

The size of a gradient boosted decision tree model is determined by the total number of nodes across all of its trees. You can control the model size by limiting the number of nodes in each tree and the total number of trees trained.

The most straightforward way to control the model size is with the max_depth, max_rounds, and max_leaf_nodes parameters:

- max_depth limits how deep each individual tree can grow.
- max_leaf_nodes limits the number of leaf nodes in each tree.
- max_rounds limits the total number of trees trained.

min_examples_per_node also reduces the node count, but indirectly: it prevents splits that would produce nodes with too few training examples.
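If it helps to see the arithmetic, here is a minimal sketch of the worst-case node count these limits imply. This is plain Rust, not the ModelFox API, and it assumes max_depth counts edges from the root (conventions vary):

```rust
/// Worst-case number of nodes in a single binary tree.
fn max_nodes_per_tree(max_depth: u32, max_leaf_nodes: u64) -> u64 {
    // A full binary tree of depth d has 2^(d + 1) - 1 nodes.
    let depth_bound = (1u64 << (max_depth + 1)) - 1;
    // A binary tree with L leaves has at most 2 * L - 1 nodes in total.
    let leaf_bound = 2 * max_leaf_nodes - 1;
    depth_bound.min(leaf_bound)
}

fn main() {
    // Hypothetical settings, not ModelFox defaults.
    let max_rounds = 100;
    let per_tree = max_nodes_per_tree(8, 255);
    println!(
        "at most {} nodes per tree, {} in the whole model",
        per_tree,
        per_tree * max_rounds
    );
}
```

Whichever of the two per-tree bounds is tighter wins; in practice, trees are usually much smaller than either.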

If model size is a really big concern, you can also train linear models, which will be smaller.

Currently, the .tangram file contains both the report that we display in the tangram app and the model that you use to make predictions. We plan to add the ability to produce an optimized model, used just for predictions, that strips all of the reporting information. Here is the issue I just created to track that: https://github.com/tangramdotdev/tangram/issues/49.

I'm going to keep this issue open until we add documentation to our website explaining this!

Also, does your dataset contain text columns?

vks commented 2 years ago

Thanks for the very detailed answer; it helps and clarifies the effect of the hyperparameters! It would be great to have this information added to the docs.

The dataset I'm looking at contains 30 float columns and a binary enum column as the target.

isabella commented 2 years ago

Great! I'll make sure to add it to the docs :)

The reason I asked about text columns is that, by default, we create a large number of features for them, which could greatly increase model size. We are adding support to customize that now.

vks commented 2 years ago

It seems like a tree Node has an in-memory size of 72 bytes (as determined by the patch below: the assertion against 0 fails, and the test output reports the actual size).

So the binary classifier should have a size of approximately <72 bytes> * <average number of nodes per tree> * <number of trees>, which should be less than <72 bytes> * max_leaf_nodes * max_rounds, right? (This neglects branch nodes, but as far as I can see their number is not directly limited?)

Patch:

```diff
diff --git a/crates/tree/lib.rs b/crates/tree/lib.rs
index fe030f8..1691bbe 100644
--- a/crates/tree/lib.rs
+++ b/crates/tree/lib.rs
@@ -124,6 +124,11 @@ pub struct Tree {
 	pub nodes: Vec<Node>,
 }
 
+#[test]
+fn node_size() {
+	assert_eq!(std::mem::size_of::<Node>(), 0);
+}
+
 impl Tree {
 	/// Make a prediction.
 	pub fn predict(&self, example: &[tangram_table::TableValue]) -> f32 {
```

isabella commented 2 years ago

<number of branch nodes> = <number of leaf nodes> - 1

So, the total number of nodes in any given tree is 2 * <number of leaf nodes> - 1 <= 2 * max_leaf_nodes - 1, which means the total number of nodes (leaf nodes and branch nodes) in all of the trees is less than 2 * max_leaf_nodes * max_rounds.
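To put illustrative numbers on that bound (hypothetical settings, not ModelFox defaults): with max_leaf_nodes = 255 and max_rounds = 100, the model contains fewer than 2 * 255 * 100 = 51,000 nodes, which at 72 bytes per node is roughly 3.7 MB in memory.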

The serialized size of the Branch and Leaf nodes is different from the in-memory size. I can look into this and get back to you on the exact sizes of each of those nodes.

vks commented 2 years ago

So max_leaf_nodes also limits the branch nodes directly. Thanks for clarifying!

> The serialized size of the Branch and Leaf nodes is different from the in-memory size. I can look into this and get back to you on the exact sizes of each of those nodes.

Tangram seems to be using a binary serialization format, so I would expect the serialized size to be similar to the in-memory size (maybe minus the padding, plus the data for the report). I was just trying to estimate what model sizes I should expect, so the exact sizes are not necessary. Thank you!
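For what it's worth, here is a minimal standalone sketch (plain Rust, unrelated to the ModelFox codebase; the struct is hypothetical) of why in-memory size can exceed a compact serialized size, due to alignment padding:

```rust
use std::mem::size_of;

// A hypothetical struct: 1 byte + 8 bytes of data, but on typical
// 64-bit targets u64 is 8-byte aligned, so the compiler inserts
// 7 bytes of padding and the in-memory size is 16 bytes.
struct Padded {
    a: u8,
    b: u64,
}

fn main() {
    // Holds on typical 64-bit targets where align_of::<u64>() == 8.
    assert_eq!(size_of::<Padded>(), 16);
    // A compact binary serialization could write the same data in
    // 1 + 8 = 9 bytes, so serialized size can be smaller than
    // in-memory size even for a "binary" format.
    println!("in-memory: {} bytes", size_of::<Padded>());
}
```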