tensorflow / decision-forests

A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
Apache License 2.0

Crashed on Colab due to high memory usage #12

Closed · frankwwu closed this issue 3 years ago

frankwwu commented 3 years ago

TensorFlow Decision Forests appears to be memory hungry. I compared it with PyCaret on Colab: TensorFlow Decision Forests crashed with the message “Your session crashed after using all available RAM.”, while PyCaret completed the work. Is there any feasible way to solve this problem?

achoum commented 3 years ago

Hi frankwwu,

Thanks for the post. Several things might be at play. Here are some possible explanations and solutions:

  1. TF-DF currently works by loading the entire dataset into memory. Memory usage is ~4 bytes per numerical or categorical value. For example, a dataset with 1M examples and 100 features will use ~400MB. Colab has a default RAM limit of 12GB (for everything, not just the dataset).

It might be worth increasing this limit for large datasets.
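
As a rough sanity check, you can estimate the in-memory footprint before training. A minimal sketch of that arithmetic (estimate_dataset_ram_mb is a hypothetical helper, assuming the ~4 bytes/value figure above):

# Back-of-envelope estimate of TF-DF's in-memory dataset size,
# assuming ~4 bytes per numerical/categorical value.
def estimate_dataset_ram_mb(num_examples, num_features, bytes_per_value=4):
    return num_examples * num_features * bytes_per_value / 1e6

print(estimate_dataset_ram_mb(1_000_000, 100))  # -> 400.0 (MB)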

  2. Categorical-set features consume more than 4 bytes each. At the start of training, the logs show the inferred type of each feature.

Make sure the type of each feature is as expected.
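
If a feature is inferred with the wrong semantic, it can be declared explicitly. A minimal sketch ("age" and "tags" are placeholder feature names):

import tensorflow_decision_forests as tfdf

# Declare feature semantics explicitly instead of relying on automatic
# inference; non-specified features are still used with inferred types.
features = [
    tfdf.keras.FeatureUsage(name="age", semantic=tfdf.keras.FeatureSemantic.NUMERICAL),
    tfdf.keras.FeatureUsage(name="tags", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL_SET),
]
model = tfdf.keras.GradientBoostedTreesModel(
    features=features, exclude_non_specified_features=False)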

  3. To speed up training, numerical features are pre-sorted. This index consumes a significant amount of memory (~2x the dataset memory usage).

This index can be disabled as follows:

# Access advanced hyper-parameters.
# See the .proto sections in
# https://github.com/google/yggdrasil-decision-forests/blob/main/documentation/learners.md for more details.
import tensorflow_decision_forests as tfdf
from wurlitzer import sys_pipes  # Forwards the C++ training logs to the notebook.
from yggdrasil_decision_forests.learner.decision_tree import decision_tree_pb2
from yggdrasil_decision_forests.learner.gradient_boosted_trees import gradient_boosted_trees_pb2

# Disable the pre-sorting of numerical features. The extension must match
# the learner (here, gradient boosted trees).
yggdrasil_training_config = tfdf.keras.core.YggdrasilTrainingConfig()
advanced_gbt_config = yggdrasil_training_config.Extensions[
    gradient_boosted_trees_pb2.gradient_boosted_trees_config]
advanced_gbt_config.decision_tree.internal.sorting_strategy = (
    decision_tree_pb2.DecisionTreeTrainingConfig.Internal.SortingStrategy.IN_NODE)
advanced_arguments = tfdf.keras.AdvancedArguments(
    yggdrasil_training_config=yggdrasil_training_config)
model = tfdf.keras.GradientBoostedTreesModel(
    num_trees=300, advanced_arguments=advanced_arguments)

# "train_ds" is the training tf.data.Dataset prepared earlier.
with sys_pipes():
  model.fit(train_ds)

Note: I am setting a TODO to make it easier to disable the index construction.

  4. The size of the model is generally not an issue. However, an incorrectly configured training can lead to a large model. For example, training a classification model on a regression problem can produce a huge model, as each distinct numerical label value is treated as a separate class.

Make sure the task argument of the model constructor is set appropriately.
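
For instance, for a regression label, both the dataset conversion and the model should be configured with the regression task. A minimal sketch (train_df and "price" are placeholder names):

import tensorflow_decision_forests as tfdf

# Set the task explicitly; the default is classification, which would
# treat every distinct numerical label value as a class.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    train_df, label="price", task=tfdf.keras.Task.REGRESSION)
model = tfdf.keras.GradientBoostedTreesModel(task=tfdf.keras.Task.REGRESSION)
model.fit(train_ds)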

  5. We are working on open-sourcing solutions for large datasets (e.g. >1B examples).

In the meantime, a possible workaround is to train an ensemble of models, where each model is trained on a small subset of the dataset.
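
A minimal sketch of that workaround, assuming pandas DataFrames train_df / test_df with a "label" column (all placeholders), averaging the per-model predictions:

import numpy as np
import tensorflow_decision_forests as tfdf

# Shuffle and split the DataFrame into shards small enough to fit in RAM,
# then train one model per shard.
shards = np.array_split(train_df.sample(frac=1.0, random_state=42), 4)
models = []
for shard in shards:
    ds = tfdf.keras.pd_dataframe_to_tf_dataset(shard, label="label")
    model = tfdf.keras.GradientBoostedTreesModel()
    model.fit(ds)
    models.append(model)

# Ensemble: average the predicted probabilities of the individual models.
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="label")
avg_predictions = np.mean([m.predict(test_ds) for m in models], axis=0)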

achoum commented 3 years ago

TF-DF 0.1.6 introduces the sorting_strategy parameter, which makes it easy to disable the creation of this index (and significantly reduces memory consumption).

The following code is equivalent to the AdvancedArguments snippet given above.

model = tfdf.keras.GradientBoostedTreesModel(num_trees=300, sorting_strategy="IN_NODE")
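
For completeness, a minimal end-to-end usage sketch with the new parameter (train_df and "label" are placeholder names):

import tensorflow_decision_forests as tfdf

train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="label")
model = tfdf.keras.GradientBoostedTreesModel(num_trees=300, sorting_strategy="IN_NODE")
model.fit(train_ds)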