david-cortes / isotree

(Python, R, C/C++) Isolation Forest and variations such as SCiForest and EIF, with some additions (outlier detection + similarity + NA imputation)
https://isotree.readthedocs.io
BSD 2-Clause "Simplified" License

Isotree Isolation forest generating large model pickle file #48

Closed. jimmychordia closed this issue 1 year ago.

jimmychordia commented 1 year ago

I am building an anomaly detection model using isotree, and the model pickle file, if I dump it via joblib without any compression, comes out at about 65GB. Loading this model file for real-time scoring requires around 256GB of RAM just to get it back into a Python object before scoring new data. Is there a better way to do this, or any tips on reducing the model size without impacting the accuracy of the model?
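Roughly what I am doing (a sketch with placeholder file names; `train_data` / `new_data` stand in for my real data):

```python
import numpy as np
import joblib
from isotree import IsolationForest

# Stand-ins for my real data, which is far larger.
rng = np.random.default_rng(1)
train_data = rng.normal(size=(1000, 10))
new_data = rng.normal(size=(5, 10))

# Training side: fit and dump with joblib, no compression.
# With my real data the resulting pickle is about 65GB.
iso_model = IsolationForest()
iso_model.fit(train_data)
joblib.dump(iso_model, "model.pkl")

# Scoring side: loading the pickle back needs ~256GB of RAM
# before any new rows can be scored.
loaded = joblib.load("model.pkl")
scores = loaded.predict(new_data)
```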

david-cortes commented 1 year ago

Thanks for the bug report. A couple of questions:

jimmychordia commented 1 year ago

Below are my answers, inline after each question:

  1. Is this happening with the latest version of isotree? Currently, this is version 0.5.17-5.
     Using 0.5.17.

  2. Does it also happen if you use the method export_model instead of pickle?
     Yes, with both methods the file comes out at 65GB after export.

  3. Are you passing non-default parameters to pickle? (e.g. some specific protocol, compression, etc.)

     ```python
     joblib.dump(iso_model, 'model.pkl'.format(prod))
     joblib.dump(iso_model, 'model_compresses.pkl'.format(prod), compress=9)
     ```

     With export_model it is the same size issue (see the sketch after this list).

  4. What kind of hyperparameters are you using?

     ```python
     iso_model = IsolationForest(
         ntrees=20, nthreads=1, categ_split_type="single_categ",
         scoring_metric="density", ndim=1,
         missing_action="impute", random_state=1,
     )
     iso_model.fit_transform(data_sub.drop(drop, axis=1))
     ```

  5. Are you using a regular HDD/SSD or are you trying to write to e.g. a networked drive, or some other special storage media?
     I am running the code on a SageMaker Studio notebook and using S3 for storage.
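The export_model path from point 3 was roughly this (a sketch: the file name is a placeholder, `iso_model` / `data_sub` / `drop` are as in the snippets above, and the import_model call is written from memory, so the exact invocation may differ between isotree versions):

```python
from isotree import IsolationForest

# Native isotree serialization instead of pickling; for me this
# also produces a ~65GB file.
iso_model.export_model("model.bin")

# Loading it back for scoring (check the isotree docs if the
# import_model call differs in your version).
loaded = IsolationForest.import_model("model.bin")
scores = loaded.predict(data_sub.drop(drop, axis=1))
```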

david-cortes commented 1 year ago

Thanks for the information. So it seems there's no bug here: if you call fit_transform, no row sub-sampling will be applied to the trees, and if you use the default value for max_depth, that in turn will be determined from the full number of rows. If the number of rows is very large, the depth of each tree will also be very large, which leads to very heavy models.

Additionally, when the model is used as a missing-value imputer (which is what happens when you call fit_transform), the trees it builds for imputation are expected to be much heavier than the trees used only for producing anomaly scores.

As for what you could do: if the number of rows is very large, call fit and perhaps manually set the number of rows per tree (parameter sample_size); the models should then become much smaller. Also, if you don't plan to use the model as a missing-value imputer, don't call fit_transform, and don't pass build_imputer.
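Something along these lines, reusing your hyperparameters (a sketch, not a tuned recommendation: sample_size=256 is just the classic isolation-forest sub-sample size, and `data_sub` / `drop` refer to your snippet above):

```python
from isotree import IsolationForest

# Sub-sample rows per tree and skip the imputer:
# call fit (not fit_transform) and leave build_imputer at its default (False).
iso_model = IsolationForest(
    ntrees=20,
    ndim=1,
    sample_size=256,               # rows per tree; tune for your data
    categ_split_type="single_categ",
    scoring_metric="density",
    missing_action="impute",
    nthreads=1,
    random_state=1,
)
iso_model.fit(data_sub.drop(drop, axis=1))

# Scoring afterwards:
scores = iso_model.predict(data_sub.drop(drop, axis=1))
```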