dmlc / xgboost

Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
https://xgboost.readthedocs.io/en/stable/
Apache License 2.0

High memory consumption in python xgboost #5474

Closed: pplonski closed this issue 1 year ago

pplonski commented 4 years ago

I'm working on python AutoML package and one of my users reported very high memory usage while using xgboost.

I've investigated the memory consumption of xgboost. You can find the notebook here. From the code, you can see that the model allocates over 7 GB of RAM. When I save the model to disk (5 kB!) and then load it back, I can save a huge amount of RAM.

To me, it looks like xgboost is storing a copy of the data in its structures. Am I right?

Is there any way to slim down xgboost's memory usage? Do you think that saving the model to disk and then loading it back is a reasonable way to handle this issue?
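
Roughly, the save-and-reload workaround described above could look like the sketch below; the dataset, parameters, and file path are placeholders rather than details from the original notebook:

```python
import gc
import numpy as np
import xgboost as xgb

# Placeholder data standing in for the real training set.
X = np.random.rand(100_000, 50)
y = np.random.randint(0, 2, size=100_000)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=100)

# Persist the trained model (small on disk), then drop the in-memory objects.
booster.save_model("model.json")
del booster, dtrain
gc.collect()

# Reload: the restored Booster holds only the trees, not the training data.
booster = xgb.Booster()
booster.load_model("model.json")
```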

trivialfis commented 4 years ago

@pplonski We are trying to eliminate the copy for the histogram algorithm. It's a work in progress. For GPU it is mostly done: https://github.com/dmlc/xgboost/pull/5420 https://github.com/dmlc/xgboost/pull/5465

CPU still has some more work to do.

SmirnovEgorRu commented 4 years ago

@pplonski, we have also reduced memory consumption on CPU in https://github.com/dmlc/xgboost/pull/5334, but for the 'hist' method only. It's only in master for now, but I hope it will be part of a future release.

| Memory, KB | Airline | Higgs1m |
| --- | --- | --- |
| Before | 28311860 | 1907812 |
| With https://github.com/dmlc/xgboost/pull/5334 | 16218404 | 1155156 |
| Reduction factor | 1.75 | 1.65 |

Agree with @trivialfis, there are many things left to do in this area.
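
For reference, opting into the histogram method on CPU (the code path the PR above optimizes) looks roughly like this; a minimal sketch with a placeholder dataset and illustrative parameter values, assuming a build that includes the change:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100_000, 50)           # placeholder dataset
y = np.random.randint(0, 2, size=100_000)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "tree_method": "hist",    # histogram-based CPU algorithm targeted by the PR
    "max_bin": 256,           # number of histogram bins per feature
    "objective": "binary:logistic",
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```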

dhruvrnaik commented 4 years ago

Hi, I have recently faced a similar high memory problem with xgboost. I am using 'gpu_hist' for training.

I notice large system-memory spikes when the train() method is executed, which leads to my Jupyter kernel crashing.

  1. Is it correct to say that xgboost makes a copy of my data in system RAM (even when I'm using 'gpu_hist')?
  2. I was under the assumption that xgboost loads the entire training data onto the GPU. Is that also incorrect?

trivialfis commented 3 years ago

Memory usage has since improved a lot with in-place prediction and DeviceQuantileDMatrix. Feel free to try them out. One remaining issue is that the CPU algorithm sometimes uses memory proportional to n_threads * n_features.
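
A minimal sketch of those two features, assuming a CUDA-capable environment with cupy installed and an xgboost version from that era (DeviceQuantileDMatrix was later superseded by QuantileDMatrix); data and parameter values are placeholders:

```python
import cupy as cp
import xgboost as xgb

X = cp.random.rand(100_000, 50)                 # data already resident on the GPU
y = cp.random.randint(0, 2, size=100_000)

# Quantized matrix built directly from device memory, avoiding a host-side copy.
dtrain = xgb.DeviceQuantileDMatrix(X, label=y, max_bin=256)
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=100)

# In-place prediction: no intermediate DMatrix is constructed for inference.
preds = booster.inplace_predict(X)
```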

newmanwang commented 2 years ago

xgboost.train() also consumes a lot of memory (not GPU memory) when making a copy of the Booster as the returned model; in my case, 9 GB before bst.copy() and 34 GB after. How about making the copy optional?

trivialfis commented 2 years ago

> How about making the copy optional?

I'm working on it.

newmanwang commented 2 years ago

I am not really sure whether Booster.save_model() would produce exactly the same model file if bst.copy() is omitted when xgboost.train() returns. Would it be safe for xgboost.train() to simply return bst directly, without the copy, if nothing happens between xgboost.train() and Booster.save_model()? I'm hoping it won't make any difference to predictions made with the resulting model. xgboost-1.5.1 @trivialfis

trivialfis commented 2 years ago

The copy is exact; no change happens during the copy.

newmanwang commented 2 years ago

> The copy is exact; no change happens during the copy.

Thanks for the reply, really appreciate it. Looking forward to the upcoming release!

trivialfis commented 1 year ago

We have implemented QuantileDMatrix for the hist tree method and made it the default for the sklearn interface, along with which in-place prediction is used for sklearn whenever feasible. Beyond these, some additional memory-reduction optimizations for hist have been implemented. Lastly, UBJSON is now the default serialization format for pickle. I think we can close this issue and consider further memory reduction as future optimization work.
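
A minimal sketch of what this looks like from the user side, with a placeholder dataset and illustrative parameters; the sklearn path relies on the internal QuantileDMatrix, while the native API can build it explicitly:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100_000, 50)           # placeholder dataset
y = np.random.randint(0, 2, size=100_000)

# sklearn interface: with hist, a QuantileDMatrix is built internally and
# in-place prediction is used where feasible, avoiding extra data copies.
clf = xgb.XGBClassifier(tree_method="hist", n_estimators=100)
clf.fit(X, y)
preds = clf.predict(X)

# Native interface: the compressed quantized matrix can be built explicitly.
dtrain = xgb.QuantileDMatrix(X, label=y, max_bin=256)
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=100)
```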