Closed pplonski closed 1 year ago
@pplonski We are trying to eliminate the copy for the histogram algorithm. It's a work in progress. For GPU it is mostly done: https://github.com/dmlc/xgboost/pull/5420 https://github.com/dmlc/xgboost/pull/5465
CPU still has some more work to do.
@pplonski, we also reduced memory consumption on the CPU in this PR https://github.com/dmlc/xgboost/pull/5334, but only for the 'hist' method. It's currently only in master, but I hope it will be part of the next release.
| memory, KB | Airline | Higgs1m |
|---|---|---|
| Before | 28311860 | 1907812 |
| After https://github.com/dmlc/xgboost/pull/5334 | 16218404 | 1155156 |
| Reduction factor | 1.75x | 1.65x |
Agree with @trivialfis, there are many things to do in this area.
Hi, I have recently faced a similar high-memory problem with xgboost. I am using 'gpu_hist' for training.
I notice large system memory spikes when the train() method is executed, which crashes my Jupyter kernel.
Memory usage has since improved a lot with inplace predict and DeviceQuantileDMatrix. Feel free to try them out. One remaining issue is that the CPU algorithm sometimes uses memory linear in n_threads * n_features.
xgboost.train() also consumes a lot of memory (not GPU memory) when making a copy of the Booster as the returned model; in my case 9 GB before bst.copy() and 34 GB after. How about making the copy optional?
I'm working on it.
I am not really sure whether Booster.save_model() would produce exactly the same model file if bst.copy() is omitted when Booster.train() returns. Would it be safe for Booster.train() to simply return bst directly, without a copy, if nothing happens between Booster.train() and Booster.save_model()? I'm hoping it won't make any difference to predictions from the resulting model. xgboost-1.5.1 @trivialfis
Copy is exact, no change happens during the copy.
Thanks for the reply, really appreciate it. Looking forward to the upcoming release!
We have implemented QuantileDMatrix for the hist tree method and made it the default for the sklearn interface; along with that, inplace prediction is used for sklearn whenever feasible. Beyond these, some additional memory reduction optimizations for hist are implemented. Lastly, UBJSON is now the default serialization format for pickle. I think we can close this issue and treat the remaining memory reduction work as further optimization.
I'm working on a Python AutoML package and one of my users reported very high memory usage while using xgboost.
I investigated xgboost's memory consumption. You can find the notebook here. From the code, you can see that the model allocates over 7 GB of RAM. When I save the model to disk (5 kB!) and then load it back, I save a huge amount of RAM.
To me, it looks like xgboost is storing a copy of the data in its structures. Am I right?
Is there any way to slim down xgboost's memory usage? Do you think saving the model to disk and then loading it back is a reasonable way to handle this issue?