Xtra-Computing / thundergbm

ThunderGBM: Fast GBDTs and Random Forests on GPUs
Apache License 2.0

Doesn't release GPU memory after fit and predict... #36

Closed: counterpoint1 closed this issue 1 year ago

counterpoint1 commented 4 years ago

I am watching GPU memory in Task Manager and notice that ThunderGBM is not releasing GPU memory between runs. Only killing the console and restarting frees the GPU memory.

Is there a command, along the lines of xgboost's __del__ call, that lets me accomplish this?

Thanks.

Kurt-Liuhf commented 4 years ago

Hi @counterpoint1, thanks for your feedback. We have tested the models for classification, regression and ranking, but we could not detect any memory leak in ThunderGBM. Please make sure you are using the newest version of ThunderGBM. If the memory problem persists, would you mind uploading your code so that we can reproduce the issue? That would be helpful for fixing it.
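For example, a minimal loop along these lines would show whether used device memory keeps growing between runs. This is only a reproduction sketch, not the reporter's code: the data shapes, the loop count, and the nvidia-smi query are illustrative assumptions.

```python
# Hypothetical reproduction sketch: data shapes, loop count, and the
# nvidia-smi query are assumptions for illustration only.
import subprocess
import numpy as np
from thundergbm import TGBMRegressor

def gpu_mem_used_mib():
    # Used memory on GPU 0 in MiB, as reported by nvidia-smi.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    return int(out.decode().splitlines()[0])

X = np.random.rand(50_000, 50)
y = np.random.rand(50_000)

for i in range(10):
    clf = TGBMRegressor()
    clf.n_trees = 1
    clf.depth = 13
    clf.fit(X, y)
    clf.predict(X)
    # Should stay roughly flat if memory is released between runs.
    print(f"run {i}: {gpu_mem_used_mib()} MiB used")
```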

counterpoint1 commented 4 years ago

It's very simple: regression. The problem is that the GPU memory is not released when a run finishes. The xgboost GPU version has the same problem, but there is a workaround: calling the __del__ method, i.e.:

```python
RF = model.predict(xgtest)
model.__del__()
```
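A fuller, self-contained version of that xgboost workaround might look like the sketch below; the data, parameters, and loop are assumptions for illustration only.

```python
# Hypothetical sketch of the xgboost workaround described above; the data and
# parameters are assumptions for illustration only.
import numpy as np
import xgboost as xgb

X = np.random.rand(50_000, 50)
y = np.random.rand(50_000)
dtrain = xgb.DMatrix(X, label=y)

for _ in range(10):
    model = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)
    RF = model.predict(dtrain)
    model.__del__()  # explicitly free the booster's GPU memory between runs
```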

Here is the ThunderGBM code. We just loop this over and over with a dynamically changing training and target set, so it is a pretty simple implementation. This is running the latest 0.34 wheel from your website.

```python
clf = TGBMRegressor()

XX = Sub_train2.values
YY = target2

clf.n_trees = 1
clf.n_parallel_trees = 50
clf.depth = 13
clf.bagging = 0
clf.verbose = 2
clf.learning_rate = .5
clf.min_child_weight = 20
clf.column_sampling_rate = .5
clf.gamma = .6

clf.fit(XX, YY)

RF2 = clf.predict(Sub_predict2)
```
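One thing worth trying between iterations is to drop the estimator and force garbage collection, mirroring the xgboost workaround above. Whether ThunderGBM's wrapper actually frees the device memory when the Python object is collected is an assumption here, not something confirmed in this thread.

```python
# Hypothetical workaround sketch: whether dropping the Python object really
# frees ThunderGBM's device memory depends on the wrapper's destructor, which
# is an assumption here.
import gc

RF2 = clf.predict(Sub_predict2)
del clf        # drop the last reference to the estimator
gc.collect()   # run finalizers promptly before the next iteration
```
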
Kurt-Liuhf commented 4 years ago

Hi @counterpoint1, we have built a new version of ThunderGBM for Windows users. You can download it from here: tgbm.whl, and reinstall ThunderGBM. The new build should resolve this issue. Thanks.