[Open] tristers-at-square opened this issue 2 years ago
Hi, thank you for raising the issue. From your code snippet, it seems you are not using distributed training?
Yes, I am using Rabit to sync across a cluster for distributed training on AWS SageMaker, as shown in their official example:
From the logging output:
Things I've tried:
The training time was slow each time.
Thank you for running these experiments. It's probably due to the number of features. The `approx` method was recently rewritten, and the new version might be less efficient for wide datasets: https://github.com/dmlc/xgboost/pull/7214#issuecomment-990848637. Also, the parameter `sketch_eps` was replaced by `max_bin` to align with `hist`. The old default for `max_bin` translated from `sketch_eps` was around 63, while the rewritten implementation uses 256, which means the new implementation builds larger histograms.
Do you have any suggestions for recovering the old level of performance on wide datasets like this, with 5K to 10K features? We can try setting `max_bin = 63`; if I understand correctly, accuracy-wise that should be on par with what we had before. Are there any other settings that would help?
The main reason we use the `approx` method over the `hist` method for many of our workloads is that `approx` used far less memory than `hist` in version 0.9. Is that still expected to hold in 1.6?
We want to upgrade to 1.6 because of all the great new features since 0.9, like early stopping, categorical feature support, etc. However, an 8x increase in running time is prohibitive in our case.
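For concreteness, here is a minimal sketch of the parameter change suggested above (everything besides `tree_method` and `max_bin` is a placeholder, not our real configuration):

```python
# Parameters for xgboost 1.6 intended to approximate the old 0.90
# "approx" behaviour: max_bin = 63 roughly matches the old default
# translated from sketch_eps, per the comment above.
params = {
    "tree_method": "approx",
    "max_bin": 63,                   # new name for what sketch_eps used to control
    "objective": "binary:logistic",  # placeholder objective
    "nthread": 48,                   # one thread per vCPU on ml.m5.12xlarge
}

# Usage would be the normal training call, e.g.:
#   booster = xgboost.train(params, dtrain, num_boost_round=100)
```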
I recently tried upgrading from 0.90 to 1.6.0. However, my distributed training job (using the `approx` method) is now ~8x slower. On version 0.90, each boosting round took about 25 seconds; on version 1.6.0, each boosting round takes around 3 minutes.
Even the `hist` method on v1.6.0 is slower than `approx` on v0.90.
My dataset has ~5000 features and 500K rows. The exact same parameters and data are used in the training runs for both versions; the only difference is the version. I cannot share the dataset since it is a work dataset. Roughly 20% of the values in the data are null.
One thing I've noticed on version 0.90: if I increase the `nthread` parameter, the time per boosting round goes down, and if I decrease it, the time goes up. This makes sense.
On version 1.6.0, however, increasing or decreasing `nthread` doesn't seem to have any effect. I'm wondering if this is related in some way.
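One way to quantify this is to time a few boosting rounds per `nthread` setting. A minimal, hypothetical harness (the `train_one_round` callable stands in for an actual single-round `xgb.train` call):

```python
import time

def time_per_round(train_one_round, nthread_values, rounds=3):
    """Average seconds per boosting round for each nthread setting.

    train_one_round(nthread) is a caller-supplied stand-in for running
    one boosting round (e.g. xgb.train with num_boost_round=1).
    """
    averages = {}
    for nthread in nthread_values:
        start = time.perf_counter()
        for _ in range(rounds):
            train_one_round(nthread)
        averages[nthread] = (time.perf_counter() - start) / rounds
    return averages
```

If the averages barely move between `nthread=1` and `nthread=48`, the training loop is not scaling with threads.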
Here's the relevant code snippet:
Using two ml.m5.12xlarge instances (48 vCPUs, 192 GiB RAM each) on AWS SageMaker for each training job. Python 3.7.