Open shuDaoNan9 opened 2 years ago
@JWenBin I would recommend trying to avoid numBatches if possible. All it does is split the dataset into multiple batches, train the model on the first batch, then retrain it on the second batch, and so on. Increasing the number of batches makes running out of memory much less likely, but it will also likely decrease model accuracy and increase training time.
It looks like you have 26320507 rows - how many columns are there? What is your cluster size - the number of nodes, CPUs per machine, and RAM per machine?
Based on this submission command you posted:
--num-executors 1 --executor-memory 50G --executor-cores 47
it looks like you are using a single executor with 50 GB of RAM allocated to it and 47 cores to run distributed training: https://community.cloudera.com/t5/Support-Questions/Spark-num-executors-setting/td-p/149434
The --num-executors is the total number of executors - did you intend to run all training on just a single machine/executor?
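For reference, here is a minimal sketch of how numBatches would be set (assuming the SynapseML LightGBMClassifier API; the import path and the batch count are illustrative, not a recommendation):
import com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier

val batchedClassifier = new LightGBMClassifier()
  .setLabelCol("click")
  .setFeaturesCol("gbdtFeature")
  .setObjective("binary")
  // Split the data into 4 batches: train on batch 1, then continue training
  // the same model on batch 2, and so on. This lowers peak memory usage,
  // but usually increases training time and can reduce accuracy.
  .setNumBatches(4)

val batchedModel = batchedClassifier.fit(trainDF)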
@imatiach-msft Thank you for your reply! In this case, my dataset has 26320507 rows and 38 columns. I used a single machine (48 vCores, 96 GiB memory) for this test because I found it trains faster than multiple machines/executors in my case. It looks like 'java.lang.NegativeArraySizeException' only occurs in this test case. In our production environment the dataset has more than 60M rows and 39 columns, and everything works well there: it takes about 20 minutes to train on a single machine/executor (96 vCores, 192 GiB memory) using the same method. I previously tried 3 machines/executors (36 vCores, 72 GiB memory each) instead, but it took more time to train the model and cost more money.
My multi-machine/executor code:
val classifier = new LightGBMClassifier()
.setLabelCol("click")
.setObjective("binary")
.setCategoricalSlotNames(Array("ua_index","uid_index",......))
.setUseBarrierExecutionMode(true)
.setFeaturesCol("gbdtFeature")
.setPredictionCol("predictPlay")
.setNumIterations(5000)
.setNumLeaves(32)
.setLearningRate(0.002)
.setProbabilityCol("probabilitys")
.setEarlyStoppingRound(200)
.setBoostingType("gbdt")
.setLambdaL2(0.002)
.setMaxDepth(24)
val lgbmModel = classifier.fit(trainDF.repartition(numOfMachines)) // 'repartition(numOfMachines)' may reduce network communication.
My multi-machine/executor submit command: spark-submit --master yarn --driver-cores 2 --driver-memory 5G --num-executors 3 --executor-memory 50G --executor-cores 35 --conf spark.driver.maxResultSize=3G --conf spark.yarn.heterogeneousExecutors.enabled=false --conf spark.dynamicAllocation.enabled=false --jars ......
My multi-machine/executor cluster: 3 nodes, each with 36 vCores and 72 GiB memory. Spark 3.1.2.
Thank you very much!
Hi @JWenBin, I wonder if there is some sort of overflow happening in this function when you hit the "NegativeArraySizeException": https://github.com/microsoft/LightGBM/blob/master/swig/lightgbmlib.i#L37
It looks like your dataset is ~8 GB in size (26320507 rows × 38 columns × 8 bytes per value), but I don't think it should cause an OOM. We currently create a second copy of the dataset in memory when passing it to lightgbm, but that would still only be ~16 GB, and even with Spark taking up quite a bit more memory it should still be well below 50 GB, I think.
It would be interesting to add some debugging messages around that function to see which value specifically is negative. Perhaps it's this line (the buffer_len variable?):
char* dst = new char[buffer_len];
or something else. I can't quite tell, since the stack trace doesn't give the exact line of code or line number; I just see that it's coming from the function "LGBM_BoosterSaveModelToStringSWIG":
Caused by: java.lang.NegativeArraySizeException
at com.microsoft.ml.lightgbm.lightgbmlibJNI.LGBM_BoosterSaveModelToStringSWIG(Native Method)
That function isn't actually doing anything with the dataset; it just saves the model.
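To illustrate the overflow hypothesis on the JVM side (a sketch only; the actual buffer_len is computed in the C++ SWIG wrapper, and the numbers below are just the dataset-size estimate from above, not the real model-string length):
// Illustrative only: a size computed in 32-bit Int arithmetic wraps negative
// once the true value exceeds Int.MaxValue, and allocating an array with it
// then throws java.lang.NegativeArraySizeException.
val rows = 26320507
val cols = 38
val bytesPerValue = 8

val sizeAsInt: Int = rows * cols * bytesPerValue            // wraps to a negative value
val sizeAsLong: Long = rows.toLong * cols * bytesPerValue   // 8001434128, the intended size

println(sizeAsInt)            // prints a negative number
// new Array[Byte](sizeAsInt) // would throw java.lang.NegativeArraySizeException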
Sorry to bother with a year-old issue, but did you find the solution (or the reason)? I'm experiencing the same issue. The strange thing is that it only happens when I set a high number of boosting rounds (e.g. 7,000); no issue happens with a low number of boosting rounds (e.g. 2,000).
If I do not set numBatches, a 'NegativeArraySizeException' or 'OOM' occurs while training a big dataset (about 26320507 rows), and the CPU utilization stays below 90%. But if I set numBatches, it takes more time to train the model, although the CPU utilization is about 97% during each batch.
submit: spark-submit --master yarn --driver-cores 2 --driver-memory 5G --num-executors 1 --executor-memory 50G --executor-cores 47 --conf spark.driver.maxResultSize=3G --conf spark.yarn.heterogeneousExecutors.enabled=false --conf spark.dynamicAllocation.enabled=false --jars ......
code:
NegativeArraySizeException log:
Thank you!
AB#1884827