NVIDIA / spark-rapids

Spark RAPIDS plugin - accelerate Apache Spark with GPUs
https://nvidia.github.io/spark-rapids
Apache License 2.0

[BUG] significant slow down with ParquetCachedBatchSerializer and pyspark CrossValidator #5975

Open eordentlich opened 2 years ago

eordentlich commented 2 years ago

**Describe the bug**
First observed when attempting to run pyspark's CrossValidator + VectorAssembler + the pyspark version of XGBoost under review in this PR: https://github.com/dmlc/xgboost/pull/8020. Parts of this should fall back to the CPU due to the VectorUDT column injected by VectorAssembler. However, the running time of certain steps jumps from a few minutes to over an hour when ParquetCachedBatchSerializer is enabled vs. disabled, with the spark-rapids plugin enabled in both cases. I attempted to reproduce this in a more self-contained manner with the code snippet below, which incorporates some of the relevant logic from CrossValidator and XGBoost.

**Steps/Code to reproduce bug**

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val df = spark.range(0, 10000000).toDF("col_1")
val df2 = df.withColumn("rand1", rand()).withColumn("rand2", rand()).withColumn("rand3", rand())

// VectorAssembler produces a VectorUDT column, which is not GPU-supported
// and forces parts of the plan to fall back to the CPU.
val va = new VectorAssembler().setInputCols(Array("rand1", "rand2", "rand3")).setOutputCol("vector")
val df3 = va.transform(df2).withColumn("filter", rand()).filter($"filter" < 0.5)

df3.cache()  // cached via ParquetCachedBatchSerializer when it is enabled
val df4 = df3.repartition(2)
df4.count
```

In my environment, this snippet takes a few seconds to run in spark-shell with ParquetCachedBatchSerializer disabled, but almost 2 minutes when it is enabled.
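For reference, the cache serializer is a static conf that has to be set when the session is created. A minimal pyspark sketch of the two configurations being compared (config keys as documented for spark-rapids; assumes the plugin jar is on the classpath):

```python
from pyspark.sql import SparkSession

# The spark-rapids plugin is enabled in both timing runs; only the
# cache serializer setting differs between them.
spark = (
    SparkSession.builder
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    # Omit the next config for the "disabled" run.
    .config("spark.sql.cache.serializer",
            "com.nvidia.spark.ParquetCachedBatchSerializer")
    .getOrCreate()
)
```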

Another issue with this example: if the line `val df3 = ...` is replaced with `val df3 = df2.withColumn("filter",rand()).filter($"filter" < 0.5)` (i.e., no VectorUDT column added), an ArrayIndexOutOfBoundsException is thrown with ParquetCachedBatchSerializer enabled, while no error occurs with it disabled.

A pyspark version of the above example shows similar behavior.
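For completeness, a pyspark sketch of that version (this mirrors the Scala snippet above rather than being the exact script used; assumes a pyspark shell or a session configured as sketched earlier):

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import rand, col

df = spark.range(0, 10000000).toDF("col_1")
df2 = (df.withColumn("rand1", rand())
         .withColumn("rand2", rand())
         .withColumn("rand3", rand()))

# VectorAssembler emits a VectorUDT column, which is not GPU-supported
# and forces parts of the plan onto the CPU.
va = VectorAssembler(inputCols=["rand1", "rand2", "rand3"], outputCol="vector")
df3 = va.transform(df2).withColumn("filter", rand()).filter(col("filter") < 0.5)

df3.cache()  # cached via ParquetCachedBatchSerializer when it is enabled
df4 = df3.repartition(2)
df4.count()
```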

**Expected behavior**
A much smaller performance penalty with ParquetCachedBatchSerializer enabled in this example, which should resolve the main issue encountered with pyspark's CrossValidator.

**Environment details (please complete the following information)**

WeichenXu123 commented 2 years ago

I guess it is probably an issue in ParquetCachedBatchSerializer? Is it related to the xgboost pyspark integration code?

eordentlich commented 2 years ago

It is not specific to the xgboost pyspark code. Just happened to encounter the issue when trying that.

WeichenXu123 commented 2 years ago

> It is not specific to the xgboost pyspark code. Just happened to encounter the issue when trying that.

But I'm happy to see you tried the xgboost pyspark code. If you find any performance issues, please report them to me. Thanks!