Have you tried not placing the RAPIDS Accelerator jar in the Spark jars directory and, instead of specifying the driver/executor classpath via the command line and configs, using the --jars flag? e.g.:
spark330/sbin/start-thriftserver.sh --jars rapids-4-spark_2.12-23.10.0.jar --conf spark.plugins=com.nvidia.spark.SQLPlugin --conf spark.rapids.sql.explain=ALL --master spark://sparkhost:7077
This worked for me: I was able to show tables and select elements from a table using PyHive to connect to the Spark thriftserver, and I verified via the Hive thriftserver log that the RAPIDS Accelerator was being used during the queries.
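If it helps, one way to double-check that the plugin is active is to grep the thrift server driver log for GPU plan output. This is only a sketch; the log file glob is an assumption based on a typical standalone deployment, and what gets printed depends on spark.rapids.sql.explain:
# Sketch: look for RAPIDS Accelerator plan output in the thrift server driver log.
# Adjust the glob below for your host and user name.
grep -i 'gpuoverrides\|rapids' $SPARK_HOME/logs/spark-*HiveThriftServer2*.out | head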
I have tried to reproduce this as well with different modes of jar submission. So far I have not been able to.
You cannot use the official Spark package. Instead, a Spark package with the Thrift Server needs to be compiled, as referenced.
This looks like an outdated note; the hive-thriftserver profile is enabled in standard builds:
$ cat ~/dist/spark-3.5.0-bin-hadoop3/RELEASE
Spark 3.5.0 (git revision ce5ddad9903) built for Hadoop 3.3.4
Build flags: -B -Pmesos -Pyarn -Pkubernetes -Psparkr -Pscala-2.12 -Phadoop-3 -Phive -Phive-thriftserver
Copy rapids-4-spark.jar to $SPARK_HOME/jars/rapids-4-spark.jar.
This has worked for me too, but it is my least-favorite deployment option. It is typically only required in standalone mode (your case), and only when the RapidsShuffleManager is used as well (not present in your conf). That does not look like the case here, but mixing --jars (spark.jars) while the jar is also in $SPARK_HOME/jars may cause issues (#5758) with this symptom. spark.jars is preferable.
--driver-class-path $SPARK_HOME/jars/rapids-4-spark.jar
This is not necessary when you have already placed the jar under $SPARK_HOME/jars.
In your setup it looks cleanest to remove the jar from $SPARK_HOME/jars and start the thriftserver with --jars.
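For example, a minimal sketch of that cleaner setup (jar removed from $SPARK_HOME/jars; the path and version are placeholders):
# Sketch: pass the plugin only via --jars, with no copy of it in $SPARK_HOME/jars
# and no --driver-class-path entry for it.
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master spark://sparkhost:7077 \
  --jars /path/to/rapids-4-spark_2.12-23.10.0.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.explain=ALL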
Hi @gerashegalov @jlowe. Thanks for your help. Here is a YouTube link that shows how I encountered this error.
Furthermore, I am using Spark 3.3.0 instead of the newest Spark version 3.5.0, so I need to recompile my Spark package. I can try with Spark 3.5.0 later.
Thanks for the demo @LIN-Yu-Ting. I was using beeline to connect to the thriftserver. Can you check if $SPARK_HOME/beeline works for you? Maybe the issue originates in Superset?
The standard Spark build for 3.3.0 works with beeline for me. And again, I am not sure why you need to recompile Spark for the hive-thriftserver; it should already be there.
cat ~/dist/spark-3.3.0-bin-hadoop3/RELEASE
Spark 3.3.0 (git revision f74867bddf) built for Hadoop 3.3.2
Build flags: -B -Pmesos -Pyarn -Pkubernetes -Psparkr -Pscala-2.12 -Phadoop-3 -Phive -Phive-thriftserver
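If you want to confirm this in your own download, a quick sketch (assuming the standard binary-distribution layout):
# The -Phive-thriftserver build flag should show up in the RELEASE file,
# and the thriftserver jar should already ship under jars/.
grep -o 'Phive-thriftserver' $SPARK_HOME/RELEASE
ls $SPARK_HOME/jars | grep -i thriftserver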
At any rate, can you provide your exact build command so we can double-check whether this is about the custom build?
@gerashegalov I have tried both beeline and the PyHive package, and as you said, they are able to execute SQL queries without exceptions. However, when I execute a SQL query from Superset through PyHive, I get the above exception, which is quite weird.
@jlowe @gerashegalov I got more information from the Spark Thrift Server logs which might give us more insight. The error actually occurs when Superset executes the command
SHOW VIEWS IN `table`
Can you please try executing this command on your side to see whether you can reproduce the error? Thanks a lot.
07:17:30.296 WARN HiveConf - HiveConf of name hive.server2.thrift.http.bind.host does not exist
07:19:46.238 WARN HiveConf - HiveConf of name hive.server2.thrift.http.bind.host does not exist
07:19:46.239 INFO DAGScheduler - Asked to cancel job group e36c180c-0f3e-423c-8226-319b29bb656a
07:19:46.239 INFO SparkExecuteStatementOperation - Close statement with e36c180c-0f3e-423c-8226-319b29bb656a
07:19:46.810 WARN HiveConf - HiveConf of name hive.server2.thrift.http.bind.host does not exist
07:19:46.810 INFO SparkExecuteStatementOperation - Submitting query 'SHOW VIEWS IN singlecellrnaonedb' with 47d611b8-1fb1-47a4-a49d-484313c8c2b7
07:19:46.811 INFO SparkExecuteStatementOperation - Running query with 47d611b8-1fb1-47a4-a49d-484313c8c2b7
07:19:46.815 ERROR GpuOverrideUtil - Encountered an exception applying GPU overrides java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: com/nvidia/spark/rapids/RuleNotFoundRunnableCommandMeta
java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: com/nvidia/spark/rapids/RuleNotFoundRunnableCommandMeta
Note: I have tried to execute SHOW VIEWS IN `table` on my side with both beeline and PyHive. Both work. Only the SQL query from Superset fails.
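For reference, a minimal sketch of issuing such a statement from a plain beeline session (assuming the thrift server listens on the default port 10000):
# Sketch: run the same statement outside Superset through beeline.
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -e 'SHOW VIEWS IN `default`;'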
Might the error be generated by multiple sessions in the Spark Thrift JDBC server? It seems that Superset creates multiple sessions to the Spark Thrift Server, as shown in the following Spark UI. When a query is sent by either beeline or the PyHive package, only one session exists in the Spark Thrift Server, and in that case, no matter what SQL command is executed, everything is fine.
We have seen issues with multi-session notebooks such as Databricks. I will give Superset a try; I am not familiar with it yet.
@LIN-Yu-Ting another suggestion to quickly unblock while we are looking at it.
Classloading issues are likely to go away if you build our artifact from scratch using the instructions for a single-Spark-version build. To this end, check out or download the source for the version tag. In your case the Apache Spark version you want to build for is 3.3.0, which can be accomplished by running the following from the local repo's root dir:
mvn package -pl dist -am -Dbuildver=330 -DallowConventionalDistJar=true -DskipTests
Since the tests are skipped you do not need a GPU on the machine used for the build.
The artifact will be under: dist/target/rapids-4-spark_2.12-<version>-cuda<cuda.version>.jar
@gerashegalov Thanks for providing this workaround. I tried to build locally and replace the jar. Unfortunately, however, I still got the same error as before. Anyway, I appreciate it.
@LIN-Yu-Ting Can you double-check that your jar is indeed "conventional"?
The output from running the command below should be 0:
$ jar tvf dist/target/rapids-4-spark_2.12-23.12.0-SNAPSHOT-cuda11.jar | grep -c spark3xx
0
@gerashegalov Here is the screenshot after executing this command:
With the new jar you built, can you place it back into the $SPARK_HOME/jars/ directory and try it there? Remove it from the --jars parameter.
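Something along these lines (a sketch; the artifact name depends on the version and CUDA classifier you built):
# Sketch: deploy the locally built jar on the static classpath only.
cp dist/target/rapids-4-spark_2.12-*cuda*.jar $SPARK_HOME/jars/
# ...then restart the thrift server without --jars or --driver-class-path
# entries pointing at the RAPIDS jar.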
Thanks a lot for confirming @LIN-Yu-Ting.
Can we try one more thing? Can you start the thrift server with additional params to enable verbose classloading: --driver-java-options=-verbose:class --conf spark.executor.extraJavaOptions=-verbose:class
and grep the thrift server / driver log for the rapids-4-spark jar to rule out additional jars on the classpath:
$ grep -o 'Loaded.*rapids-4-spark.*\.jar*' $SPARK_HOME/logs/spark-gshegalov-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-gshegalov-dual-5760.out | cut -d: -f 3 | sort -u
/some/path/rapids-4-spark_2.12-23.10.0-cuda11.jar
In branch-24.02 we also have a new feature that detects duplicate jars automatically (#9654). You may want to try this https://github.com/NVIDIA/spark-rapids/issues/9867#issuecomment-1832805896 again but with the HEAD of branch-24.02. You can try the default, but it is better to add --conf spark.rapids.sql.allowMultipleJars=NEVER to the thrift server conf.
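For example, a sketch of adding it to the launch (the jar path and version are placeholders):
# Sketch: fail fast if more than one RAPIDS jar ends up on the classpath.
$SPARK_HOME/sbin/start-thriftserver.sh \
  --jars /path/to/rapids-4-spark_2.12-24.02.0-SNAPSHOT-cuda11.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.allowMultipleJars=NEVER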
I was able to reproduce this NoClassDefFoundError.
I confirmed that it goes away with the simple -DallowConventionalDistJar=true build when it is used on a static classpath, i.e. $SPARK_HOME/jars or via --driver-class-path / spark.executor.extraClassPath.
Even with the simple jar we get the NoClassDefFoundError if the jar is passed via --jars.
While the workaround for the NoClassDefFoundError is correct, there are some org.apache.thrift.transport.TTransportException errors for metadata queries, but I see them with CPU Spark as well. Generally I can run SQL queries in SQL Lab fine.
I confirmed that with the simple -DallowConventionalDistJar=true build and a static classpath ($SPARK_HOME/jars), there is no NoClassDefFoundError anymore.
Thanks a lot @gerashegalov.
Going back to our multi-shim production jar, it looks like there is a race condition that affects all the sessions if the user-classloader deployment (--jars) is used. After Superset sessions make GpuOverrides throw NoClassDefFoundErrors, connecting a single beeline session reproduces the same NoClassDefFoundError. The good news is that the reverse is also true: after the HiveThriftServer2 is "pre-warmed" with a single session from beeline, Superset's metadata queries start succeeding without NoClassDefFoundError.
So another workaround is to run a single beeline session:
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -e 'USE `default`; SHOW FUNCTIONS; SHOW SCHEMAS; SHOW TABLES IN `default`; SHOW VIEWS IN `default`;'
before allowing traffic to the thrift server from Superset.
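A sketch of automating that pre-warm step (assumes the default port 10000, that nc is available, and that Superset traffic can be held back until the script finishes; the jar path is a placeholder):
# Sketch: start the thrift server, wait for the JDBC port to open,
# then pre-warm it with a single beeline session before Superset connects.
$SPARK_HOME/sbin/start-thriftserver.sh \
  --jars /path/to/rapids-4-spark_2.12-23.10.0.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin
until nc -z localhost 10000; do sleep 1; done
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 \
  -e 'USE `default`; SHOW FUNCTIONS; SHOW SCHEMAS; SHOW TABLES IN `default`; SHOW VIEWS IN `default`;'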
We should review the usage of lazy vals.
Describe the bug
Our objective is to activate Spark RAPIDS (SQLPlugin) with the Spark Thrift Server. However, we encountered an exception related to ClassNotFound. For your reference, the Spark Thrift Server is also known as the Distributed SQL Engine.
Steps/Code to reproduce bug
You need to launch the Spark Thrift Server with $SPARK_HOME/sbin/start-thriftserver.sh using the following steps:
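A sketch of the kind of launch discussed in this thread, reconstructed from the configuration mentioned above (jar copied into $SPARK_HOME/jars and also referenced via --driver-class-path); this is an illustration, not necessarily the reporter's exact steps, and the paths and versions are placeholders:
# Sketch: copy the plugin jar onto the static classpath, then start the thrift server.
cp rapids-4-spark_2.12-23.10.0.jar $SPARK_HOME/jars/rapids-4-spark.jar
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master spark://sparkhost:7077 \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --driver-class-path $SPARK_HOME/jars/rapids-4-spark.jar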
Expected behavior
Under the folder $SPARK_HOME/logs, you will see a Spark Thrift Server log with the following exception:
Environment details (please complete the following information)
Additional context
These exceptions only happen with the Thrift Server. With the same configuration, I am able to launch spark-shell and execute whatever SQL commands I want.