palantir / spark-influx-sink

A Spark metrics sink that pushes to InfluxDb
Apache License 2.0

Executor in YARN mode: ClassNotFoundException: org.apache.spark.metrics.sink.InfluxDbSink #1

Closed: thunderstumpges closed this issue 7 years ago

thunderstumpges commented 7 years ago

Hello,

I stumbled on this project, which appears quite new but is just what I'm looking for. I have followed the instructions in the README for the most part. Here is my setup:

I DO see the metrics library load in the driver in YARN (cluster mode). I added a few info log statements where the reporter starts up, and I see them in my container log for the driver. I also see metrics for the driver in InfluxDB, so I know the reporter is loading and reporting OK (at least in the driver).

However, the executor containers are getting the following exception:

17/04/10 16:08:46 ERROR metrics.MetricsSystem: Sink class org.apache.spark.metrics.sink.InfluxDbSink cannot be instantiated
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.metrics.sink.InfluxDbSink
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:229)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:198)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:194)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at org.apache.spark.metrics.MetricsSystem.registerSinks(MetricsSystem.scala:194)
        at org.apache.spark.metrics.MetricsSystem.start(MetricsSystem.scala:102)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:364)
        at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:200)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:223)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        ... 4 more

Looking at the driver's output of the YARN "executor launch context", I can also see that my jar is being added to the front of the classpath (reindex-spark-job-0.3-SNAPSHOT-all.jar is my uber-jar):

17/04/10 16:09:26 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    SPARK_YARN_USER_ENV -> PYSPARK_PYTHON=/opt/rh/rh-python35
    CLASSPATH -> reindex-spark-job-0.3-SNAPSHOT-all.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
    SPARK_DIST_CLASSPATH -> /etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
    SPARK_YARN_STAGING_DIR -> hdfs://nameservice1/user/tstumpges/.sparkStaging/application_1480652198027_0357
    SPARK_USER -> tstumpges
    SPARK_YARN_MODE -> true
    PYSPARK_PYTHON -> /opt/rh/rh-python35

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx4096m \
      '-Ddconfig.consul.keyStores=global,dev,host/hdpdev-01.cb.ntent.com,jobs/test-job/global,jobs/test-job/dev,jobs/test-job/host/hdpdev-01.cb.ntent.com' \
      -Djava.io.tmpdir={{PWD}}/tmp \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.CoarseGrainedExecutorBackend \
      --driver-url \
      spark://CoarseGrainedScheduler@10.0.126.196:47361 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      2 \
      --app-id \
      application_1480652198027_0357 \
      --user-class-path \
      file:$PWD/__app__.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    __app__.jar -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/reindex-spark-job-0.3-SNAPSHOT-all.jar" } size: 191145201 timestamp: 1491865690837 type: FILE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/__spark_libs__1980285003983078799.zip" } size: 197750381 timestamp: 1491865671716 type: ARCHIVE visibility: PRIVATE
    spark-metrics.properties -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/spark-metrics.properties" } size: 301 timestamp: 1491862976743 type: FILE visibility: PUBLIC
    __spark_conf__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/__spark_conf__.zip" } size: 33484 timestamp: 1491865690906 type: ARCHIVE visibility: PRIVATE

===============================================================================
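
For reference, the sink is wired up through the shipped spark-metrics.properties in the standard Spark metrics-config style. A minimal sketch (the influx-specific option names and values below are placeholders, not copied from my file; the project README has the authoritative list):

# spark-metrics.properties (sketch)
# register the sink for all instances (master, worker, driver, executor)
*.sink.influx.class=org.apache.spark.metrics.sink.InfluxDbSink
# connection settings: placeholder keys/values, check the README
*.sink.influx.protocol=http
*.sink.influx.host=influxdb.example.com
*.sink.influx.port=8086
*.sink.influx.database=spark_metrics

The file is pointed at with --conf spark.metrics.conf=spark-metrics.properties and shipped with --files so it is localized in each YARN container.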

Any thoughts or suggestions? I can't see why the class would be found and load fine in the driver but not in the executors. All my other application classes are loading just fine, and I can see the InfluxDbSink class in my uber-jar.

Thanks!

thunderstumpges commented 7 years ago

To follow up here: I was able to get things working by adding this jar and the metrics-influxdb jar individually instead of through my uber-jar. I still don't understand why the first approach didn't work (and only for the executors), but in any case I seem to have it working (mostly).
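
In case it helps someone else, the submit now looks roughly like this (a sketch of one way to do it; the sink/metrics jar names and the main class are placeholders rather than my exact values):

spark-submit \
  --master yarn --deploy-mode cluster \
  --files spark-metrics.properties \
  --conf spark.metrics.conf=spark-metrics.properties \
  --jars spark-influx-sink.jar,metrics-influxdb.jar \
  --conf spark.driver.extraClassPath=spark-influx-sink.jar:metrics-influxdb.jar \
  --conf spark.executor.extraClassPath=spark-influx-sink.jar:metrics-influxdb.jar \
  --class com.example.ReindexJob \
  reindex-spark-job-0.3-SNAPSHOT-all.jar

The idea is that --jars localizes the two jars into each container's working directory and the extraClassPath entries put them on the JVM's system classpath, which is what the executor's MetricsSystem sees when it instantiates the sink at startup. My best guess at why the uber-jar approach failed is that the uber-jar is shipped as __app__.jar and only added through the executor's user classloader, which isn't consulted at that point.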

I am only getting a few of the metrics (CodeGenerator, HiveExternalCatalog, and a couple of base metrics); I'm missing the streaming source metrics and, I think, a bunch of others. Looking into that now, but this issue itself seems to be resolved.
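
(A note for anyone hitting the same thing: if the missing metrics are for Structured Streaming queries, those are off by default and have to be enabled explicitly, e.g. in spark-defaults.conf or via --conf:)

# off by default; enables Dropwizard metrics for active Structured Streaming queries
spark.sql.streaming.metricsEnabled  true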

tstearns commented 7 years ago

Glad you got it working. We tend to use YARN client mode for apps that use this library, so it's possible there are nuances to how the individual JARs get shipped (I wouldn't be surprised). For what it's worth, we also add these JARs individually to Spark's classpath rather than through uber-jars.
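
As a sketch, the cluster-wide variant of that is just a couple of lines in spark-defaults.conf (paths below are illustrative; the jars need to exist at the same location on every node):

# spark-defaults.conf
spark.driver.extraClassPath    /opt/spark-extras/spark-influx-sink.jar:/opt/spark-extras/metrics-influxdb.jar
spark.executor.extraClassPath  /opt/spark-extras/spark-influx-sink.jar:/opt/spark-extras/metrics-influxdb.jar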