uber-common / jvm-profiler

JVM Profiler Sending Metrics to Kafka, Console Output or Custom Reporter

jvm profiler parameter ignored #63

Closed sashasami03 closed 4 years ago

sashasami03 commented 4 years ago

I am trying to use the Uber JVM profiler to profile my Spark application (Spark 2.4, running on EMR 5.21).

Following is my cluster configuration:

          [
             {
                "classification": "spark-defaults",
                "properties": {
                   "spark.executor.memory": "38300M",
                   "spark.driver.memory": "38300M",
                   "spark.yarn.scheduler.reporterThread.maxFailures": "5",
                   "spark.driver.cores": "5",
                   "spark.yarn.driver.memoryOverhead": "4255M",
                   "spark.executor.heartbeatInterval": "60s",
                   "spark.rdd.compress": "true",
                   "spark.network.timeout": "800s",
                   "spark.executor.cores": "5",
                   "spark.memory.storageFraction": "0.27",
                   "spark.speculation": "true",
                   "spark.sql.shuffle.partitions": "200",
                   "spark.shuffle.spill.compress": "true",
                   "spark.shuffle.compress": "true",
                   "spark.storage.level": "MEMORY_AND_DISK_SER",
                   "spark.default.parallelism": "200",
                   "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
                   "spark.memory.fraction": "0.80",
                   "spark.executor.extraJavaOptions": "-XX:+UseG1GC   -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'",
                   "spark.executor.instances": "107",
                   "spark.yarn.executor.memoryOverhead": "4255M",
                   "spark.dynamicAllocation.enabled": "false",
                   "spark.driver.extraJavaOptions": "-XX:+UseG1GC  -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'"
                   },
                "configurations": []
            },
            {
                "classification": "yarn-site",
                "properties": {
                   "yarn.log-aggregation-enable": "true",
                   "yarn.nodemanager.pmem-check-enabled": "false",
                   "yarn.nodemanager.vmem-check-enabled": "false"
                },
                "configurations": []
            },
            {
                "classification": "spark",
                "properties": {
                   "maximizeResourceAllocation": "true",
                   "spark.sql.broadcastTimeout": "-1"
                 },
                 "configurations": []
            },
            {
                 "classification": "emrfs-site",
                 "properties": {
                     "fs.s3.threadpool.size": "50",
                     "fs.s3.maxConnections": "5000"
                  },
                  "configurations": []
            },
            {
                  "classification": "core-site",
                  "properties": {
                     "fs.s3.threadpool.size": "50",
                     "fs.s3.maxConnections": "5000"
                   },
                   "configurations": []
             }

    ]

The profiler jar is stored in S3 (mybucket/profilers/jvm-profiler-1.0.0.jar). While bootstrapping my core and master nodes, I run the following bootstrap script:

     sudo mkdir -p /tmp
     aws s3 cp s3://mybucket/profilers/jvm-profiler-1.0.0.jar /tmp/
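As a sanity check on the staging step, something like the following could be run on a node after bootstrap (the path is the one used by the script above; this is a generic check, not part of the jvm-profiler docs):

```shell
JAR=/tmp/jvm-profiler-1.0.0.jar   # path the bootstrap script copies the agent to

# A jar is a zip archive, so a correctly staged jar starts with the bytes "PK"
if [ -f "$JAR" ] && [ "$(head -c 2 "$JAR")" = "PK" ]; then
  echo "agent jar staged OK at $JAR"
else
  echo "agent jar missing or corrupt at $JAR"
fi
```

If the jar is missing on any node, the `-javaagent` option would make that JVM fail to start rather than silently skip profiling, so this mainly rules out a partial or failed bootstrap.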

I submit my EMR step as follows:

       spark-submit --deploy-mode cluster --master=yarn ......(other parameters)......... \
         --conf spark.jars=/tmp/jvm-profiler-1.0.0.jar \
         --conf spark.driver.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000 \
         --conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000
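One detail worth double-checking while debugging (my assumption, not something the jvm-profiler docs spell out for EMR): `-javaagent:` resolves a relative jar path against the JVM's working directory, and since the bootstrap stages the jar at `/tmp/` on every node, an absolute path is a plausible variant to try. Note also that these command-line `--conf` values replace the `extraJavaOptions` set in the `spark-defaults` classification above, so the G1GC flags would need to be repeated here to keep them:

```shell
# Sketch only: same submit command, but with an absolute agent path matching
# where the bootstrap script placed the jar on each node
spark-submit --deploy-mode cluster --master=yarn ......(other parameters)......... \
  --conf spark.driver.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000 \
  --conf spark.executor.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000
```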

But I am unable to see any profiling-related output in the logs (I checked both the stdout and stderr logs for all containers). Is the parameter ignored? Am I missing something? Is there anything else I could check to see why this parameter is being ignored?
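Two generic YARN-side checks that can show whether the agent was loaded at all (the application id below is a placeholder, and the grep pattern assumes `ConsoleOutputReporter` prefixes its stdout lines with its class name, which is my reading of the reporter rather than a documented guarantee):

```shell
# While the job runs, on any core node: check whether the executor JVMs
# actually have the agent flag on their command line
ps -ef | grep -- '-javaagent' | grep -v grep

# After the job finishes (log aggregation is enabled in yarn-site above),
# grep the aggregated container logs for the reporter's output; replace the
# application id with the real one from the YARN ResourceManager UI
yarn logs -applicationId application_1234567890123_0001 | grep ConsoleOutputReporter
```

If `ps` shows no `-javaagent` on the executor JVMs, the option never reached the launch command; if the flag is present but the grep over the logs finds nothing, the agent is loading but its output is going somewhere other than the container stdout/stderr.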

sashasami03 commented 4 years ago

I have asked the same question on Stack Overflow: https://stackoverflow.com/questions/59233394/how-to-pass-javaagent-to-emr-spark-applications