uber-common / jvm-profiler

JVM Profiler Sending Metrics to Kafka, Console Output or Custom Reporter

How to build the command when using jvm-profiler with Spark to send the executor's stacktrace profiling and ioProfiling metrics #37

Closed HurleyWu closed 5 years ago

HurleyWu commented 5 years ago

I'm trying to use jvm-profiler to monitor Spark executors on YARN, and I find jvm-profiler very helpful. But I still only receive the JVM's memory debug info when I execute the command below:

--conf spark.jars=hdfs://SERVICE-HADOOP-ff0917a859de41be8d3371e3f64b7b9f/lib/jvm-profiler-1.0.0.jar \
--conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.KafkaOutputReporter,sampleInterval=3000,brokerList=awaken122:9092,topicPrefix=profiler_

Actually, I want to get stacktrace profiling output so I can build a FlameGraph. Could you help me build the right command?
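For reference, jvm-profiler's README lists an ioProfiling argument alongside sampleInterval in the javaagent argument string. Assuming that flag behaves as documented, a sketch of the same conf with IO profiling enabled (the HDFS path and Kafka broker below are the poster's own values, kept as-is) might look like:

```shell
# Sketch only: the original conf with ioProfiling=true appended to the
# javaagent argument list, per jvm-profiler's documented arguments.
--conf spark.jars=hdfs://SERVICE-HADOOP-ff0917a859de41be8d3371e3f64b7b9f/lib/jvm-profiler-1.0.0.jar \
--conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.KafkaOutputReporter,sampleInterval=3000,ioProfiling=true,brokerList=awaken122:9092,topicPrefix=profiler_
```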

felixcheung commented 5 years ago

hi - it looks like you already have sampleInterval set - what's the problem you are running into with the flamegraph?

g1thubhub commented 5 years ago

@felixcheung @HurleyWu I blogged about analyzing Spark jobs using Uber's JVM profiler, among other tools, and created a helper library: https://g1thubhub.github.io/4-bigdata-riddles https://github.com/g1thubhub/phil_stopwatch

A sample command for collecting stacktraces from the Spark executors and the driver was:

spark-submit --deploy-mode cluster \ 
--class your.Class \
--conf spark.jars=s3://your/bucket/jvm-profiler-1.0.0.jar \
--conf spark.driver.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=sampleInterval=2000,metricInterval=1000 \
--conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=sampleInterval=2000,metricInterval=1000 \
s3://path/to/your/project.jar 

... and then I used my library (phil_stopwatch) to extract the stacktraces from the standard output of the Spark executors.
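To turn those sampled stacktraces into a FlameGraph, they need to be aggregated into the "folded" format consumed by Brendan Gregg's flamegraph.pl (one `frame1;frame2;... count` line per unique stack). Here is a minimal Python sketch; the field names `stacktrace` and `count`, and the innermost-first frame order, are assumptions about the reporter's JSON output, so check your actual records before relying on it:

```python
import json
from collections import defaultdict

def fold_stacks(json_lines):
    """Aggregate stacktrace records into flamegraph.pl's folded format.

    Assumes each line is a JSON object with a 'stacktrace' list
    (ordered innermost frame first, as Java's getStackTrace returns)
    and a sample 'count' field -- verify against your reporter's output.
    """
    totals = defaultdict(int)
    for line in json_lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        frames = record.get("stacktrace") or []
        if not frames:
            continue
        # flamegraph.pl expects root-first order, so reverse the stack
        key = ";".join(reversed(frames))
        totals[key] += int(record.get("count", 1))
    return ["%s %d" % (stack, n) for stack, n in sorted(totals.items())]

if __name__ == "__main__":
    # Hypothetical records in the assumed shape, for illustration only
    sample = [
        '{"stacktrace": ["MyTask.compute", "java.lang.Thread.run"], "count": 5}',
        '{"stacktrace": ["MyTask.compute", "java.lang.Thread.run"], "count": 3}',
    ]
    for folded in fold_stacks(sample):
        print(folded)
```

The resulting lines can be piped straight into flamegraph.pl to render the SVG.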