LucaCanali / sparkMeasure

This is the development repository for sparkMeasure, a tool and library designed for efficient analysis and troubleshooting of Apache Spark jobs. It focuses on easing the collection and examination of Spark metrics, making it a practical choice for both developers and data engineers.
Apache License 2.0

Flight Recorder Mode when Driver crashed from OOM #46

Closed maytasm closed 1 year ago

maytasm commented 1 year ago

Hi,

I am wondering if you have any workaround or recommendation for using flight recorder mode when the driver can crash from an OOM. When the driver crashes from an OOM, the listener never receives onApplicationEnd, so no metrics are written to the sink. Ideally, we would still want all the metrics accumulated by the jobs right up to the crash.

LucaCanali commented 1 year ago

Hi,

Flight recorder mode with file output currently has a very simple implementation: all metrics are buffered in driver memory and are only written out when the application finishes, which is indeed a problem if you have a driver crash in between.
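For reference, this is roughly how the file-based flight recorder mode is wired up, as a sketch based on the sparkMeasure docs (the package version and the output-path property are illustrative, check the docs for your version):

```shell
# File-based flight recorder mode: stage metrics are buffered in driver
# memory and serialized to the output file only at onApplicationEnd,
# so a driver OOM before that point loses them.
spark-submit \
  --packages ch.cern.sparkmeasure:spark-measure_2.12:0.23 \
  --conf spark.extraListeners=ch.cern.sparkmeasure.FlightRecorderStageMetrics \
  --conf spark.sparkmeasure.outputFilename="/tmp/stageMetrics.serialized" \
  myApp.jar
```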

You may want to have a look at the flight recorder mode with the InfluxDB sink and/or the Kafka sink to overcome those issues: https://github.com/LucaCanali/sparkMeasure/blob/master/docs/Flight_recorder_mode_InfluxDBSink.md https://github.com/LucaCanali/sparkMeasure/blob/master/docs/Flight_recorder_mode_KafkaSink.md
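As a sketch, the InfluxDB sink streams metrics out as listener events arrive rather than holding them until application end, so most data survives a driver crash (endpoint URL and version are placeholders; property names follow the linked docs but may differ across versions):

```shell
# Flight recorder mode with the InfluxDB sink: metrics are sent to
# InfluxDB as they are produced, not buffered until onApplicationEnd.
spark-submit \
  --packages ch.cern.sparkmeasure:spark-measure_2.12:0.23 \
  --conf spark.extraListeners=ch.cern.sparkmeasure.InfluxDBSink \
  --conf spark.sparkmeasure.influxdbURL="http://myInfluxDB:8086" \
  myApp.jar
```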

You may also want to check the Spark dashboard project: https://github.com/cerndb/spark-dashboard

Best,
Luca

maytasm commented 1 year ago

@LucaCanali Thanks for the reply. Even if the flight recorder is using the InfluxDB sink and/or Kafka sink, it would not be able to report metrics for the latest / on-going task/stage/executor at the time of the crash, right? One solution might be to have some sort of monitoring done outside the Spark driver? Do you know of anything like that, or is there any other workaround I might have missed? Thanks!

LucaCanali commented 1 year ago

If your goal is to investigate OOMs in the Spark driver, sparkMeasure is not the best tool. Spark's metrics instrumentation includes metrics for Java memory usage on the driver (and executors), which may be useful for your case, see https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Dashboard Or else you can try JVM instrumentation like JFR (Java Flight Recorder), or profiling tools like https://github.com/jvm-profiling-tools/async-profiler
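Both tools can be attached to the running driver JVM from outside Spark, which also addresses the question above about monitoring done outside the driver. A sketch, assuming you know the driver's process id (`<driver_pid>` is a placeholder):

```shell
# Attach async-profiler to the running driver for 60 seconds, sampling
# allocations, and write a flame graph (useful for tracking down what
# is filling the heap before an OOM).
./profiler.sh -d 60 -e alloc -f driver_alloc_flamegraph.html <driver_pid>

# Alternatively, start a JFR recording with jcmd (JDK 11+, or 8u262+);
# the recording file survives on disk even if captured before a crash.
jcmd <driver_pid> JFR.start name=driver filename=/tmp/driver.jfr duration=10m
```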