ayush-chauhan opened this issue 5 years ago
@ayush-chauhan Sorry to hear that. You can use sparklens in offline mode using the event log history files.
./bin/spark-submit --packages qubole:sparklens:0.2.0-s_2.11 \
--class com.qubole.sparklens.app.ReporterApp qubole-dummy-arg \
/tmp/spark-history/application_1520833877547_0285.lz4 source=history
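For completeness: offline mode can only work if the event log was written in the first place, i.e. event logging was enabled when the application originally ran. A minimal sketch of that prerequisite configuration; the log directory and compression setting here are assumptions inferred from the .lz4 file path above, not settings confirmed in this thread:

```scala
import org.apache.spark.sql.SparkSession

// Event logging must be on while the app runs, or there is no
// history file for offline Sparklens to replay later.
val spark = SparkSession.builder()
  .appName("my-app") // hypothetical app name
  .config("spark.eventLog.enabled", "true")
  .config("spark.eventLog.dir", "/tmp/spark-history") // assumed directory
  .config("spark.eventLog.compress", "true")          // assumed, given the .lz4 extension
  .getOrCreate()
```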
If it is not too big, can you share the event log file? It will help in understanding the root cause.
Sorry, the issue was in my code. I was using multithreading to merge incremental data in parallel, and the problem went away after I corrected my code.
I have one question though: why are Sparklens metrics not useful in the case of multithreading?
@ayush-chauhan The way sparklens works right now is that it computes the time spent in the driver by subtracting the time spent in job processing from the total application duration. With multithreading, it is hard to define the notion of driver time. Also, multithreading in the driver is usually accompanied by use of the fair scheduler in Spark, and we don't have the ability to simulate the fair scheduler right now. The short answer is that it becomes a lot harder to understand the application, as well as to simulate it, when we add these additional degrees of freedom.
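To make the subtraction concrete, here is a simplified sketch of the idea (not Sparklens's actual implementation): with sequential jobs, whatever wall-clock time is not covered by a job is driver time, but once jobs run concurrently their durations overlap and the subtraction stops meaning anything.

```scala
// Each job is a (startTime, endTime) interval in milliseconds (illustrative data).
case class JobSpan(start: Long, end: Long)

def driverTime(appStart: Long, appEnd: Long, jobs: Seq[JobSpan]): Long = {
  // Sum of per-job durations; valid as "non-driver time" only if jobs never overlap.
  val jobTime = jobs.map(j => j.end - j.start).sum
  (appEnd - appStart) - jobTime
}

val sequential = Seq(JobSpan(0L, 40L), JobSpan(50L, 90L))
val concurrent = Seq(JobSpan(0L, 80L), JobSpan(10L, 90L))

driverTime(0L, 100L, sequential) // 20: the gaps between jobs are driver time
driverTime(0L, 100L, concurrent) // -60: overlapping jobs double-count, so the result is meaningless
```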
I am getting an error that leads to my job failure. The exception leads to this error: org.apache.spark.SparkException: Job 33 canceled because SparkContext was shut down
This is the command I am using:
spark-submit --jars /home/hadoop/ayush/sparklens_2.11-0.2.0.jar \
--conf spark.extraListeners=com.qubole.sparklens.QuboleJobListener \
--class com.oyo.spark.application.MergeIncrementalData \
--master yarn --deploy-mode cluster --queue ingestion \
/home/hadoop/jp/application-0.0.1-SNAPSHOT/application-0.0.1-SNAPSHOT-jar-with-dependencies.jar prod ingestiondb.bookings
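For reference, the same listener can also be attached programmatically instead of via --conf on the command line. A minimal sketch, assuming a SparkSession-based application; the Sparklens jar must still be on the driver classpath (e.g. via --jars, as above) for the listener class to load:

```scala
import org.apache.spark.sql.SparkSession

// Registers the Sparklens listener when the session is created,
// equivalent to passing --conf spark.extraListeners=... to spark-submit.
val spark = SparkSession.builder()
  .appName("MergeIncrementalData")
  .config("spark.extraListeners", "com.qubole.sparklens.QuboleJobListener")
  .getOrCreate()
```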