linkedin / dr-elephant

Dr. Elephant is a job and flow-level performance monitoring and tuning tool for Apache Hadoop and Apache Spark
Apache License 2.0

Support for Spark 2.3/2.4 in Dr.Elephant #683

Open ShubhamGupta29 opened 4 years ago

ShubhamGupta29 commented 4 years ago

Currently, Dr.Elephant supports Spark only up to 2.2.3. We need to support the more recent versions of Spark (at least 2.3 and 2.4). This needs several changes; I will update the issue as work proceeds.

mareksimunek commented 4 years ago

@ShubhamGupta29 Just FYI: I managed to run https://github.com/songgane/dr-elephant/tree/feature/support_spark_2.x and it worked with Spark 2.3+. There are a couple of tests that need to be fixed (I skipped them).

I have a couple of questions:

  1. It doesn't show executor memory used. (screenshot)

  2. Same for GC statistics. (screenshot)

Is it because these metrics are not available in SHS 2.3, or is some work needed to see them?

Anyway, glad to see progress on Spark 2.3+.

ShubhamGupta29 commented 4 years ago

@mareksimunek I will surely go through your changes. Just some questions:

For your heuristics-related issues, I need to check how you are retrieving and transforming the data.

mareksimunek commented 4 years ago
  1. I used Hadoop 2.3.0 and Spark 2.1.2: https://github.com/songgane/dr-elephant/blob/feature/support_spark_2.x/compile.conf

I tried rebasing onto the current master, but with the higher versions there were more failing tests, so I stuck with 2.1 and skipped fewer tests :).

  2. I am fetching with:
    <fetcher>
    <applicationtype>spark</applicationtype>
    <classname>com.linkedin.drelephant.spark.fetchers.SparkFetcher</classname>
    <params>
      <use_rest_for_eventlogs>true</use_rest_for_eventlogs>
      <should_process_logs_locally>true</should_process_logs_locally>
      <event_log_location_uri>/spark2-history/</event_log_location_uri>
      <spark_log_ext>.snappy</spark_log_ext>
    </params>
    </fetcher>
ShubhamGupta29 commented 4 years ago

@mareksimunek The issue you mentioned ("It doesn't show executor memory used"): is it for every job, or is the value available for some jobs?

mareksimunek commented 4 years ago

@ShubhamGupta29 Every Spark job. I suspect it's because of this: https://github.com/songgane/dr-elephant/blame/feature/support_spark_2.x/app/org/apache/spark/deploy/history/SparkDataCollection.scala#L178

That info.memUsed is only available while the job is running, and I am not sure whether Dr.Elephant fetches this information then. Once the job has ended, the information is gone: when I check the SHS for a completed job, it shows Peak memory: 0 everywhere.

Spark History Server version: 2.3.0.2.6.5.0-292 (it's 2.3 with some HDP patches)
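
A quick way to observe this behavior is to poll the /executors endpoint while the application runs; below is a minimal standalone Scala sketch, where the host, port, and application id are placeholders taken from this thread and the regex parsing is only there to avoid a JSON library dependency:

    import scala.io.Source

    // Polls a (hypothetical) SHS REST endpoint and tracks the peak of the
    // summed "memoryUsed" values across all executors.
    object PeakMemoryPoller {
      private val url =
        "http://someHost:18081/api/v1/applications/application_1587409317223_1104/1/executors"
      // Crude extraction of every "memoryUsed" value from the JSON body.
      private val MemUsed = """"memoryUsed"\s*:\s*(\d+)""".r

      def main(args: Array[String]): Unit = {
        var peak = 0L
        while (true) {
          val body  = Source.fromURL(url).mkString
          val total = MemUsed.findAllMatchIn(body).map(_.group(1).toLong).sum
          peak = math.max(peak, total)
          println(s"memoryUsed now: $total bytes, peak so far: $peak bytes")
          Thread.sleep(10000) // poll every 10 s while the application runs
        }
      }
    }

If the value is zero even while the job runs (as reported later in this thread), then the problem is on the event-log/SHS side rather than in the fetch timing.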

xglv1985 commented 4 years ago


@mareksimunek Hello buddy, I ran into the same case as yours: mem/executor/storage info cannot be fetched from the SHS once the job ends. Has your problem been solved? Thanks.

ShubhamGupta29 commented 4 years ago

@mareksimunek @xglv1985 I need some help from you guys in debugging the issue: can you confirm whether the value of memoryUsed is non-zero in the response from the REST API endpoint /executors?

mareksimunek commented 4 years ago

@ShubhamGupta29 Yep, it's zero. Checked http://someHost:18081/api/v1/applications/application_1587409317223_1104/1/executors

[ {
    "id" : "driver",
    "hostPort" : "someHost:37121",
    "isActive" : true,
    "rddBlocks" : 0,
    "memoryUsed" : 0,
    "diskUsed" : 0,
    "totalCores" : 0,
    "maxTasks" : 0,
    "activeTasks" : 0,
    "failedTasks" : 0,
    "completedTasks" : 0,
    "totalTasks" : 0,
    "totalDuration" : 0,
    "totalGCTime" : 0,
    "totalInputBytes" : 0,
    "totalShuffleRead" : 0,
    "totalShuffleWrite" : 0,
    "isBlacklisted" : false,
    "maxMemory" : 407057203,
    "addTime" : "2020-04-25T21:08:51.911GMT",
    "executorLogs" : {
      "stdout" : "http://someHost:8042/node/containerlogs/container_e54_1587409317223_1104_01_000001/fulltext/stdout?start=-4096",
      "stderr" : "http://someHost:8042/node/containerlogs/container_e54_1587409317223_1104_01_000001/fulltext/stderr?start=-4096"
    },
    "memoryMetrics" : {
      "usedOnHeapStorageMemory" : 0,
      "usedOffHeapStorageMemory" : 0,
      "totalOnHeapStorageMemory" : 407057203,
      "totalOffHeapStorageMemory" : 0
    }
  }, {
    "id" : "9",
    "hostPort" : "someHost2.dev.dszn.cz:33108",
    "isActive" : true,
    "rddBlocks" : 0,
    "memoryUsed" : 0,
    "diskUsed" : 0,
    "totalCores" : 3,
    "maxTasks" : 3,
    "activeTasks" : 0,
    "failedTasks" : 0,
    "completedTasks" : 56,
    "totalTasks" : 56,
    "totalDuration" : 846816,
    "totalGCTime" : 31893,
    "totalInputBytes" : 0,
    "totalShuffleRead" : 661719258,
    "totalShuffleWrite" : 747129542,
    "isBlacklisted" : false,
    "maxMemory" : 3032481792,
    "addTime" : "2020-04-25T21:09:08.100GMT",
    "executorLogs" : {
      "stdout" : "http://someHost2.dev.dszn.cz:8042/node/containerlogs/container_e54_1587409317223_1104_01_000011/fulltext/stdout?start=-4096",
      "stderr" : "http://someHost2.dev.dszn.cz:8042/node/containerlogs/container_e54_1587409317223_1104_01_000011/fulltext/stderr?start=-4096"
    },
    "memoryMetrics" : {
      "usedOnHeapStorageMemory" : 0,
      "usedOffHeapStorageMemory" : 0,
      "totalOnHeapStorageMemory" : 3032481792,
      "totalOffHeapStorageMemory" : 0
    }
  }, ...

Correction: it's reporting zero memoryUsed even for a running job through the SHS REST API. Should I set something in spark.executor.extraJavaOptions to expose these stats?

For MapReduce it gets the memory stats from this setting, am I right? mapreduce.task.profile.params=-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s

xglv1985 commented 4 years ago


@ShubhamGupta29 Yes, I also confirmed it. The memoryUsed field is 0 in the response JSON.

ShubhamGupta29 commented 4 years ago

@mareksimunek @xglv1985 Thanks for the prompt response. I am able to support Spark 2.3 and will make the changes public soon. I am still debugging this memoryUsed = 0 issue, as the problem persists with Spark 2.3, and will be in contact with you. One more query: can you paste here the values you are getting for these metrics in the /executors API response? Metrics: "memoryMetrics" : { "usedOnHeapStorageMemory", "usedOffHeapStorageMemory", "totalOnHeapStorageMemory", "totalOffHeapStorageMemory" }

xglv1985 commented 4 years ago


sure:

"memoryMetrics" : { "usedOnHeapStorageMemory" : 0, "usedOffHeapStorageMemory" : 0, "totalOnHeapStorageMemory" : 1099746508, "totalOffHeapStorageMemory" : 4000000000 }

xglv1985 commented 4 years ago


By the way, @ShubhamGupta29, I use Dr.Elephant to analyze Spark 2.3 event logs, and every job's analysis result looks like the following. Except for "Spark Configuration", every field is empty. Is this normal? Thanks!

Spark Configuration Severity: Moderate

spark.application.duration | -1587978750 Seconds
spark.driver.cores | 4
spark.driver.memory | 4 GB
spark.dynamicAllocation.enabled | false
spark.executor.cores | 4
spark.executor.instances | 20
spark.executor.memory | 4 GB
spark.shuffle.service.enabled | false (Spark shuffle service is not enabled.)
spark.yarn.driver.memoryOverhead | 0 B
spark.yarn.executor.memoryOverhead | 0 B

Spark Executor Metrics Severity: None

Executor input bytes distribution | min: 0 B, p25: 0 B, median: 0 B, p75: 0 B, max: 0 B
Executor shuffle read bytes distribution | min: 0 B, p25: 0 B, median: 0 B, p75: 0 B, max: 0 B
Executor shuffle write bytes distribution | min: 0 B, p25: 0 B, median: 0 B, p75: 0 B, max: 0 B
Executor storage memory used distribution | min: 0 B, p25: 0 B, median: 0 B, p75: 0 B, max: 0 B
Executor storage memory utilization rate | 0.000
Executor task time distribution | min: 0 sec, p25: 0 sec, median: 0 sec, p75: 0 sec, max: 0 sec
Executor task time sum | 0
Total executor storage memory allocated | 1.96 GB
Total executor storage memory used | 0 B

Spark Job Metrics Severity: None

Spark completed jobs count | 0
Spark failed jobs count | 0
Spark failed jobs list |
Spark job failure rate | 0.000
Spark jobs with high task failure rates |

Spark Stage Metrics Severity: None

Spark completed stages count | 0
Spark failed stages count | 0
Spark stage failure rate | 0.000
Spark stages with high task failure rates |
Spark stages with long average executor runtimes |

Executor GC Severity: None

GC time to Executor Run time ratio | NaN
Total Executor Runtime | 0
Total GC time | 0

ShubhamGupta29 commented 4 years ago

@xglv1985 No, this is not normal. Can you tell me which branch or source code you are using?

xglv1985 commented 4 years ago


@ShubhamGupta29 dr-elephant_987

ShubhamGupta29 commented 4 years ago

Can you provide the link? linkedin/dr-elephant doesn't have any branch named dr-elephant_987.

xglv1985 commented 4 years ago


@ShubhamGupta29 I forked my own dr-elephant from linkedin/dr-elephant master. I only put the SparkFetcher in my conf XML file, with its params set to true. Is there any other configuration that may cause these empty fields? I will debug more deeply. Thanks.

mareksimunek commented 4 years ago

@xglv1985 If you are using the current master, you can't see any metrics from Spark 2.3+. More in https://github.com/linkedin/dr-elephant/issues/389. Check your logs; there will be some parsing error. That's why I am using the fork mentioned above.

That's why there is ongoing work from @ShubhamGupta29 to support this version.

Metrics: "memoryMetrics" : { "usedOnHeapStorageMemory" "usedOffHeapStorageMemory" "totalOnHeapStorageMemory" "totalOffHeapStorageMemory" }

Thanks for update @ShubhamGupta29 They are already included in my post https://github.com/linkedin/dr-elephant/issues/683#issuecomment-619609445

xglv1985 commented 4 years ago


@mareksimunek Thanks very much; I see the same problem as mine in the link you gave. Let's look forward to the updated Dr.Elephant from @ShubhamGupta29.

ShubhamGupta29 commented 4 years ago

@mareksimunek @xglv1985, I have made the changes for Spark 2.3 (these are the foundational changes; I will fix the tests and do other cleanups in some time). If possible, can you guys try this personal branch? It has the changes for Spark 2.3.

mareksimunek commented 4 years ago

@ShubhamGupta29 Nice, ShubhamGupta29/test23 works like a charm. It now even shows GC stats. Executor memory used is still not showing, but I suppose if it's not available in the SHS it won't be seen in Dr.Elephant. (Do you have any news on whether there is something to do to make it available in the SHS?)

ShubhamGupta29 commented 4 years ago

@mareksimunek I am working on the same; after going through Spark's code I got some idea of why this metric is not getting populated. For now I am testing the changes and will soon add them to the branch, and I am also trying to support Spark 2.4. @mareksimunek and @xglv1985, can you guys fill in the survey in #685? It would be helpful for us to make Dr.Elephant more friendly to the open-source community.

xglv1985 commented 4 years ago

OK, I saw it yesterday and I will fill in the survey today.

ShubhamGupta29 commented 4 years ago

@xglv1985 Did you get a chance to use the changes done for Spark 2.3? Feedback on the changes will make it easier to start the effort of merging them into the master branch for users' ease.

xglv1985 commented 4 years ago

Sure, I will try your personal branch and will give you feedback between May 1st and 5th.

ShubhamGupta29 commented 4 years ago

@mareksimunek and @xglv1985 I have made some more changes for Spark 2.3 support; kindly try this branch whenever you have time. Also, for the memory heuristics there is a change needed in the Spark conf: add spark.eventLog.logBlockUpdates.enabled if it is not there already and set its value to true.
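
For reference, this is a standard Spark configuration property, so a minimal way to set it is in spark-defaults.conf (or per job via --conf on spark-submit):

    # spark-defaults.conf -- log block updates into the event log
    spark.eventLog.logBlockUpdates.enabled  true

One caveat from the Spark documentation: logging every block update can make event logs considerably larger.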

mareksimunek commented 4 years ago

@ShubhamGupta29 Hi, thanks for the update and sorry for the late response.

ShubhamGupta29 commented 4 years ago

@mareksimunek thanks for the reply and testing out the provided version.

Also, let me know about any other issues you are facing or any suggestions you have for Dr.Elephant. I hope Dr.Elephant is proving useful for you and your team.

mareksimunek commented 4 years ago

@ShubhamGupta29

So far it seems to be working like a charm. I am trying to push it through in our team (it's now running on a small testing cluster), and with working Spark metrics it will be much easier to get approval to work on it. Thanks for the progress.

Question: are you using one Dr.Elephant installation per cluster, or do you have one Dr.Elephant analyzing multiple clusters?

ShubhamGupta29 commented 4 years ago

The current Dr.Elephant allows the analysis of jobs only from a single RM (a single cluster).

xglv1985 commented 4 years ago


@ShubhamGupta29 First, sorry for the late response. Thanks to your branch feature_spark2.3, I now have it up and running. This is my screen capture: (screenshot)

The good news is that it has more dimensions than past versions of Dr.Elephant, but the detail of each dimension has disappeared; I will double-check the configuration.

xglv1985 commented 4 years ago


I have found the reason why the details disappeared. The feature_2.3 branch uses org.avaje.ebeanorm.avaje-ebeanorm-3.2.4.jar and org.avaje.ebeanorm.avaje-ebeanorm-agent-3.2.2.jar, which leads to the missing details. I replaced these two jars with avaje-ebeanorm-3.2.2.jar and avaje-ebeanorm-agent-3.2.1.jar, which the old version of Dr.Elephant depends on, and then the details came back :)
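
An alternative to swapping the jars by hand might be to pin those versions in the sbt build. This is an untested sketch, assuming Dr.Elephant's build honors dependencyOverrides:

    // build.sbt -- pin the ebean artifacts to the versions
    // the detail pages are known to work with
    dependencyOverrides += "org.avaje.ebeanorm" % "avaje-ebeanorm" % "3.2.2"
    dependencyOverrides += "org.avaje.ebeanorm" % "avaje-ebeanorm-agent" % "3.2.1"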

ShubhamGupta29 commented 4 years ago


@xglv1985 Did you replace these in the classpath?

xglv1985 commented 4 years ago


Yes, I did.

mareksimunek commented 4 years ago

@ShubhamGupta29 Hi, did you have time to look into PeakExecutionMemory?

ShubhamGupta29 commented 4 years ago

@mareksimunek I didn't get a chance to look into it, but I surely will over the weekend.

mareksimunek commented 4 years ago

@ShubhamGupta29 I know you are probably busy, but I hope you will find a way to look at it :).

ShubhamGupta29 commented 4 years ago

Hi @mareksimunek, sorry, I was caught up in some other tasks. I am working on an approximate value for PeakExecutionMemory, as after a lot of digging I learned that there is no way of getting this value without making changes to the Spark source code. Possibly in the coming week I will push the changes. Also, let me know if Dr.Elephant's support for Spark 2.3 is working fine.
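
For the storage-memory side of this, one conceivable approximation (not necessarily what was implemented here, and it does not capture execution memory) is to fold the block-update events, the same events that spark.eventLog.logBlockUpdates.enabled writes to the event log, into a per-executor running peak. A sketch as a live SparkListener:

    import org.apache.spark.scheduler.{SparkListener, SparkListenerBlockUpdated}
    import scala.collection.mutable

    // Tracks, per executor, the total in-memory size of its cached blocks
    // and remembers the peak of that total.
    class PeakStorageMemoryListener extends SparkListener {
      private val blockMem       = mutable.Map.empty[(String, String), Long] // (executorId, blockId) -> bytes
      private val peakByExecutor = mutable.Map.empty[String, Long]

      override def onBlockUpdated(event: SparkListenerBlockUpdated): Unit = synchronized {
        val info = event.blockUpdatedInfo
        val exec = info.blockManagerId.executorId
        // A dropped block is reported with a storage level that no longer uses memory.
        val size = if (info.storageLevel.useMemory) info.memSize else 0L
        blockMem((exec, info.blockId.name)) = size
        val current = blockMem.iterator.collect { case ((e, _), s) if e == exec => s }.sum
        peakByExecutor(exec) = math.max(peakByExecutor.getOrElse(exec, 0L), current)
      }

      def peaks: Map[String, Long] = synchronized(peakByExecutor.toMap)
    }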

mareksimunek commented 4 years ago

@ShubhamGupta29 Support for Spark 2.3 works fine :)).

RaphaelDucay commented 4 years ago

@ShubhamGupta29 First of all, thanks for all your precious work helping adapt Dr.Elephant to newer Spark versions. I have a Hortonworks cluster HDP 2.6.5 (== Hadoop 2.7.3) running:

I have a few questions:

Thanks a lot in advance for your time!

ShubhamGupta29 commented 4 years ago

Hi @RaphaelDucay, I will try to answer your queries in the respective order:

RaphaelDucay commented 4 years ago

@ShubhamGupta29 Thanks a lot for the feedback! We are doing a POC on both.

I will keep you updated!

RaphaelDucay commented 4 years ago

@ShubhamGupta29 OK, so we made it work for Spark 1.6.3. For Spark 2.x we are facing issues. Our Spark 2 version is 2.3.0, and we are using (as you suggested) this branch: https://github.com/ShubhamGupta29/dr-elephant/tree/feature_spark2.3. Here are the logs of our failing compilation attempt:

[warn] /dr-elephant-sources/app/org/apache/spark/deploy/history/SparkDataCollection.scala:124: abstract type pattern T is unchecked since it is eliminated by erasure
[warn]   seq.foreach { case (item: T) => list.add(item) }
[error] /dr-elephant-sources/app/org/apache/spark/status/CustomAppStatusListener.scala:628: value getPartitions is not a member of org.apache.spark.status.LiveRDD
[error]   liveRDD.getPartitions().foreach { case (_, part) =>
[error] /dr-elephant-sources/app/org/apache/spark/status/CustomAppStatusListener.scala:629: value executors is not a member of Any
[error]   part.executors.foreach { executorId =>
[error] /dr-elephant-sources/app/org/apache/spark/status/CustomAppStatusListener.scala:639: value getDistributions is not a member of org.apache.spark.status.LiveRDD
[error]   liveRDD.getDistributions().foreach { case (executorId, rddDist) =>
[error] /dr-elephant-sources/app/org/apache/spark/status/CustomAppStatusListener.scala:640: type mismatch; found: Any, required: String
[error]   liveExecutors.get(executorId).foreach { exec =>
[warn] one warning found
[error] four errors found
[error] (compile:compileIncremental) Compilation failed

Do you have an idea of how to fix this?

Thanks in advance!

yanxiaole commented 4 years ago


Hi @ShubhamGupta29, @xglv1985, is there a more suitable way? Right now it seems the jars have to be renamed...

shagneet330 commented 3 years ago

Hi @ShubhamGupta29, @xglv1985, I am facing the same issue: not able to double-click on these metric dimensions. I am using the feature_2.3 branch. How can this be corrected?

ShubhamGupta29 commented 3 years ago

Hi @shagneet330, the fix is provided in the comment above: replace avaje-ebeanorm-3.2.4.jar and avaje-ebeanorm-agent-3.2.2.jar on the classpath with avaje-ebeanorm-3.2.2.jar and avaje-ebeanorm-agent-3.2.1.jar, which the old version of Dr.Elephant depends on.

But I would suggest using the latest Ember UI, which should be available if your compilation went well. You can access the new UI by adding new# after the Dr.Elephant endpoint, e.g. http://hostname:8080/new#. If you can see a UI like the one below, then there should be no issue. (screenshot)

shagneet330 commented 3 years ago

@ShubhamGupta29 I tried http://hostname:8080/new#, but it doesn't seem to load. Are these changes available in the feature_2.3 branch?

ShubhamGupta29 commented 3 years ago

It is available in feature_2.3, but your compilation must succeed with Ember, and NPM should be available on your system. You should see this kind of log during compilation:

    "############################################################################"
    "npm installation found, we'll compile with the new user interface"
    "############################################################################"

You need to monitor the compilation in case you face issues while compiling the new UI.

tcluzhe commented 3 years ago

@ShubhamGupta29 Would you help add the Spark application name in the new UI? (screenshots)

Ashnee1990 commented 3 years ago

@ShubhamGupta29, I am facing an issue while compiling the feature_spark2.3, master, and finalspark23 branches. Could you please help?

Below is the error.

[error] elephant_spark23/dr-elephant/app/org/apache/spark/status/CustomAppStatusListener.scala:628: value getPartitions is not a member of org.apache.spark.status.LiveRDD
[error]   liveRDD.getPartitions().foreach { case (_, part) =>
[error] elephant_spark23/dr-elephant/app/org/apache/spark/status/CustomAppStatusListener.scala:629: value executors is not a member of Any
[error]   part.executors.foreach { executorId =>
[error] elephant_spark23/dr-elephant/app/org/apache/spark/status/CustomAppStatusListener.scala:639: value getDistributions is not a member of org.apache.spark.status.LiveRDD
[error]   liveRDD.getDistributions().foreach { case (executorId, rddDist) =>
[error] elephant_spark23/dr-elephant/app/org/apache/spark/status/CustomAppStatusListener.scala:640: type mismatch; found: Any, required: String
[error]   liveExecutors.get(executorId).foreach { exec =>
[warn] one warning found
[error] four errors found
[error] (compile:compileIncremental) Compilation failed

Ashnee1990 commented 3 years ago

@ShubhamGupta29 @xglv1985,

Have we resolved Peak Memory Used? I am still not able to see it. (screenshot)