Closed lishiyucn closed 3 years ago
please update the title in English
Please pay attention to the question instead of the language.
After the Spark application runs, DolphinScheduler (ds) uses the application_id to query the application status on YARN. The application_id is stored in the app_link field of the t_ds_task_instance table. The task instance log in ds contains an entry for this YARN status query; please check it.
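As a rough illustration of the first half of that flow (not DolphinScheduler's actual code), the scheduler can recover the YARN application_id by scanning the Spark driver log for the standard `application_<clusterTimestamp>_<sequence>` pattern. The function name and sample log line below are assumptions for the sketch:

```python
import re
from typing import Optional

# YARN application ids follow the fixed format
# "application_<clusterTimestamp>_<sequenceNumber>".
APP_ID_PATTERN = re.compile(r"application_\d+_\d+")

def extract_application_id(log_text: str) -> Optional[str]:
    """Return the first YARN application id found in the log, or None."""
    match = APP_ID_PATTERN.search(log_text)
    return match.group(0) if match else None

# Sample line shaped like the Spark client output in this issue.
sample_log = "tracking URL: http://test-2:8088/proxy/application_1600308539958_0616/"
print(extract_application_id(sample_log))  # application_1600308539958_0616
```

An id recovered this way is what ends up in the app_link field and is later used to poll YARN.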
Since the title has not been updated in English for a long time, this issue will be closed. If the problem persists, please update the title in English and reopen it.
I also encountered the same problem. What should I do?
Environment: Spark 2.3.4, DolphinScheduler 1.3.1, HDP 3.1.0
Using DolphinScheduler 1.3.1 to schedule a Spark task in cluster mode, the task itself executes successfully, but DolphinScheduler shows the task as failed.
DolphinScheduler status: an × icon, meaning "failed".
Status in the Spark log:

```
[INFO] 2020-09-24 03:50:41.140 - [taskAppId=TASK-3-12-12]:[121] - ->
20/09/24 03:50:41 INFO Client:
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: 10.2.12.3
    ApplicationMaster RPC port: 0
    queue: default
    start time: 1600919407936
    final status: SUCCEEDED
    tracking URL: http://test-2:8088/proxy/application_1600308539958_0616/
    user: bigdata
20/09/24 03:50:41 INFO Client: Deleted staging directory hdfs://test-2:8020/user/bigdata/.sparkStaging/application_1600308539958_0616
20/09/24 03:50:41 INFO ShutdownHookManager: Shutdown hook called
20/09/24 03:50:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-0d9a9713-0990-4bde-90b2-110c3e3c0d8d
20/09/24 03:50:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-db51f544-ec7e-4321-aa3d-8772773aff3d
```
Status on the YARN cluster:
FINISHED | SUCCEEDED
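For reference, the YARN ResourceManager REST API (`GET /ws/v1/cluster/apps/<application_id>`) reports both a `state` and a `finalStatus` for each application, and a scheduler polling YARN would normally treat `FINISHED` + `SUCCEEDED` (as shown above) as success. The mapping below is a hedged sketch under that assumption, not DolphinScheduler's actual implementation, and the sample payload is trimmed:

```python
import json

def yarn_app_succeeded(response_body: str) -> bool:
    """Return True if a YARN RM REST response describes a successful app.

    A terminal state of FINISHED with finalStatus SUCCEEDED means success;
    FAILED/KILLED states or other finalStatus values mean failure.
    """
    app = json.loads(response_body)["app"]
    return app["state"] == "FINISHED" and app["finalStatus"] == "SUCCEEDED"

# Example payload shaped like a ResourceManager response (fields trimmed).
payload = json.dumps({
    "app": {
        "id": "application_1600308539958_0616",
        "state": "FINISHED",
        "finalStatus": "SUCCEEDED",
    }
})
print(yarn_app_succeeded(payload))  # True
```

If YARN reports SUCCEEDED but the task instance is still marked failed, the mismatch is worth tracing in the task instance log mentioned earlier, where the status query and its result are recorded.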