xuchunlai opened this issue 1 year ago
Hello @xuchunlai, Thanks for finding the time to report the issue! We really appreciate the community's efforts to improve Apache Kyuubi.
the final status of the pod is either Completed or NotReady
After the Spark app exits, Completed is the expected status of the driver pod.
When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.
Ref: https://spark.apache.org/docs/latest/running-on-kubernetes.html#how-it-works
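As a concrete illustration of the behavior described in the Spark docs, the completed driver pod can be inspected and cleaned up with kubectl. This is a sketch: the namespace is a placeholder, and it relies on Spark on Kubernetes labeling driver pods with `spark-role=driver`.

```shell
# Assumption: engine pods run in the "spark-apps" namespace; adjust to yours.
NAMESPACE=spark-apps

# List driver pods and their phases; a finished app shows STATUS "Completed".
kubectl get pods -n "$NAMESPACE" -l spark-role=driver

# A completed driver pod is not deleted automatically; clean it up manually.
kubectl delete pods -n "$NAMESPACE" -l spark-role=driver \
  --field-selector=status.phase=Succeeded
```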
For the NotReady case, please provide the corresponding Kyuubi Server logs.
After the Spark app exits, Completed is the expected status of the driver pod.
Kyuubi Server is deployed on physical machines. The executor pods terminate normally when using the configuration kyuubi.engine.share.level=USER.
OK, so your issue can be restated as "the executor pod should terminate but actually does not".
Can you log into the executor pod and use jstack to check the thread stacks, to find which thread is blocking the JVM shutdown?
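A minimal sketch of that diagnostic, assuming a reachable cluster; the namespace and pod name are placeholders, and the JVM PID must be taken from the `jps` output:

```shell
# Placeholders: replace with your actual namespace and stuck executor pod.
NAMESPACE=spark-apps
POD=my-executor-pod

# Find the executor pods (Spark on K8s labels them spark-role=executor).
kubectl get pods -n "$NAMESPACE" -l spark-role=executor

# List JVM processes inside the pod, then dump all thread stacks for the
# executor PID; look for non-daemon threads that block the JVM shutdown.
kubectl exec -n "$NAMESPACE" "$POD" -- jps
kubectl exec -n "$NAMESPACE" "$POD" -- jstack <pid>
```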
Code of Conduct
Search before asking
Describe the bug
Executor pods can be terminated when using the configuration kyuubi.engine.share.level=USER. But executor pods cannot be terminated when using the configuration kyuubi.engine.share.level=CONNECTION; the final status of the pod is either Completed or NotReady.
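For context, the share level is set in the Kyuubi configuration; a minimal sketch (the idle-timeout value is illustrative, not from this report):

```properties
# kyuubi-defaults.conf (sketch)
# CONNECTION: one engine per connection, expected to terminate when it closes.
# USER: one engine shared by a user's sessions, terminated after idle timeout.
kyuubi.engine.share.level=CONNECTION
kyuubi.session.engine.idle.timeout=PT30M
```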
Affects Version(s)
1.6.0
Kyuubi Server Log Output
Kyuubi Engine Log Output
Kyuubi Server Configurations
Kyuubi Engine Configurations
Additional context
Kubernetes versions are 1.15 and 1.21; Spark version is 3.3.0; Kyuubi Server is deployed on physical machines.
Are you willing to submit PR?