apache / kyuubi

Apache Kyuubi is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.
https://kyuubi.apache.org/
Apache License 2.0

[Bug][Helm] Vanilla Helm deployment #4629

Open tnk-dev opened 1 year ago

tnk-dev commented 1 year ago

Describe the bug

Installing the vanilla Helm chart on Kubernetes according to the docs,

helm install kyuubi ${KYUUBI_HOME}/charts/kyuubi -n kyuubi --create-namespace

kubectl port-forward svc/kyuubi-thrift-binary 10009:10009 -n kyuubi

and then connecting with the JDBC URL (jdbc:hive2://127.0.0.1:10009) from DBeaver or Beeline fails with the following error:

DBeaver logs

Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10009: org.apache.kyuubi.KyuubiSQLException: Timeout(180000 ms, you can modify kyuubi.session.engine.initialize.timeout to change it) to launched SPARK_SQL engine with /opt/kyuubi/externals/spark-3.3.2-bin-hadoop3/bin/spark-submit \
    --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
    --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
    --conf spark.kyuubi.client.ipAddress=192.168.178.43 \
    --conf spark.kyuubi.client.version=1.7.0 \
    --conf spark.kyuubi.engine.submit.time=1680000909614 \
    --conf spark.kyuubi.frontend.protocols=THRIFT_BINARY \
    --conf spark.kyuubi.ha.addresses=kyuubi-684c9ccb7-4zdsf:2181 \
    --conf spark.kyuubi.ha.engine.ref.id=f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.kyuubi.ha.namespace=/kyuubi_1.8.0-SNAPSHOT_USER_SPARK_SQL/anonymous/default \
    --conf spark.kyuubi.ha.zookeeper.auth.type=NONE \
    --conf spark.kyuubi.kubernetes.namespace=kyuubi \
    --conf spark.kyuubi.server.ipAddress=127.0.0.1 \
    --conf spark.kyuubi.session.connection.url=localhost:10009 \
    --conf spark.kyuubi.session.real.user=anonymous \
    --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.kubernetes.driver.label.kyuubi-unique-tag=f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.master=k8s://https://172.20.0.1:443 \
    --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
    --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
    --proxy-user anonymous /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.8.0-SNAPSHOT.jar. (false,Target Pod(tag: f7398328-da1e-4fc8-a34f-3f0481e40ca2) is not found, due to pod have been deleted or not created)
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:250)
    at org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient.tryWithLock(ZookeeperDiscoveryClient.scala:180)
    at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:171)
    at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:176)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:276)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:276)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2(KyuubiSessionImpl.scala:147)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2$adapted(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.session.KyuubiSession.handleSessionException(KyuubiSession.scala:49)
    at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-ac1644aa5038439092c7bdef0a00eedc%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.lambda$run$2(WatchConnectionManager.java:126)
    at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
    at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
    at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onFailure(OkHttpWebSocketImpl.java:66)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
 See more: /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.$anonfun$start$1(ProcBuilder.scala:229)
    at java.lang.Thread.run(Thread.java:750)
.
FYI: The last 10 line(s) of log are:
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:275)
    at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:264)
    at org.apache.kyuubi.engine.spark.SparkProcessBuilder.getError(SparkProcessBuilder.scala:37)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:253)
    ... 18 more
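Note the root `Forbidden` in the trace above: the executor watch hits `/api/v1/namespaces/default/pods`, i.e. the `default` namespace, while the chart creates its Role, RoleBinding, and ServiceAccount only in the `kyuubi` namespace. A minimal workaround sketch, assuming standard Spark-on-K8s submission semantics (the `spark.kubernetes.*` keys below are stock Spark configs, not something this chart sets), is to pin the engine pods to the chart's namespace and service account via `kyuubi-defaults.conf`:

```properties
# Hypothetical workaround sketch -- not part of the chart's defaults.
# Force Spark driver/executor pods into the namespace where the Helm
# chart created the Role/RoleBinding and ServiceAccount.
kyuubi.kubernetes.namespace=kyuubi
spark.kubernetes.namespace=kyuubi
spark.kubernetes.authenticate.driver.serviceAccountName=kyuubi
```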

K8s Pod Logs

2023-03-28 10:54:31.901 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[45f2f1fc-e5ba-490d-a509-473185e6b8d8]: PENDING_STATE -> RUNNING_STATE, statement:
LaunchEngine
2023-03-28 10:54:31.909 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2023-03-28 10:54:31.910 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=kyuubi-684c9ccb7-4zdsf:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@5d50b78d
2023-03-28 10:54:31.919 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-684c9ccb7-4zdsf/10.2.41.162:2181. Will not attempt to authenticate using SASL (unknown error)
2023-03-28 10:54:31.926 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.2.41.162:48604
2023-03-28 10:54:31.926 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to kyuubi-684c9ccb7-4zdsf/10.2.41.162:2181, initiating session
2023-03-28 10:54:31.928 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.2.41.162:48604
2023-03-28 10:54:31.943 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x100667d547a0002 with negotiated timeout 60000 for client /10.2.41.162:48604
2023-03-28 10:54:31.944 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-684c9ccb7-4zdsf/10.2.41.162:2181, sessionid = 0x100667d547a0002, negotiated timeout = 60000
2023-03-28 10:54:31.944 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2023-03-28 10:55:09.605 WARN org.apache.kyuubi.engine.KubernetesApplicationOperation: Get Tag: 465bd0da-e10c-4a42-b658-fae16f2985ca Driver Pod In Kubernetes size: 0, we expect 1
2023-03-28 10:55:09.617 INFO org.apache.kyuubi.engine.EngineRef: Launching engine:
--conf spark.kyuubi.frontend.protocols=THRIFT_BINARY \
    --conf spark.kyuubi.ha.addresses=kyuubi-684c9ccb7-4zdsf:2181 \
    --conf spark.kyuubi.ha.engine.ref.id=f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.kyuubi.ha.namespace=/kyuubi_1.8.0-SNAPSHOT_USER_SPARK_SQL/anonymous/default \
    --conf spark.kyuubi.ha.zookeeper.auth.type=NONE \
    --conf spark.kyuubi.kubernetes.namespace=kyuubi \
    --conf spark.kyuubi.server.ipAddress=127.0.0.1 \
    --conf spark.kyuubi.session.connection.url=localhost:10009 \
    --conf spark.kyuubi.session.real.user=anonymous \
    --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.kubernetes.driver.label.kyuubi-unique-tag=f7398328-da1e-4fc8-a34f-3f0481e40ca2 \
    --conf spark.master=k8s://https://172.20.0.1:443 \
    --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
    --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
    --proxy-user anonymous /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.8.0-SNAPSHOT.jar
2023-03-28 10:55:09.620 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
2023-03-28 10:55:09.621 INFO org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1
2023-03-28 10:55:09.626 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x100667d547a0001
2023-03-28 10:55:09.630 INFO org.apache.zookeeper.ZooKeeper: Session: 0x100667d547a0001 closed
2023-03-28 10:55:09.632 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.2.41.162:33976 which had sessionid 0x100667d547a0001
2023-03-28 10:55:09.630 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x100667d547a0001
2023-03-28 10:55:09.647 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[bd0bec2c-a9cd-418a-8274-bf57388ad9a7]: RUNNING_STATE -> ERROR_STATE, time taken: 182.128 seconds
2023-03-28 10:55:09.726 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Received request of closing SessionHandle [465bd0da-e10c-4a42-b658-fae16f2985ca]
2023-03-28 10:55:09.733 INFO org.apache.kyuubi.session.KyuubiSessionManager: anonymous's session with SessionHandle [465bd0da-e10c-4a42-b658-fae16f2985ca] is closed, current opening sessions 1
2023-03-28 10:55:09.801 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Finished closing SessionHandle [465bd0da-e10c-4a42-b658-fae16f2985ca]
2023-03-28 10:58:10.029 WARN org.apache.kyuubi.engine.KubernetesApplicationOperation: Get Tag: f7398328-da1e-4fc8-a34f-3f0481e40ca2 Driver Pod In Kubernetes size: 0, we expect 1
2023-03-28 10:58:10.036 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
2023-03-28 10:58:10.056 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x100667d547a0002
2023-03-28 10:58:10.059 INFO org.apache.zookeeper.ZooKeeper: Session: 0x100667d547a0002 closed
2023-03-28 10:58:10.059 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x100667d547a0002
2023-03-28 10:58:10.065 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.2.41.162:48604 which had sessionid 0x100667d547a0002
2023-03-28 10:58:10.066 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[45f2f1fc-e5ba-490d-a509-473185e6b8d8]: RUNNING_STATE -> ERROR_STATE, time taken: 218.165 seconds
2023-03-28 10:58:10.166 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Received request of closing SessionHandle [f7398328-da1e-4fc8-a34f-3f0481e40ca2]
2023-03-28 10:58:10.166 INFO org.apache.kyuubi.session.KyuubiSessionManager: anonymous's session with SessionHandle [f7398328-da1e-4fc8-a34f-3f0481e40ca2] is closed, current opening sessions 0
2023-03-28 10:58:10.168 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Finished closing SessionHandle [f7398328-da1e-4fc8-a34f-3f0481e40ca2]
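The 182-second and 218-second ERROR_STATE transitions in the pod log line up with the 180000 ms default quoted in the error message. If the engine pod is merely slow to come up (e.g. pulling the Spark image), raising that timeout is a cheap first check; this sketch uses the exact key the error message names, with an illustrative 10-minute value:

```properties
# Sketch: raise the engine bootstrap timeout (key quoted in the error).
# 600000 ms = 10 minutes; the error reports the default as 180000 ms.
kyuubi.session.engine.initialize.timeout=600000
```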
helm get all kyuubi -n kyuubi
NAME: kyuubi
LAST DEPLOYED: Tue Mar 28 12:32:15 2023
NAMESPACE: kyuubi
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
affinity: {}
containers: []
env: []
envFrom: []
image:
  pullPolicy: Always
  repository: apache/kyuubi
  tag: null
imagePullSecrets: []
initContainers: []
kyuubiConf:
  kyuubiDefaults: null
  kyuubiEnv: null
  log4j2: null
kyuubiConfDir: /opt/kyuubi/conf
nodeSelector: {}
probe:
  liveness:
    enabled: true
    failureThreshold: 10
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 2
  readiness:
    enabled: true
    failureThreshold: 10
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 2
rbac:
  create: true
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - create
    - list
    - delete
replicaCount: 2
resources: {}
securityContext: {}
server:
  mysql:
    enabled: false
    port: 3309
    service:
      annotations: {}
      nodePort: null
      port: '{{ .Values.server.mysql.port }}'
      type: ClusterIP
  rest:
    enabled: false
    port: 10099
    service:
      annotations: {}
      nodePort: null
      port: '{{ .Values.server.rest.port }}'
      type: ClusterIP
  thriftBinary:
    enabled: true
    port: 10009
    service:
      annotations: {}
      nodePort: null
      port: '{{ .Values.server.thriftBinary.port }}'
      type: ClusterIP
  thriftHttp:
    enabled: false
    port: 10010
    service:
      annotations: {}
      nodePort: null
      port: '{{ .Values.server.thriftHttp.port }}'
      type: ClusterIP
serviceAccount:
  create: true
  name: null
tolerations: []
volumeMounts: []
volumes: []

COMPUTED VALUES:
affinity: {}
containers: []
env: []
envFrom: []
image:
  pullPolicy: Always
  repository: apache/kyuubi
imagePullSecrets: []
initContainers: []
kyuubiConf: {}
kyuubiConfDir: /opt/kyuubi/conf
nodeSelector: {}
probe:
  liveness:
    enabled: true
    failureThreshold: 10
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 2
  readiness:
    enabled: true
    failureThreshold: 10
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 2
rbac:
  create: true
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - create
    - list
    - delete
replicaCount: 2
resources: {}
securityContext: {}
server:
  mysql:
    enabled: false
    port: 3309
    service:
      annotations: {}
      port: '{{ .Values.server.mysql.port }}'
      type: ClusterIP
  rest:
    enabled: false
    port: 10099
    service:
      annotations: {}
      port: '{{ .Values.server.rest.port }}'
      type: ClusterIP
  thriftBinary:
    enabled: true
    port: 10009
    service:
      annotations: {}
      port: '{{ .Values.server.thriftBinary.port }}'
      type: ClusterIP
  thriftHttp:
    enabled: false
    port: 10010
    service:
      annotations: {}
      port: '{{ .Values.server.thriftHttp.port }}'
      type: ClusterIP
serviceAccount:
  create: true
tolerations: []
volumeMounts: []
volumes: []
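One thing worth noting about the computed `rbac.rules` above: the `pods` verbs are only `create`, `list`, and `delete`, but the failing request in the stack trace is a watch (`watch=true`) GET. Under standard Kubernetes RBAC, watching pods also requires the `get` and `watch` verbs. A hedged values-override sketch (standard RBAC verbs, not taken from the chart's defaults):

```yaml
# values-rbac.yaml -- sketch of a fuller verb set for the engine pods.
# "get" and "watch" are what the watch=true GET in the trace needs.
rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["create", "get", "list", "watch", "delete"]
```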

HOOKS:
MANIFEST:
---
# Source: kyuubi/templates/kyuubi-serviceaccount.yaml
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
---
# Source: kyuubi/templates/kyuubi-configmap.yaml
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
data:
  kyuubi-defaults.conf: |
    ## Helm chart provided Kyuubi configurations
    kyuubi.kubernetes.namespace=kyuubi
    kyuubi.frontend.bind.host=localhost
    kyuubi.frontend.thrift.binary.bind.port=10009
    kyuubi.frontend.thrift.http.bind.port=10010
    kyuubi.frontend.rest.bind.port=10099
    kyuubi.frontend.mysql.bind.port=3309
    kyuubi.frontend.protocols=THRIFT_BINARY

    ## User provided Kyuubi configurations
---
# Source: kyuubi/templates/kyuubi-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - create
    - list
    - delete
---
# Source: kyuubi/templates/kyuubi-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
subjects:
  - kind: ServiceAccount
    name: kyuubi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kyuubi
---
# Source: kyuubi/templates/kyuubi-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kyuubi-thrift-binary
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - name: thrift-binary
      port: 10009
      targetPort: 10009
  selector:
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
---
# Source: kyuubi/templates/kyuubi-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "master-snapshot"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: kyuubi
      app.kubernetes.io/instance: kyuubi
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kyuubi
        app.kubernetes.io/instance: kyuubi
      annotations:
        checksum/conf: d297f8fabfdfc4c048ebdc3c0cc9b2da6b2741fe0eed4b96ead53fb747d8d68d
    spec:
      serviceAccountName: kyuubi
      containers:
        - name: kyuubi-server
          image: "apache/kyuubi:master-snapshot"
          imagePullPolicy: Always
          ports:
            - name: thrift-binary
              containerPort: 10009
          livenessProbe:
            exec:
              command: ["/bin/bash", "-c", "bin/kyuubi status"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 2
            failureThreshold: 10
            successThreshold: 1
          readinessProbe:
            exec:
              command: ["/bin/bash", "-c", "$KYUUBI_HOME/bin/kyuubi status"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 2
            failureThreshold: 10
            successThreshold: 1
          volumeMounts:
            - name: conf
              mountPath: /opt/kyuubi/conf
      volumes:
        - name: conf
          configMap:
            name: kyuubi

NOTES:
The chart has been installed!

In order to check the release status, use:
  helm status kyuubi -n kyuubi
    or for more detailed info
  helm get all kyuubi -n kyuubi

************************
******* Services *******
************************
THRIFT_BINARY:
- To access kyuubi-thrift-binary service within the cluster, use the following URL:
    kyuubi-thrift-binary.kyuubi.svc.cluster.local
- To access kyuubi-thrift-binary service from outside the cluster for debugging, run the following command:
    kubectl port-forward svc/kyuubi-thrift-binary 10009:10009 -n kyuubi
  and use 127.0.0.1:10009
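As a sanity check, the NOTES above boil down to the following connection sketch (host and port are the chart defaults, not verified against a live cluster):

```shell
# The JDBC URL implied by the NOTES; DBeaver and beeline use the same one.
JDBC_URL="jdbc:hive2://127.0.0.1:10009"
echo "${JDBC_URL}"
# With the port-forward running in another terminal:
#   kubectl port-forward svc/kyuubi-thrift-binary 10009:10009 -n kyuubi
# connect, e.g. with beeline:
#   beeline -u "${JDBC_URL}" -n anonymous
```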

Any ideas?

Affects Version(s)

1.7.0

Kyuubi Server Log Output

No response

Kyuubi Engine Log Output

No response

Kyuubi Server Configurations

No response

Kyuubi Engine Configurations

No response

Additional context

No response

Are you willing to submit PR?

github-actions[bot] commented 1 year ago

Hello @tnk-dev, Thanks for finding the time to report the issue! We really appreciate the community's efforts to improve Apache Kyuubi.

tnk-dev commented 1 year ago

What stands out to me is:

GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-ac1644aa5038439092c7bdef0a00eedc%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
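For context, that Forbidden response is the Kubernetes API server rejecting a pod watch. Two things stand out: the request targets namespaces/default rather than the kyuubi namespace, and the chart's Role above only grants create/list/delete on pods, not get/watch. A hedged sketch of a Role with the verbs a Spark driver typically needs to track its executors (the rule contents are my assumption, not the project's official fix):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-driver
  namespace: kyuubi   # must match spark.kyuubi.kubernetes.namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["configmaps", "services"]
    verbs: ["create", "get", "list", "watch", "delete"]
```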
tnk-dev commented 1 year ago

@dnskr any idea?

dnskr commented 1 year ago

Hi! As far as I can see, the apache/kyuubi:master-snapshot image is used. Could you please try to reproduce the issue with a current released version? I mean one of the following images from Docker Hub:

zwangsheng commented 1 year ago

Hi @tnk-dev, for additional information, could you please paste the Spark submit log /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1? Its path is generally recorded in the Kyuubi server log, e.g. as "See more: /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1" or "org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1".
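One hedged way to pull that log out of a running server pod (the deployment name kyuubi matches the manifest above; adjust if your release differs):

```sh
# List the server pods first, then read the engine submit log in place.
kubectl get pods -n kyuubi
kubectl -n kyuubi exec deploy/kyuubi -- \
  cat /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.1
```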

tnk-dev commented 1 year ago

Hey @zwangsheng @dnskr,

Using one of the mentioned releases indeed solved the error, thank you!
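For anyone hitting the same thing, a hedged sketch of pinning a released image via chart values, assuming the chart exposes image.repository/image.tag keys (check the chart's values.yaml before relying on these names):

```yaml
# image-values.yaml, applied with: helm install ... -f image-values.yaml
image:
  repository: apache/kyuubi
  tag: "1.7.0"
```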

Running Kyuubi locally according to the quick start guide worked fine without having any users set up (anonymous).

With Helm, though, it complains and I cannot connect to Kyuubi via DBeaver using the above-mentioned connection URL:

POD Logs

Warn: Not find kyuubi environment file /opt/kyuubi/conf/kyuubi-env.sh, using default ones...
JAVA_HOME: /opt/java/openjdk
KYUUBI_HOME: /opt/kyuubi
KYUUBI_CONF_DIR: /opt/kyuubi/conf
KYUUBI_LOG_DIR: /opt/kyuubi/logs
KYUUBI_PID_DIR: /opt/kyuubi/pid
KYUUBI_WORK_DIR_ROOT: /opt/kyuubi/work
FLINK_HOME: /opt/kyuubi/externals/flink-1.16.1
FLINK_ENGINE_HOME: /opt/kyuubi/externals/engines/flink
SPARK_HOME: /opt/kyuubi/externals/spark-3.3.2-bin-hadoop3
SPARK_CONF_DIR: /opt/kyuubi/externals/spark-3.3.2-bin-hadoop3/conf
SPARK_ENGINE_HOME: /opt/kyuubi/externals/engines/spark
TRINO_ENGINE_HOME: /opt/kyuubi/externals/engines/trino
HIVE_ENGINE_HOME: /opt/kyuubi/externals/engines/hive
HADOOP_CONF_DIR: 
YARN_CONF_DIR: 
Starting org.apache.kyuubi.server.KyuubiServer
2023-03-29 15:04:55.409 INFO org.apache.kyuubi.server.KyuubiServer: 
                  Welcome to
  __  __                           __
 /\ \/\ \                         /\ \      __
 \ \ \/'/'  __  __  __  __  __  __\ \ \____/\_\
  \ \ , <  /\ \/\ \/\ \/\ \/\ \/\ \\ \ '__`\/\ \
   \ \ \\`\\ \ \_\ \ \ \_\ \ \ \_\ \\ \ \L\ \ \ \
    \ \_\ \_\/`____ \ \____/\ \____/ \ \_,__/\ \_\
     \/_/\/_/`/___/> \/___/  \/___/   \/___/  \/_/
                /\___/
                \/__/

2023-03-29 15:04:55.418 INFO org.apache.kyuubi.server.KyuubiServer: Version: 1.7.0, Revision: 3e1fb98a276036e95d4025a391a05d158681295d (2023-03-08 17:22:48 +0800), Branch: HEAD, Java: 1.8, Scala: 2.12, Spark: 3.3.2, Hadoop: 3.3.4, Hive: 3.1.3, Flink: 1.16.1, Trino: 363
2023-03-29 15:04:55.425 INFO org.apache.kyuubi.server.KyuubiServer: Using Scala version 2.12.17, OpenJDK 64-Bit Server VM, 1.8.0_362
2023-03-29 15:04:55.435 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for TERM
2023-03-29 15:04:55.437 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for HUP
2023-03-29 15:04:55.437 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for INT
2023-03-29 15:04:55.817 INFO org.apache.kyuubi.Utils: Loading Kyuubi properties from /opt/kyuubi/conf/kyuubi-defaults.conf
2023-03-29 15:04:56.044 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-03-29 15:04:56.154 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2023-03-29 15:04:56.155 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:host.name=kyuubi-88df75948-4v5gx
2023-03-29 15:04:56.155 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.version=1.8.0_362
2023-03-29 15:04:56.155 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.vendor=Temurin
2023-03-29 15:04:56.155 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.home=/opt/java/openjdk/jre
2023-03-29 15:04:56.155 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.class.path=/opt/kyuubi/jars/HikariCP-4.0.3.jar:/opt/kyuubi/jars/ST4-4.3.4.jar:/opt/kyuubi/jars/animal-sniffer-annotations-1.21.jar:/opt/kyuubi/jars/annotations-4.1.1.4.jar:/opt/kyuubi/jars/antlr-runtime-3.5.3.jar:/opt/kyuubi/jars/antlr4-runtime-4.9.3.jar:/opt/kyuubi/jars/aopalliance-repackaged-2.6.1.jar:/opt/kyuubi/jars/automaton-1.11-8.jar:/opt/kyuubi/jars/classgraph-4.8.138.jar:/opt/kyuubi/jars/commons-codec-1.15.jar:/opt/kyuubi/jars/commons-collections-3.2.2.jar:/opt/kyuubi/jars/commons-lang-2.6.jar:/opt/kyuubi/jars/commons-lang3-3.12.0.jar:/opt/kyuubi/jars/commons-logging-1.1.3.jar:/opt/kyuubi/jars/curator-client-2.12.0.jar:/opt/kyuubi/jars/curator-framework-2.12.0.jar:/opt/kyuubi/jars/curator-recipes-2.12.0.jar:/opt/kyuubi/jars/derby-10.14.2.0.jar:/opt/kyuubi/jars/error_prone_annotations-2.14.0.jar:/opt/kyuubi/jars/failsafe-2.4.4.jar:/opt/kyuubi/jars/failureaccess-1.0.1.jar:/opt/kyuubi/jars/fliptables-1.0.2.jar:/opt/kyuubi/jars/generex-1.0.2.jar:/opt/kyuubi/jars/grpc-api-1.48.0.jar:/opt/kyuubi/jars/grpc-context-1.48.0.jar:/opt/kyuubi/jars/grpc-core-1.48.0.jar:/opt/kyuubi/jars/grpc-grpclb-1.48.0.jar:/opt/kyuubi/jars/grpc-netty-1.48.0.jar:/opt/kyuubi/jars/grpc-protobuf-1.48.0.jar:/opt/kyuubi/jars/grpc-protobuf-lite-1.48.0.jar:/opt/kyuubi/jars/grpc-stub-1.48.0.jar:/opt/kyuubi/jars/gson-2.9.0.jar:/opt/kyuubi/jars/guava-31.1-jre.jar:/opt/kyuubi/jars/hadoop-client-api-3.3.4.jar:/opt/kyuubi/jars/hadoop-client-runtime-3.3.4.jar:/opt/kyuubi/jars/hive-common-3.1.3.jar:/opt/kyuubi/jars/hive-metastore-3.1.3.jar:/opt/kyuubi/jars/hive-serde-3.1.3.jar:/opt/kyuubi/jars/hive-service-rpc-3.1.3.jar:/opt/kyuubi/jars/hive-shims-0.23-3.1.3.jar:/opt/kyuubi/jars/hive-shims-common-3.1.3.jar:/opt/kyuubi/jars/hive-standalone-metastore-3.1.3.jar:/opt/kyuubi/jars/hive-storage-api-2.7.0.jar:/opt/kyuubi/jars/hk2-api-2.6.1.jar:/opt/kyuubi/jars/hk2-locator-2.6.1.jar:/opt/kyuubi/jars/hk2-utils-2.6.
1.jar:/opt/kyuubi/jars/httpclient-4.5.14.jar:/opt/kyuubi/jars/httpcore-4.4.16.jar:/opt/kyuubi/jars/httpmime-4.5.14.jar:/opt/kyuubi/jars/j2objc-annotations-1.3.jar:/opt/kyuubi/jars/jackson-annotations-2.14.2.jar:/opt/kyuubi/jars/jackson-core-2.14.2.jar:/opt/kyuubi/jars/jackson-databind-2.14.2.jar:/opt/kyuubi/jars/jackson-dataformat-yaml-2.14.2.jar:/opt/kyuubi/jars/jackson-datatype-jdk8-2.14.2.jar:/opt/kyuubi/jars/jackson-datatype-jsr310-2.14.2.jar:/opt/kyuubi/jars/jackson-jaxrs-base-2.14.2.jar:/opt/kyuubi/jars/jackson-jaxrs-json-provider-2.14.2.jar:/opt/kyuubi/jars/jackson-module-jaxb-annotations-2.14.2.jar:/opt/kyuubi/jars/jackson-module-scala_2.12-2.14.2.jar:/opt/kyuubi/jars/jakarta.annotation-api-1.3.5.jar:/opt/kyuubi/jars/jakarta.inject-2.6.1.jar:/opt/kyuubi/jars/jakarta.servlet-api-4.0.4.jar:/opt/kyuubi/jars/jakarta.validation-api-2.0.2.jar:/opt/kyuubi/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/kyuubi/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/kyuubi/jars/javassist-3.25.0-GA.jar:/opt/kyuubi/jars/jcl-over-slf4j-1.7.36.jar:/opt/kyuubi/jars/jersey-client-2.39.jar:/opt/kyuubi/jars/jersey-common-2.39.jar:/opt/kyuubi/jars/jersey-container-servlet-core-2.39.jar:/opt/kyuubi/jars/jersey-entity-filtering-2.39.jar:/opt/kyuubi/jars/jersey-hk2-2.39.jar:/opt/kyuubi/jars/jersey-media-json-jackson-2.39.jar:/opt/kyuubi/jars/jersey-media-multipart-2.39.jar:/opt/kyuubi/jars/jersey-server-2.39.jar:/opt/kyuubi/jars/jetcd-api-0.7.3.jar:/opt/kyuubi/jars/jetcd-common-0.7.3.jar:/opt/kyuubi/jars/jetcd-core-0.7.3.jar:/opt/kyuubi/jars/jetcd-grpc-0.7.3.jar:/opt/kyuubi/jars/jetty-http-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-io-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-security-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-server-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-servlet-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-util-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-util-ajax-9.4.50.v20221201.jar:/opt/kyuubi/jars/jline-0.9.94.jar:/opt/kyuubi/jars/jul-to-slf4j-1.7.36.jar:/opt/kyuubi/jars/ku
bernetes-client-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-admissionregistration-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-apiextensions-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-apps-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-autoscaling-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-batch-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-common-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-certificates-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-coordination-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-core-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-discovery-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-events-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-extensions-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-flowcontrol-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-metrics-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-networking-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-node-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-policy-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-rbac-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-scheduling-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-storageclass-5.12.1.jar:/opt/kyuubi/jars/kyuubi-common_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-ctl_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-events_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-ha_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-metrics_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-rest-client-1.7.0.jar:/opt/kyuubi/jars/kyuubi-server-plugin-1.7.0.jar:/opt/kyuubi/jars/kyuubi-server_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-zookeeper_2.12-1.7.0.jar:/opt/kyuubi/jars/libfb303-0.9.3.jar:/opt/kyuubi/jars/libthrift-0.9.3.jar:/opt/kyuubi/jars/log4j-1.2-api-2.19.0.jar:/opt/kyuubi/jars/log4j-api-2.19.0.jar:/opt/kyuubi/jars/log4j-core-2.19.0.jar:/opt/kyuubi/jars/log4j-slf4j-impl-2.19.0.jar:/opt/kyuubi/jars/logging-interceptor-3.12.12.jar:/opt/kyuubi/jars/metrics-core-4.2.8.jar:/opt/kyuubi/jars/metrics-jmx-4.2.8.jar:/opt/kyuubi/jars/metrics-json-4.2.8.jar:/opt/kyuubi/jars/metrics-jvm-4.2.8.jar:/opt/kyuubi/jars/mi
mepull-1.9.15.jar:/opt/kyuubi/jars/netty-all-4.1.87.Final.jar:/opt/kyuubi/jars/netty-buffer-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-dns-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-http-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-http2-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-socks-4.1.87.Final.jar:/opt/kyuubi/jars/netty-common-4.1.87.Final.jar:/opt/kyuubi/jars/netty-handler-4.1.87.Final.jar:/opt/kyuubi/jars/netty-handler-proxy-4.1.87.Final.jar:/opt/kyuubi/jars/netty-resolver-4.1.87.Final.jar:/opt/kyuubi/jars/netty-resolver-dns-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-classes-epoll-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-native-epoll-4.1.87.Final-linux-aarch_64.jar:/opt/kyuubi/jars/netty-transport-native-epoll-4.1.87.Final-linux-x86_64.jar:/opt/kyuubi/jars/netty-transport-native-unix-common-4.1.87.Final.jar:/opt/kyuubi/jars/okhttp-3.12.12.jar:/opt/kyuubi/jars/okhttp-urlconnection-3.14.9.jar:/opt/kyuubi/jars/okio-1.15.0.jar:/opt/kyuubi/jars/osgi-resource-locator-1.0.3.jar:/opt/kyuubi/jars/paranamer-2.8.jar:/opt/kyuubi/jars/perfmark-api-0.25.0.jar:/opt/kyuubi/jars/proto-google-common-protos-2.9.0.jar:/opt/kyuubi/jars/protobuf-java-3.21.7.jar:/opt/kyuubi/jars/protobuf-java-util-3.21.7.jar:/opt/kyuubi/jars/scala-library-2.12.17.jar:/opt/kyuubi/jars/scopt_2.12-4.1.0.jar:/opt/kyuubi/jars/simpleclient-0.16.0.jar:/opt/kyuubi/jars/simpleclient_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_dropwizard-0.16.0.jar:/opt/kyuubi/jars/simpleclient_servlet-0.16.0.jar:/opt/kyuubi/jars/simpleclient_servlet_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_otel-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/kyuubi/jars/slf4j-api-1.7.36.jar:/opt/kyuubi/jars/snakeyaml-1.33.jar:/opt/kyuubi/jars/swagger-annotations-2.2.1.jar:/opt/kyuubi/jars/swagger-core-2.2.1.jar:/opt/kyuubi
/jars/swagger-integration-2.2.1.jar:/opt/kyuubi/jars/swagger-jaxrs2-2.2.1.jar:/opt/kyuubi/jars/swagger-models-2.2.1.jar:/opt/kyuubi/jars/swagger-ui-4.9.1.jar:/opt/kyuubi/jars/trino-client-363.jar:/opt/kyuubi/jars/units-1.6.jar:/opt/kyuubi/jars/vertx-core-4.3.2.jar:/opt/kyuubi/jars/vertx-grpc-4.3.2.jar:/opt/kyuubi/jars/zjsonpatch-0.3.0.jar:/opt/kyuubi/jars/zookeeper-3.4.14.jar:/opt/kyuubi/conf:
2023-03-29 15:04:56.156 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.compiler=<NA>
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.name=Linux
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.arch=amd64
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.version=5.4.226-129.415.amzn2.x86_64
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.name=kyuubi
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.home=/home/kyuubi
2023-03-29 15:04:56.157 INFO org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.dir=/opt/kyuubi
2023-03-29 15:04:56.178 INFO org.apache.zookeeper.server.ZooKeeperServer: Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir embedded_zookeeper/version-2 snapdir embedded_zookeeper/version-2
2023-03-29 15:04:56.178 INFO org.apache.zookeeper.server.ZooKeeperServer: minSessionTimeout set to 6000
2023-03-29 15:04:56.178 INFO org.apache.zookeeper.server.ZooKeeperServer: maxSessionTimeout set to 60000
2023-03-29 15:04:56.195 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port kyuubi-88df75948-4v5gx/10.2.45.168:2181
2023-03-29 15:04:56.199 INFO org.apache.kyuubi.zookeeper.EmbeddedZookeeper: Service[EmbeddedZookeeper] is initialized.
2023-03-29 15:04:56.255 INFO org.apache.kyuubi.zookeeper.EmbeddedZookeeper: EmbeddedZookeeper is started at kyuubi-88df75948-4v5gx:2181
2023-03-29 15:04:56.281 INFO org.apache.kyuubi.zookeeper.EmbeddedZookeeper: Service[EmbeddedZookeeper] is started.
2023-03-29 15:04:56.346 INFO org.apache.kyuubi.server.KinitAuxiliaryService: Service[KinitAuxiliaryService] is initialized.
2023-03-29 15:04:56.347 INFO org.apache.kyuubi.server.PeriodicGCService: Service[PeriodicGCService] is initialized.
2023-03-29 15:04:56.520 INFO org.apache.kyuubi.metrics.JsonReporterService: Service[JsonReporterService] is initialized.
2023-03-29 15:04:56.520 INFO org.apache.kyuubi.metrics.MetricsSystem: Service[MetricsSystem] is initialized.
2023-03-29 15:04:56.533 INFO org.apache.kyuubi.util.ThreadUtils: KyuubiSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 60000 ms
2023-03-29 15:04:56.639 INFO org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2023-03-29 15:04:56.795 INFO org.apache.kyuubi.engine.YarnApplicationOperation: Successfully initialized yarn client: STARTED
2023-03-29 15:04:57.183 INFO org.apache.kyuubi.engine.KubernetesApplicationOperation: Start initializing Kubernetes Client.
2023-03-29 15:04:57.191 INFO org.apache.kyuubi.util.KubernetesUtils: Auto-configuring K8S client using current context from users K8S config file
2023-03-29 15:04:57.778 INFO org.apache.kyuubi.engine.KubernetesApplicationOperation: Initialized Kubernetes Client connect to: https://172.20.0.1:443/
2023-03-29 15:04:57.780 INFO org.apache.kyuubi.engine.KyuubiApplicationManager: Service[KyuubiApplicationManager] is initialized.
2023-03-29 15:04:57.912 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: Service hadoopfs does not require a token. Check your configuration to see if security is disabled or not. If security is enabled, some configurations of hadoopfs  might be missing, please check the configurations in  https://kyuubi.readthedocs.io/en/latest/security/hadoop_credentials_manager.html#required-security-configs
2023-03-29 15:04:57.916 INFO org.apache.hadoop.hive.conf.HiveConf: Found configuration file null
2023-03-29 15:04:58.169 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: Service hive does not require a token. Check your configuration to see if security is disabled or not. If security is enabled, some configurations of hive  might be missing, please check the configurations in  https://kyuubi.readthedocs.io/en/latest/security/hadoop_credentials_manager.html#required-security-configs
2023-03-29 15:04:58.172 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: No delegation token is required by services.
2023-03-29 15:04:58.173 INFO org.apache.kyuubi.credentials.HadoopCredentialsManager: Service[HadoopCredentialsManager] is initialized.
2023-03-29 15:04:58.185 INFO org.apache.kyuubi.operation.KyuubiOperationManager: Service[KyuubiOperationManager] is initialized.
2023-03-29 15:04:58.186 INFO org.apache.kyuubi.session.KyuubiSessionManager: Service[KyuubiSessionManager] is initialized.
2023-03-29 15:04:58.187 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiBackendService] is initialized.
2023-03-29 15:04:58.225 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Initializing KyuubiTBinaryFrontend on localhost:10009 with [9, 999] worker threads
2023-03-29 15:04:58.301 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=kyuubi-88df75948-4v5gx
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.8.0_362
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Temurin
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/opt/java/openjdk/jre
2023-03-29 15:04:58.308 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/kyuubi/jars/HikariCP-4.0.3.jar:/opt/kyuubi/jars/ST4-4.3.4.jar:/opt/kyuubi/jars/animal-sniffer-annotations-1.21.jar:/opt/kyuubi/jars/annotations-4.1.1.4.jar:/opt/kyuubi/jars/antlr-runtime-3.5.3.jar:/opt/kyuubi/jars/antlr4-runtime-4.9.3.jar:/opt/kyuubi/jars/aopalliance-repackaged-2.6.1.jar:/opt/kyuubi/jars/automaton-1.11-8.jar:/opt/kyuubi/jars/classgraph-4.8.138.jar:/opt/kyuubi/jars/commons-codec-1.15.jar:/opt/kyuubi/jars/commons-collections-3.2.2.jar:/opt/kyuubi/jars/commons-lang-2.6.jar:/opt/kyuubi/jars/commons-lang3-3.12.0.jar:/opt/kyuubi/jars/commons-logging-1.1.3.jar:/opt/kyuubi/jars/curator-client-2.12.0.jar:/opt/kyuubi/jars/curator-framework-2.12.0.jar:/opt/kyuubi/jars/curator-recipes-2.12.0.jar:/opt/kyuubi/jars/derby-10.14.2.0.jar:/opt/kyuubi/jars/error_prone_annotations-2.14.0.jar:/opt/kyuubi/jars/failsafe-2.4.4.jar:/opt/kyuubi/jars/failureaccess-1.0.1.jar:/opt/kyuubi/jars/fliptables-1.0.2.jar:/opt/kyuubi/jars/generex-1.0.2.jar:/opt/kyuubi/jars/grpc-api-1.48.0.jar:/opt/kyuubi/jars/grpc-context-1.48.0.jar:/opt/kyuubi/jars/grpc-core-1.48.0.jar:/opt/kyuubi/jars/grpc-grpclb-1.48.0.jar:/opt/kyuubi/jars/grpc-netty-1.48.0.jar:/opt/kyuubi/jars/grpc-protobuf-1.48.0.jar:/opt/kyuubi/jars/grpc-protobuf-lite-1.48.0.jar:/opt/kyuubi/jars/grpc-stub-1.48.0.jar:/opt/kyuubi/jars/gson-2.9.0.jar:/opt/kyuubi/jars/guava-31.1-jre.jar:/opt/kyuubi/jars/hadoop-client-api-3.3.4.jar:/opt/kyuubi/jars/hadoop-client-runtime-3.3.4.jar:/opt/kyuubi/jars/hive-common-3.1.3.jar:/opt/kyuubi/jars/hive-metastore-3.1.3.jar:/opt/kyuubi/jars/hive-serde-3.1.3.jar:/opt/kyuubi/jars/hive-service-rpc-3.1.3.jar:/opt/kyuubi/jars/hive-shims-0.23-3.1.3.jar:/opt/kyuubi/jars/hive-shims-common-3.1.3.jar:/opt/kyuubi/jars/hive-standalone-metastore-3.1.3.jar:/opt/kyuubi/jars/hive-storage-api-2.7.0.jar:/opt/kyuubi/jars/hk2-api-2.6.1.jar:/opt/kyuubi/jars/hk2-locator-2.6.1.jar:/opt/kyuubi/jars/hk2-utils-2.6.1.jar:/opt/ky
uubi/jars/httpclient-4.5.14.jar:/opt/kyuubi/jars/httpcore-4.4.16.jar:/opt/kyuubi/jars/httpmime-4.5.14.jar:/opt/kyuubi/jars/j2objc-annotations-1.3.jar:/opt/kyuubi/jars/jackson-annotations-2.14.2.jar:/opt/kyuubi/jars/jackson-core-2.14.2.jar:/opt/kyuubi/jars/jackson-databind-2.14.2.jar:/opt/kyuubi/jars/jackson-dataformat-yaml-2.14.2.jar:/opt/kyuubi/jars/jackson-datatype-jdk8-2.14.2.jar:/opt/kyuubi/jars/jackson-datatype-jsr310-2.14.2.jar:/opt/kyuubi/jars/jackson-jaxrs-base-2.14.2.jar:/opt/kyuubi/jars/jackson-jaxrs-json-provider-2.14.2.jar:/opt/kyuubi/jars/jackson-module-jaxb-annotations-2.14.2.jar:/opt/kyuubi/jars/jackson-module-scala_2.12-2.14.2.jar:/opt/kyuubi/jars/jakarta.annotation-api-1.3.5.jar:/opt/kyuubi/jars/jakarta.inject-2.6.1.jar:/opt/kyuubi/jars/jakarta.servlet-api-4.0.4.jar:/opt/kyuubi/jars/jakarta.validation-api-2.0.2.jar:/opt/kyuubi/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/kyuubi/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/kyuubi/jars/javassist-3.25.0-GA.jar:/opt/kyuubi/jars/jcl-over-slf4j-1.7.36.jar:/opt/kyuubi/jars/jersey-client-2.39.jar:/opt/kyuubi/jars/jersey-common-2.39.jar:/opt/kyuubi/jars/jersey-container-servlet-core-2.39.jar:/opt/kyuubi/jars/jersey-entity-filtering-2.39.jar:/opt/kyuubi/jars/jersey-hk2-2.39.jar:/opt/kyuubi/jars/jersey-media-json-jackson-2.39.jar:/opt/kyuubi/jars/jersey-media-multipart-2.39.jar:/opt/kyuubi/jars/jersey-server-2.39.jar:/opt/kyuubi/jars/jetcd-api-0.7.3.jar:/opt/kyuubi/jars/jetcd-common-0.7.3.jar:/opt/kyuubi/jars/jetcd-core-0.7.3.jar:/opt/kyuubi/jars/jetcd-grpc-0.7.3.jar:/opt/kyuubi/jars/jetty-http-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-io-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-security-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-server-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-servlet-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-util-9.4.50.v20221201.jar:/opt/kyuubi/jars/jetty-util-ajax-9.4.50.v20221201.jar:/opt/kyuubi/jars/jline-0.9.94.jar:/opt/kyuubi/jars/jul-to-slf4j-1.7.36.jar:/opt/kyuubi/jars/kubernetes-clie
nt-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-admissionregistration-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-apiextensions-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-apps-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-autoscaling-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-batch-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-common-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-certificates-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-coordination-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-core-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-discovery-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-events-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-extensions-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-flowcontrol-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-metrics-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-networking-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-node-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-policy-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-rbac-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-scheduling-5.12.1.jar:/opt/kyuubi/jars/kubernetes-model-storageclass-5.12.1.jar:/opt/kyuubi/jars/kyuubi-common_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-ctl_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-events_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-ha_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-metrics_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-rest-client-1.7.0.jar:/opt/kyuubi/jars/kyuubi-server-plugin-1.7.0.jar:/opt/kyuubi/jars/kyuubi-server_2.12-1.7.0.jar:/opt/kyuubi/jars/kyuubi-zookeeper_2.12-1.7.0.jar:/opt/kyuubi/jars/libfb303-0.9.3.jar:/opt/kyuubi/jars/libthrift-0.9.3.jar:/opt/kyuubi/jars/log4j-1.2-api-2.19.0.jar:/opt/kyuubi/jars/log4j-api-2.19.0.jar:/opt/kyuubi/jars/log4j-core-2.19.0.jar:/opt/kyuubi/jars/log4j-slf4j-impl-2.19.0.jar:/opt/kyuubi/jars/logging-interceptor-3.12.12.jar:/opt/kyuubi/jars/metrics-core-4.2.8.jar:/opt/kyuubi/jars/metrics-jmx-4.2.8.jar:/opt/kyuubi/jars/metrics-json-4.2.8.jar:/opt/kyuubi/jars/metrics-jvm-4.2.8.jar:/opt/kyuubi/jars/mimepull-1.9.15
.jar:/opt/kyuubi/jars/netty-all-4.1.87.Final.jar:/opt/kyuubi/jars/netty-buffer-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-dns-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-http-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-http2-4.1.87.Final.jar:/opt/kyuubi/jars/netty-codec-socks-4.1.87.Final.jar:/opt/kyuubi/jars/netty-common-4.1.87.Final.jar:/opt/kyuubi/jars/netty-handler-4.1.87.Final.jar:/opt/kyuubi/jars/netty-handler-proxy-4.1.87.Final.jar:/opt/kyuubi/jars/netty-resolver-4.1.87.Final.jar:/opt/kyuubi/jars/netty-resolver-dns-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-classes-epoll-4.1.87.Final.jar:/opt/kyuubi/jars/netty-transport-native-epoll-4.1.87.Final-linux-aarch_64.jar:/opt/kyuubi/jars/netty-transport-native-epoll-4.1.87.Final-linux-x86_64.jar:/opt/kyuubi/jars/netty-transport-native-unix-common-4.1.87.Final.jar:/opt/kyuubi/jars/okhttp-3.12.12.jar:/opt/kyuubi/jars/okhttp-urlconnection-3.14.9.jar:/opt/kyuubi/jars/okio-1.15.0.jar:/opt/kyuubi/jars/osgi-resource-locator-1.0.3.jar:/opt/kyuubi/jars/paranamer-2.8.jar:/opt/kyuubi/jars/perfmark-api-0.25.0.jar:/opt/kyuubi/jars/proto-google-common-protos-2.9.0.jar:/opt/kyuubi/jars/protobuf-java-3.21.7.jar:/opt/kyuubi/jars/protobuf-java-util-3.21.7.jar:/opt/kyuubi/jars/scala-library-2.12.17.jar:/opt/kyuubi/jars/scopt_2.12-4.1.0.jar:/opt/kyuubi/jars/simpleclient-0.16.0.jar:/opt/kyuubi/jars/simpleclient_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_dropwizard-0.16.0.jar:/opt/kyuubi/jars/simpleclient_servlet-0.16.0.jar:/opt/kyuubi/jars/simpleclient_servlet_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_common-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_otel-0.16.0.jar:/opt/kyuubi/jars/simpleclient_tracer_otel_agent-0.16.0.jar:/opt/kyuubi/jars/slf4j-api-1.7.36.jar:/opt/kyuubi/jars/snakeyaml-1.33.jar:/opt/kyuubi/jars/swagger-annotations-2.2.1.jar:/opt/kyuubi/jars/swagger-core-2.2.1.jar:/opt/kyuubi/jars/swagger
-integration-2.2.1.jar:/opt/kyuubi/jars/swagger-jaxrs2-2.2.1.jar:/opt/kyuubi/jars/swagger-models-2.2.1.jar:/opt/kyuubi/jars/swagger-ui-4.9.1.jar:/opt/kyuubi/jars/trino-client-363.jar:/opt/kyuubi/jars/units-1.6.jar:/opt/kyuubi/jars/vertx-core-4.3.2.jar:/opt/kyuubi/jars/vertx-grpc-4.3.2.jar:/opt/kyuubi/jars/zjsonpatch-0.3.0.jar:/opt/kyuubi/jars/zookeeper-3.4.14.jar:/opt/kyuubi/conf:
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=5.4.226-129.415.amzn2.x86_64
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=kyuubi
2023-03-29 15:04:58.309 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/kyuubi
2023-03-29 15:04:58.310 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/kyuubi
2023-03-29 15:04:58.310 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=kyuubi-88df75948-4v5gx:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@1706a5c9
2023-03-29 15:04:58.326 INFO org.apache.kyuubi.ha.client.KyuubiServiceDiscovery: Service[KyuubiServiceDiscovery] is initialized.
2023-03-29 15:04:58.327 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Service[KyuubiTBinaryFrontend] is initialized.
2023-03-29 15:04:58.329 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiServer] is initialized.
2023-03-29 15:04:58.330 INFO org.apache.kyuubi.server.KinitAuxiliaryService: Service[KinitAuxiliaryService] is started.
2023-03-29 15:04:58.331 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-88df75948-4v5gx/10.2.45.168:2181. Will not attempt to authenticate using SASL (unknown error)
2023-03-29 15:04:58.332 INFO org.apache.kyuubi.server.PeriodicGCService: Service[PeriodicGCService] is started.
2023-03-29 15:04:58.334 INFO org.apache.kyuubi.metrics.JsonReporterService: Service[JsonReporterService] is started.
2023-03-29 15:04:58.334 INFO org.apache.kyuubi.metrics.MetricsSystem: Service[MetricsSystem] is started.
2023-03-29 15:04:58.334 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.2.45.168:51756
2023-03-29 15:04:58.337 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to kyuubi-88df75948-4v5gx/10.2.45.168:2181, initiating session
2023-03-29 15:04:58.356 INFO org.apache.kyuubi.engine.KyuubiApplicationManager: Service[KyuubiApplicationManager] is started.
2023-03-29 15:04:58.356 INFO org.apache.kyuubi.credentials.HadoopCredentialsManager: Service[HadoopCredentialsManager] is started.
2023-03-29 15:04:58.365 INFO org.apache.kyuubi.operation.KyuubiOperationManager: Service[KyuubiOperationManager] is started.
2023-03-29 15:04:58.365 INFO org.apache.kyuubi.session.KyuubiSessionManager: Service[KyuubiSessionManager] is started.
2023-03-29 15:04:58.365 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiBackendService] is started.
2023-03-29 15:04:58.368 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.2.45.168:51756
2023-03-29 15:04:58.377 INFO org.apache.zookeeper.server.persistence.FileTxnLog: Creating new log file: log.1
2023-03-29 15:04:58.377 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://localhost:10009/
2023-03-29 15:04:58.408 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-88df75948-4v5gx/10.2.45.168:2181, sessionid = 0x1006c9c32bb0000, negotiated timeout = 60000
2023-03-29 15:04:58.412 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x1006c9c32bb0000 with negotiated timeout 60000 for client /10.2.45.168:51756
2023-03-29 15:04:58.432 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2023-03-29 15:04:58.439 INFO org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Zookeeper client connection state changed to: CONNECTED
2023-03-29 15:04:58.527 INFO org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Created a /kyuubi/serviceUri=localhost:10009;version=1.7.0;sequence=0000000000 on ZooKeeper for KyuubiServer uri: localhost:10009
2023-03-29 15:04:58.531 INFO org.apache.kyuubi.ha.client.KyuubiServiceDiscovery: Service[KyuubiServiceDiscovery] is started.
2023-03-29 15:04:58.531 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Service[KyuubiTBinaryFrontend] is started.
2023-03-29 15:04:58.531 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiServer] is started.
2023-03-29 15:04:58.548 INFO org.apache.kyuubi.Utils: Loading Kyuubi properties from /opt/kyuubi/conf/kyuubi-defaults.conf
2023-03-29 15:05:40.191 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2023-03-29 15:05:40.203 INFO org.apache.kyuubi.session.KyuubiSessionManager: Opening session for anonymous@127.0.0.1
2023-03-29 15:05:40.217 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.frontend.bind.port' has been deprecated in Kyuubi v1.4.0 and may be removed in the future. Use kyuubi.frontend.thrift.binary.bind.port instead
2023-03-29 15:05:40.270 INFO org.apache.kyuubi.operation.log.OperationLog: Creating operation log file /opt/kyuubi/work/server_operation_logs/1d0aff22-2bd5-4370-aea4-71c5f154a79c/81487221-3089-424c-8f5a-74cd72e11879
2023-03-29 15:05:40.293 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[81487221-3089-424c-8f5a-74cd72e11879]: PENDING_STATE -> RUNNING_STATE, statement:
LaunchEngine
2023-03-29 15:05:40.298 INFO org.apache.kyuubi.session.KyuubiSessionManager: anonymous's session with SessionHandle [1d0aff22-2bd5-4370-aea4-71c5f154a79c] is opened, current opening sessions 1
2023-03-29 15:05:40.301 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2023-03-29 15:05:40.302 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=kyuubi-88df75948-4v5gx:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@3a73db06
2023-03-29 15:05:40.308 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-88df75948-4v5gx/10.2.45.168:2181. Will not attempt to authenticate using SASL (unknown error)
2023-03-29 15:05:40.314 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to kyuubi-88df75948-4v5gx/10.2.45.168:2181, initiating session
2023-03-29 15:05:40.314 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.2.45.168:46058
2023-03-29 15:05:40.321 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.2.45.168:46058
2023-03-29 15:05:40.332 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x1006c9c32bb0001 with negotiated timeout 60000 for client /10.2.45.168:46058
2023-03-29 15:05:40.336 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-88df75948-4v5gx/10.2.45.168:2181, sessionid = 0x1006c9c32bb0001, negotiated timeout = 60000
2023-03-29 15:05:40.337 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2023-03-29 15:05:40.362 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: unable to return groups for user anonymous
org.apache.hadoop.security.ShellBasedUnixGroupsMapping$PartialGroupNameException: The user name 'anonymous' is not found. id: ‘anonymous’: no such user
id: ‘anonymous’: no such user

    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:387) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:321) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:270) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.get(LocalCache.java:3962) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946) ~[hadoop-client-runtime-3.3.4.jar:1.1.1]
    at org.apache.hadoop.security.Groups.getGroups(Groups.java:228) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1734) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1722) ~[hadoop-client-api-3.3.4.jar:?]
    at org.apache.kyuubi.session.HadoopGroupProvider.groups(HadoopGroupProvider.scala:36) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.HadoopGroupProvider.primaryGroup(HadoopGroupProvider.scala:33) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.engine$lzycompute(KyuubiSessionImpl.scala:80) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.engine(KyuubiSessionImpl.scala:77) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.renewEngineCredentials(KyuubiSessionImpl.scala:240) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.engineCredentials$lzycompute(KyuubiSessionImpl.scala:75) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.engineCredentials(KyuubiSessionImpl.scala:75) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2(KyuubiSessionImpl.scala:126) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2$adapted(KyuubiSessionImpl.scala:123) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36) ~[kyuubi-ha_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:123) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSession.handleSessionException(KyuubiSession.scala:49) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:123) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:60) ~[kyuubi-server_2.12-1.7.0.jar:1.7.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_362]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_362]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_362]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
2023-03-29 15:05:40.392 WARN org.apache.kyuubi.session.HadoopGroupProvider: There is no group for anonymous, use the client user name as group directly
2023-03-29 15:05:40.427 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1006c9c32bb0001 type:create cxid:0x2 zxid:0x5 txntype:-1 reqpath:n/a Error Path:/kyuubi_1.7.0_USER_SPARK_SQL_lock/anonymous/default/locks Error:KeeperErrorCode = NoNode for /kyuubi_1.7.0_USER_SPARK_SQL_lock/anonymous/default/locks
2023-03-29 15:05:40.432 WARN org.apache.curator.utils.ZKPaths: The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2023-03-29 15:05:40.460 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1006c9c32bb0001 type:create cxid:0xd zxid:0xb txntype:-1 reqpath:n/a Error Path:/kyuubi_1.7.0_USER_SPARK_SQL_lock/anonymous/default/leases Error:KeeperErrorCode = NoNode for /kyuubi_1.7.0_USER_SPARK_SQL_lock/anonymous/default/leases
2023-03-29 15:05:40.506 INFO org.apache.kyuubi.engine.ProcBuilder: Creating anonymous's working directory at /opt/kyuubi/work/anonymous
2023-03-29 15:05:40.535 INFO org.apache.kyuubi.engine.EngineRef: Launching engine:
/opt/kyuubi/externals/spark-3.3.2-bin-hadoop3/bin/spark-submit \
    --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
    --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
    --conf spark.kyuubi.client.ipAddress=192.168.178.43 \
    --conf spark.kyuubi.client.version=1.7.0 \
    --conf spark.kyuubi.engine.submit.time=1680102340489 \
    --conf spark.kyuubi.frontend.protocols=THRIFT_BINARY \
    --conf spark.kyuubi.ha.addresses=kyuubi-88df75948-4v5gx:2181 \
    --conf spark.kyuubi.ha.engine.ref.id=1d0aff22-2bd5-4370-aea4-71c5f154a79c \
    --conf spark.kyuubi.ha.namespace=/kyuubi_1.7.0_USER_SPARK_SQL/anonymous/default \
    --conf spark.kyuubi.ha.zookeeper.auth.type=NONE \
    --conf spark.kyuubi.kubernetes.namespace=kyuubi \
    --conf spark.kyuubi.server.ipAddress=127.0.0.1 \
    --conf spark.kyuubi.session.connection.url=localhost:10009 \
    --conf spark.kyuubi.session.real.user=anonymous \
    --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_1d0aff22-2bd5-4370-aea4-71c5f154a79c \
    --conf spark.kubernetes.driver.label.kyuubi-unique-tag=1d0aff22-2bd5-4370-aea4-71c5f154a79c \
    --conf spark.master=k8s://https://172.20.0.1:443 \
    --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
    --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
    --proxy-user anonymous /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.7.0.jar
2023-03-29 15:05:40.550 INFO org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0
2023-03-29 15:08:41.793 WARN org.apache.kyuubi.engine.KubernetesApplicationOperation: Get Tag: 1d0aff22-2bd5-4370-aea4-71c5f154a79c Driver Pod In Kubernetes size: 0, we expect 1
2023-03-29 15:08:42.151 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
2023-03-29 15:08:42.153 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x1006c9c32bb0001
2023-03-29 15:08:42.161 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1006c9c32bb0001 closed
2023-03-29 15:08:42.161 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x1006c9c32bb0001
2023-03-29 15:08:42.161 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.2.45.168:46058 which had sessionid 0x1006c9c32bb0001
2023-03-29 15:08:42.168 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[81487221-3089-424c-8f5a-74cd72e11879]: RUNNING_STATE -> ERROR_STATE, time taken: 181.874 seconds
2023-03-29 15:08:42.289 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Received request of closing SessionHandle [1d0aff22-2bd5-4370-aea4-71c5f154a79c]
2023-03-29 15:08:42.291 INFO org.apache.kyuubi.session.KyuubiSessionManager: anonymous's session with SessionHandle [1d0aff22-2bd5-4370-aea4-71c5f154a79c] is closed, current opening sessions 0
2023-03-29 15:08:42.306 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Finished closing SessionHandle [1d0aff22-2bd5-4370-aea4-71c5f154a79c]
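
One thing worth noting in the engine launch command above: it passes `spark.kyuubi.kubernetes.namespace=kyuubi` (a Kyuubi-side setting), but never Spark's own `spark.kubernetes.namespace`, and the Forbidden error in the DBeaver logs below shows the pod watch hitting `namespaces/default`. As a hedged sketch (the file path comes from the log above; the service account name is an assumption for this deployment), pinning the standard Spark-on-K8s confs in `/opt/kyuubi/conf/kyuubi-defaults.conf` may route the engine into the intended namespace:

```properties
# Sketch for /opt/kyuubi/conf/kyuubi-defaults.conf — values are assumptions.
# Make the Spark engine driver/executors run in the kyuubi namespace
# instead of the cluster default.
spark.kubernetes.namespace=kyuubi
# Service account "kyuubi" is hypothetical; use whatever the chart creates.
spark.kubernetes.authenticate.driver.serviceAccountName=kyuubi
```

These are standard Spark Kubernetes properties, not Kyuubi-specific ones, so they are forwarded verbatim to `spark-submit`.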

DBeaver logs

Could not open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:10009: org.apache.kyuubi.KyuubiSQLException: 
The engine application has been terminated. Please check the engine log.
ApplicationInfo: (
name -> null,
state -> NOT_FOUND,
url -> null,
id -> null,
error -> null
)

    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$6(EngineRef.scala:230)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$6$adapted(EngineRef.scala:219)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$5(EngineRef.scala:219)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$5$adapted(EngineRef.scala:218)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:218)
    at org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient.tryWithLock(ZookeeperDiscoveryClient.scala:180)
    at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:166)
    at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:171)
    at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:266)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:266)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2(KyuubiSessionImpl.scala:147)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$2$adapted(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36)
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.session.KyuubiSession.handleSessionException(KyuubiSession.scala:49)
    at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:123)
    at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.lambda$run$2(WatchConnectionManager.java:126)
    at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
    at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
    at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onFailure(OkHttpWebSocketImpl.java:66)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
 See more: /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.$anonfun$start$1(ProcBuilder.scala:229)
    at java.lang.Thread.run(Thread.java:750)
.
FYI: The last 10 line(s) of log are:
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:08:40 INFO ShutdownHookManager: Shutdown hook called
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-5630ff13-cd25-4944-8c78-a3859e6cfb7e
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-fa6a2d02-82a7-4965-b7a8-487957b15a08
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:275)
    at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:264)
    at org.apache.kyuubi.engine.spark.SparkProcessBuilder.getError(SparkProcessBuilder.scala:37)
    ... 25 more
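
The `Forbidden` response on the executor pod watch (`GET .../namespaces/default/pods?...watch=true`) suggests the service account the engine runs under lacks RBAC permission to list/watch pods. A minimal Role/RoleBinding sketch — all names and the namespace are assumptions, adjust to the service account the Helm chart actually creates:

```yaml
# Hypothetical RBAC grant; service account name "kyuubi" is an assumption.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-engine
  namespace: kyuubi
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-engine
  namespace: kyuubi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spark-engine
subjects:
  - kind: ServiceAccount
    name: kyuubi
    namespace: kyuubi
```

Note that the failing request targets `namespaces/default`, so a grant in the `kyuubi` namespace alone will not help unless `spark.kubernetes.namespace` also points the engine there.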

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
 See more: /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.0
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.$anonfun$start$1(ProcBuilder.scala:229)
    at java.lang.Thread.run(Thread.java:750)
.
FYI: The last 10 line(s) of log are:
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:08:40 INFO ShutdownHookManager: Shutdown hook called
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-5630ff13-cd25-4944-8c78-a3859e6cfb7e
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-fa6a2d02-82a7-4965-b7a8-487957b15a08
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:275)
    at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:264)
    at org.apache.kyuubi.engine.spark.SparkProcessBuilder.getError(SparkProcessBuilder.scala:37)
    ... 25 more

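The traces above point at two separate problems: the engine is trying to manage executor pods in the `default` namespace using the `system:serviceaccount:kyuubi:kyuubi` service account (hence the repeated 403/Forbidden responses for pods, services, and configmaps), and no executor container image is configured (`Must specify the executor container image`). As a minimal sketch — the property keys are real Spark/Kyuubi configuration names, but the image tag and namespace values are assumptions for this particular setup and not verified against the chart — something like the following could go into `kyuubi-defaults.conf`:

```properties
# Sketch only: pin the engine to the namespace the service account lives in,
# and give Spark an executor image to launch.
kyuubi.kubernetes.namespace=kyuubi
spark.kubernetes.namespace=kyuubi
spark.kubernetes.container.image=apache/spark:3.3.2
spark.kubernetes.authenticate.driver.serviceAccountName=kyuubi
```

Alternatively, the `kyuubi` service account could be granted list/watch/create/delete on pods, services, and configmaps in `default` via a Role/RoleBinding, but pointing the engine at the `kyuubi` namespace keeps the chart's existing RBAC intact.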
Work logs

Also, this Spark error occurs in the engine log:

kyuubi@kyuubi-88df75948-4v5gx:/opt/kyuubi$ cat work/anonymous/kyuubi-spark-sql-engine.log.0 
23/03/29 15:05:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/03/29 15:05:44 INFO SignalRegister: Registering signal handler for TERM
23/03/29 15:05:44 INFO SignalRegister: Registering signal handler for HUP
23/03/29 15:05:44 INFO SignalRegister: Registering signal handler for INT
23/03/29 15:05:44 INFO HiveConf: Found configuration file null
23/03/29 15:05:44 INFO SparkContext: Running Spark version 3.3.2
23/03/29 15:05:44 INFO ResourceUtils: ==============================================================
23/03/29 15:05:44 INFO ResourceUtils: No custom resources configured for spark.driver.
23/03/29 15:05:44 INFO ResourceUtils: ==============================================================
23/03/29 15:05:44 INFO SparkContext: Submitted application: kyuubi_USER_SPARK_SQL_anonymous_default_1d0aff22-2bd5-4370-aea4-71c5f154a79c
23/03/29 15:05:44 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/03/29 15:05:44 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
23/03/29 15:05:44 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/03/29 15:05:44 INFO SecurityManager: Changing view acls to: kyuubi,anonymous
23/03/29 15:05:44 INFO SecurityManager: Changing modify acls to: kyuubi,anonymous
23/03/29 15:05:44 INFO SecurityManager: Changing view acls groups to: 
23/03/29 15:05:44 INFO SecurityManager: Changing modify acls groups to: 
23/03/29 15:05:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(kyuubi, anonymous); groups with view permissions: Set(); users  with modify permissions: Set(kyuubi, anonymous); groups with modify permissions: Set()
23/03/29 15:05:45 INFO Utils: Successfully started service 'sparkDriver' on port 34783.
23/03/29 15:05:45 INFO SparkEnv: Registering MapOutputTracker
23/03/29 15:05:45 INFO SparkEnv: Registering BlockManagerMaster
23/03/29 15:05:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/03/29 15:05:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/03/29 15:05:45 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/03/29 15:05:45 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3d826bde-2af8-4d6a-b559-f32254ee744e
23/03/29 15:05:45 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
23/03/29 15:05:45 INFO SparkEnv: Registering OutputCommitCoordinator
23/03/29 15:05:45 INFO Utils: Successfully started service 'SparkUI' on port 44019.
23/03/29 15:05:45 INFO SparkContext: Added JAR file:/opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.7.0.jar at spark://10.2.45.168:34783/jars/kyuubi-spark-sql-engine_2.12-1.7.0.jar with timestamp 1680102344497
23/03/29 15:05:45 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
23/03/29 15:05:46 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
23/03/29 15:05:46 WARN ExecutorPodsSnapshotsStoreImpl: Exception when notifying snapshot subscriber.
org.apache.spark.SparkException: Must specify the executor container image
        at org.apache.spark.deploy.k8s.features.BasicExecutorFeatureStep.$anonfun$executorContainerImage$1(BasicExecutorFeatureStep.scala:44)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.deploy.k8s.features.BasicExecutorFeatureStep.<init>(BasicExecutorFeatureStep.scala:44)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesExecutorBuilder.buildFromFeatures(KubernetesExecutorBuilder.scala:69)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$requestNewExecutors$1(ExecutorPodsAllocator.scala:398)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.requestNewExecutors(ExecutorPodsAllocator.scala:389)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$onNewSnapshots$35(ExecutorPodsAllocator.scala:349)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$onNewSnapshots$35$adapted(ExecutorPodsAllocator.scala:342)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.onNewSnapshots(ExecutorPodsAllocator.scala:342)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$start$3(ExecutorPodsAllocator.scala:120)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$start$3$adapted(ExecutorPodsAllocator.scala:120)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber.org$apache$spark$scheduler$cluster$k8s$ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber$$processSnapshotsInternal(ExecutorPodsSnapshotsStoreImpl.scala:138)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber.processSnapshots(ExecutorPodsSnapshotsStoreImpl.scala:126)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsSnapshotsStoreImpl.$anonfun$addSubscriber$1(ExecutorPodsSnapshotsStoreImpl.scala:81)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
23/03/29 15:05:46 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
23/03/29 15:05:46 WARN ExecutorPodsSnapshotsStoreImpl: Exception when notifying snapshot subscriber.
org.apache.spark.SparkException: Must specify the executor container image
        at org.apache.spark.deploy.k8s.features.BasicExecutorFeatureStep.$anonfun$executorContainerImage$1(BasicExecutorFeatureStep.scala:44)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.deploy.k8s.features.BasicExecutorFeatureStep.<init>(BasicExecutorFeatureStep.scala:44)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesExecutorBuilder.buildFromFeatures(KubernetesExecutorBuilder.scala:69)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$requestNewExecutors$1(ExecutorPodsAllocator.scala:398)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.requestNewExecutors(ExecutorPodsAllocator.scala:389)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$onNewSnapshots$35(ExecutorPodsAllocator.scala:349)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$onNewSnapshots$35$adapted(ExecutorPodsAllocator.scala:342)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.onNewSnapshots(ExecutorPodsAllocator.scala:342)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$start$3(ExecutorPodsAllocator.scala:120)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$start$3$adapted(ExecutorPodsAllocator.scala:120)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber.org$apache$spark$scheduler$cluster$k8s$ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber$$processSnapshotsInternal(ExecutorPodsSnapshotsStoreImpl.scala:138)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsSnapshotsStoreImpl$SnapshotsSubscriber$$anon$2.run(ExecutorPodsSnapshotsStoreImpl.scala:158)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
23/03/29 15:05:47 WARN WatchConnectionManager: Exec Failure: HTTP 403, Status: 403 - Forbidden
23/03/29 15:05:47 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
23/03/29 15:05:47 ERROR SparkContext: Error initializing SparkContext.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
        at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.lambda$run$2(WatchConnectionManager.java:126)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
        at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onFailure(OkHttpWebSocketImpl.java:66)
        at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
        at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
        at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
        at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
        Suppressed: java.lang.Throwable: waiting here
                at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:169)
                at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:180)
                at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:96)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:572)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:547)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:83)
                at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsWatchSnapshotSource.start(ExecutorPodsWatchSnapshotSource.scala:53)
                at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:109)
                at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
                at org.apache.spark.SparkContext.<init>(SparkContext.scala:595)
                at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
                at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
                at scala.Option.getOrElse(Option.scala:189)
                at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:253)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:326)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
                at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
                at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:422)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
                at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
                at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
                at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
                at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:05:47 INFO SparkUI: Stopped Spark web UI at http://10.2.45.168:44019
23/03/29 15:05:47 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
23/03/29 15:05:47 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
23/03/29 15:05:47 ERROR Utils: Uncaught exception in thread main
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/services?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:kyuubi:kyuubi" cannot list resource "services" in API group "" in the namespace "default".
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:610)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:502)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:133)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:415)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:404)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.deleteList(BaseOperation.java:537)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:455)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.$anonfun$stop$5(KubernetesClusterSchedulerBackend.scala:139)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.stop(KubernetesClusterSchedulerBackend.scala:140)
        at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:931)
        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2785)
        at org.apache.spark.SparkContext.$anonfun$stop$11(SparkContext.scala:2105)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:2105)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:695)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:253)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:326)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:05:47 ERROR Utils: Uncaught exception in thread main
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:kyuubi:kyuubi" cannot list resource "pods" in API group "" in the namespace "default".
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:610)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:502)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:133)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:415)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:404)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.deleteList(BaseOperation.java:537)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:455)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$stop$1(ExecutorPodsAllocator.scala:477)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.stop(ExecutorPodsAllocator.scala:478)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.stop(KubernetesClusterSchedulerBackend.scala:155)
        at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:931)
        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2785)
        at org.apache.spark.SparkContext.$anonfun$stop$11(SparkContext.scala:2105)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:2105)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:695)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:253)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:326)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:05:47 ERROR Utils: Uncaught exception in thread main
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/configmaps?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps is forbidden: User "system:serviceaccount:kyuubi:kyuubi" cannot list resource "configmaps" in API group "" in the namespace "default".
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:610)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:502)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:133)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:415)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:404)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.deleteList(BaseOperation.java:537)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:455)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.$anonfun$stop$7(KubernetesClusterSchedulerBackend.scala:162)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.stop(KubernetesClusterSchedulerBackend.scala:163)
        at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:931)
        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2785)
        at org.apache.spark.SparkContext.$anonfun$stop$11(SparkContext.scala:2105)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:2105)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:695)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:253)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:326)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:05:47 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/03/29 15:05:47 INFO MemoryStore: MemoryStore cleared
23/03/29 15:05:47 INFO BlockManager: BlockManager stopped
23/03/29 15:05:47 INFO BlockManagerMaster: BlockManagerMaster stopped
23/03/29 15:05:47 WARN MetricsSystem: Stopping a MetricsSystem that is not running
23/03/29 15:05:47 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/03/29 15:05:47 INFO SparkContext: Successfully stopped SparkContext
23/03/29 15:05:47 ERROR SparkSQLEngine: Failed to instantiate SparkSession: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/pods?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884%2Cspark-role%3Dexecutor&allowWatchBookmarks=true&watch=true. Message: Forbidden.
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
        at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.lambda$run$2(WatchConnectionManager.java:126)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
        at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onFailure(OkHttpWebSocketImpl.java:66)
        at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
        at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
        at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
        at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
        Suppressed: java.lang.Throwable: waiting here
                at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:169)
                at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:180)
                at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:96)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:572)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:547)
                at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:83)
                at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsWatchSnapshotSource.start(ExecutorPodsWatchSnapshotSource.scala:53)
                at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:109)
                at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
                at org.apache.spark.SparkContext.<init>(SparkContext.scala:595)
                at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)
                at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
                at scala.Option.getOrElse(Option.scala:189)
                at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:253)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:326)
                at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
                at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
                at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
                at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:422)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
                at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
                at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
                at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
                at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
                at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
                at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/03/29 15:08:40 INFO ShutdownHookManager: Shutdown hook called
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-5630ff13-cd25-4944-8c78-a3859e6cfb7e
23/03/29 15:08:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-fa6a2d02-82a7-4965-b7a8-487957b15a08
tnk-dev commented 1 year ago

Also, more generally, since I am very new to Kyuubi, could you point me to best practices for setting up user management?

Our vision as a data team:

Questions, therefore: where and how do we create users and passwords? Can we create user groups with permissions?

Initiated discussion

dnskr commented 1 year ago

23/03/29 15:05:47 ERROR Utils: Uncaught exception in thread main io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/api/v1/namespaces/default/services?labelSelector=spark-app-selector%3Dspark-de265d35dd144a23b24be9d979e75884. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:kyuubi:kyuubi" cannot list resource "services" in API group "" in the namespace "default". at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)

So, it says that the ServiceAccount used by Spark cannot list services in the namespace. I think you need to specify a ServiceAccount for Spark that can list services, i.e. it seems to be a Spark config issue.
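For illustration, a Role/RoleBinding along these lines (the resource names here are placeholders, not part of the chart) would grant the kyuubi ServiceAccount the permissions the Spark driver is missing, in its own namespace:

```yaml
# Hypothetical sketch: grant the kyuubi ServiceAccount the verbs the Spark
# driver needs on pods, services, and configmaps in the kyuubi namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-driver-role        # placeholder name
  namespace: kyuubi
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-driver-role-binding  # placeholder name
  namespace: kyuubi
subjects:
  - kind: ServiceAccount
    name: kyuubi
    namespace: kyuubi
roleRef:
  kind: Role
  name: spark-driver-role
  apiGroup: rbac.authorization.k8s.io
```

Note the error above shows Spark watching pods in namespace `default`, so alongside the RBAC fix you would also point `spark.kubernetes.namespace` at the namespace the Role actually covers.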

tnk-dev commented 1 year ago

OK, then my naive question would be: how do I configure the Spark settings in the Helm values.yaml?

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Default values for kyuubi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Kyuubi server numbers
replicaCount: 2

image:
  repository: apache/kyuubi
  pullPolicy: Always
  tag: 1.7.0-all

imagePullSecrets: []

# ServiceAccount used for Kyuubi create/list/delete pod in kubernetes
serviceAccount:
  create: true
  name: ~

rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["create", "list", "delete"]

probe:
  liveness:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 2
    failureThreshold: 10
    successThreshold: 1
  readiness:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 2
    failureThreshold: 10
    successThreshold: 1

server:
  # Thrift Binary protocol (HiveServer2 compatible)
  thriftBinary:
    enabled: true
    port: 10009
    service:
      type: ClusterIP
      port: "{{ .Values.server.thriftBinary.port }}"
      nodePort: ~
      annotations: {}

  # Thrift HTTP protocol (HiveServer2 compatible)
  thriftHttp:
    enabled: false
    port: 10010
    service:
      type: ClusterIP
      port: "{{ .Values.server.thriftHttp.port }}"
      nodePort: ~
      annotations: {}

  # REST API protocol (experimental)
  rest:
    enabled: false
    port: 10099
    service:
      type: ClusterIP
      port: "{{ .Values.server.rest.port }}"
      nodePort: ~
      annotations: {}

  # MySQL compatible text protocol (experimental)
  mysql:
    enabled: false
    port: 3309
    service:
      type: ClusterIP
      port: "{{ .Values.server.mysql.port }}"
      nodePort: ~
      annotations: {}

kyuubiConfDir: /opt/kyuubi/conf
kyuubiConf:
  # The value (templated string) is used for kyuubi-env.sh file
  # See https://kyuubi.apache.org/docs/latest/deployment/settings.html#environments for more details
  kyuubiEnv: ~

  # The value (templated string) is used for kyuubi-defaults.conf file
  # See https://kyuubi.apache.org/docs/latest/deployment/settings.html#kyuubi-configurations for more details
  kyuubiDefaults: ~

  # The value (templated string) is used for log4j2.xml file
  # See https://kyuubi.apache.org/docs/latest/deployment/settings.html#logging for more details
  log4j2: ~

# Environment variables (templated)
env: []
envFrom: []

# Additional volumes for Kyuubi pod (templated)
volumes: []
# Additional volumeMounts for Kyuubi container (templated)
volumeMounts: []

# Additional init containers for Kyuubi pod (templated)
initContainers: []
# Additional containers for Kyuubi pod (templated)
containers: []

resources: {}
  # Used to specify resource, default unlimited.
  # If you do want to specify resources:
  #   1. remove the curly braces after 'resources:'
  #   2. uncomment the following lines
  # limits:
  #   cpu: 4
  #   memory: 10Gi
  # requests:
  #   cpu: 2
  #   memory: 4Gi

# Constrain Kyuubi server pods to specific nodes
nodeSelector: {}
tolerations: []
affinity: {}

securityContext: {}
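Building on the chart's rbac section above, an override file could widen the rules and pass Spark settings through kyuubiConf.kyuubiDefaults. This is an untested sketch; the property names come from the Spark/Kyuubi docs, but the specific values (image tag, namespace) are assumptions to adapt:

```yaml
# values-override.yaml (hypothetical), applied with:
#   helm upgrade --install kyuubi ${KYUUBI_HOME}/charts/kyuubi -n kyuubi -f values-override.yaml
rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods", "services", "configmaps"]
      verbs: ["create", "get", "list", "watch", "delete"]

kyuubiConf:
  kyuubiDefaults: |
    kyuubi.kubernetes.namespace=kyuubi
    spark.kubernetes.namespace=kyuubi
    spark.kubernetes.container.image=apache/spark:v3.3.2
    spark.kubernetes.authenticate.driver.serviceAccountName=kyuubi
```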
tnk-dev commented 1 year ago

I'm trying to build a simple MVP/PoC with Kyuubi and Helm.

pan3793 commented 1 year ago

@dnskr It looks like we need to provide the Spark configuration in the Helm charts. We actually did such a thing internally; is it a good practice?

dnskr commented 1 year ago

@dnskr It looks like we need to provide the Spark configuration in the Helm charts. We actually did such a thing internally; is it a good practice?

I think it is a very good idea. It might be a bit complicated to create a basic yet flexible configuration, but I would be happy to try. Should we create a dedicated issue to discuss it? Also, it would be great if you could share the configuration properties you use for Spark, to help find the best minimal config.

pan3793 commented 1 year ago

Also, it would be great if you could share the configuration properties you use for Spark, to help find the best minimal config.

We maintain an internal Spark fork with additional K8s enhancement patches, e.g. external log service integration, window-based executor failure detection, Spark UI ingress exposure, etc. Apart from that, there are a few configurations that differ from Spark on YARN.

Here are some configurations I think are important.

On YARN we usually set this to 0, since many drivers may share the same NodeManager, but K8s disallows listening on port 0, and each driver has its own Pod IP anyway:

spark.ui.port=4040

Enable Prometheus metrics:

spark.metrics.conf.*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet
spark.metrics.conf.*.sink.prometheusServlet.path=/metrics/prometheus

Force-terminate the driver pod on OOM, in case the driver pod would otherwise hang forever:

spark.driver.extraJavaOptions=-XX:OnOutOfMemoryError="kill -9 %p"

Based on our internal testing, since Spark 3.2 zstd achieves the same performance and reduces shuffle-data disk usage by ~50% compared with the default lz4:

spark.io.compression.codec=zstd

We disable Netty direct memory usage entirely, to skip the executor memory check on startup; see https://github.com/apache/spark/pull/38901 for details:

spark.network.io.preferDirectBufs=false
spark.shuffle.io.preferDirectBufs=false
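Taken together, the suggestions above amount to a spark-defaults.conf fragment like the following (the settings are copied verbatim from the list above; whether each one suits a given cluster is a judgment call, not a recommendation from this thread):

```
spark.ui.port=4040
spark.metrics.conf.*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet
spark.metrics.conf.*.sink.prometheusServlet.path=/metrics/prometheus
spark.driver.extraJavaOptions=-XX:OnOutOfMemoryError="kill -9 %p"
spark.io.compression.codec=zstd
spark.network.io.preferDirectBufs=false
spark.shuffle.io.preferDirectBufs=false
```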
tnk-dev commented 1 year ago

@dnskr

Also it would be great if you can share configuration properties you use for Spark to find best minimal config.

For Delta Access on S3 I need the following:

spark.sql.extensions                         io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog              org.apache.spark.sql.delta.catalog.DeltaCatalog
spark.hadoop.fs.s3a.aws.credentials.provider org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
spark.hadoop.fs.s3a.impl                     org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key               ***
spark.hadoop.fs.s3a.secret.key               ***
spark.hadoop.fs.s3a.path.style.access        true
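In the Helm chart context, these same properties could go into the chart's kyuubiConf.kyuubiDefaults value so every launched engine picks them up. A sketch, assuming the chart shown earlier in this thread; injecting the access/secret keys from a Kubernetes Secret would be safer than plain text:

```yaml
kyuubiConf:
  kyuubiDefaults: |
    spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension
    spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog
    spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
    spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.hadoop.fs.s3a.path.style.access=true
    # access.key / secret.key deliberately elided; prefer a Secret-backed env var
```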
tnk-dev commented 1 year ago

Also, you would need a handful of JAR files for Spark, Delta, and AWS to work, matching the respective Spark version, placed in $SPARK_HOME/jars.

e.g. for Spark 3.3.2, what worked for me locally:

tnk-dev commented 1 year ago

Also, I would love to see an out-of-the-box LDAP Helm implementation that works seamlessly with Kyuubi! (Batteries included ;) ) As a data engineer, I have little knowledge of authentication and authorization systems.

jasonchrion commented 1 year ago

As you can see in the log output, you must specify the Spark Docker image. Also, under the K8s Role mechanism, the ServiceAccount kyuubi in namespace kyuubi has no pod permissions in namespace default. As a best practice, you should also specify a user.

You can redeploy Kyuubi with the rbac.rules in values.yaml set to resources ["pods", "configmaps", "services"] and verbs ["*"] (or the exact list ["create", "list", "delete", "get", "watch"]), and then start beeline like this:

bin/beeline -u 'jdbc:hive2://kyuubi-thrift-binary:10009/?spark.kubernetes.namespace=kyuubi;spark.kubernetes.container.image=apache/spark:v3.3.2' -n root

If everything is OK, you should see two Spark pods running.

As a NOT recommended alternative, you can also change the Role and RoleBinding to a ClusterRole and ClusterRoleBinding.

Please see the Spark on K8s configuration documentation as a reference.

@tnk-dev