apache / kyuubi

Apache Kyuubi is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.
https://kyuubi.apache.org/
Apache License 2.0

[Bug] 1.6.1 doesn't work with Spark on k8s #4203

Closed · hanna-liashchuk closed 1 year ago

hanna-liashchuk commented 1 year ago

Describe the bug

I'm testing the 1.6.1-incubating release on Kubernetes and found that it is completely broken: executors start and then fail after a couple of seconds.

The executor log is below:

++ id -u
+ myuid=185
++ id -g
+ mygid=1000
+ set +e
++ getent passwd 185
+ uidentry=spark:x:185:1000::/home/spark:/bin/sh
+ set -e
+ '[' -z spark:x:185:1000::/home/spark:/bin/sh ']'
+ '[' -z /usr/java/default ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=(${JAVA_HOME}/bin/java "${SPARK_EXECUTOR_JAVA_OPTS[@]}" -Xms$SPARK_EXECUTOR_MEMORY -Xmx$SPARK_EXECUTOR_MEMORY -cp "$SPARK_CLASSPATH:$SPARK_DIST_CLASSPATH" org.apache.spark.scheduler.cluster.k8s.KubernetesExecutorBackend --driver-url $SPARK_DRIVER_URL --executor-id $SPARK_EXECUTOR_ID --cores $SPARK_EXECUTOR_CORES --app-id $SPARK_APPLICATION_ID --hostname $SPARK_EXECUTOR_POD_IP --resourceProfileId $SPARK_RESOURCE_PROFILE_ID --podName $SPARK_EXECUTOR_POD_NAME)

+ exec /usr/bin/tini -s -- /usr/java/default/bin/java -Dspark.driver.port=38081 -Dspark.kyuubi.metrics.prometheus.port=10019 -Dspark.ui.port=0 -Xms1024m -Xmx1024m -cp '/opt/spark/conf::/opt/spark/jars/*:' org.apache.spark.scheduler.cluster.k8s.KubernetesExecutorBackend --driver-url spark://CoarseGrainedScheduler@172.17.16.142:38081 --executor-id 45 --cores 3 --app-id spark-application-1674731059727 --hostname 172.17.21.78 --resourceProfileId 0 --podName
Unrecognized options: --podName

Usage: org.apache.spark.scheduler.cluster.k8s.KubernetesExecutorBackend [options]

 Options are:
   --driver-url <driverUrl>
   --executor-id <executorId>
   --bind-address <bindAddress>
   --hostname <hostname>
   --cores <cores>
   --resourcesFile <fileWithJSONResourceInformation>
   --app-id <appid>
   --worker-url <workerUrl>
   --resourceProfileId <id>
   --podName <podName>
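
For context, the failure is visible in the exec line above: SPARK_EXECUTOR_POD_NAME is empty, and because the entrypoint expands it unquoted inside the CMD array, shell word splitting drops the empty value, leaving KubernetesExecutorBackend with a bare --podName and no argument. A minimal bash sketch of the same effect (values hypothetical):

#!/usr/bin/env bash
# An empty, unquoted variable disappears during word splitting,
# leaving a dangling --podName flag at the end of the command line.
SPARK_EXECUTOR_POD_NAME=""
CMD=(echo --driver-url spark://demo@10.0.0.1:38081 --podName $SPARK_EXECUTOR_POD_NAME)
"${CMD[@]}"    # prints: --driver-url spark://demo@10.0.0.1:38081 --podName
# Quoting it ("$SPARK_EXECUTOR_POD_NAME") would instead pass an explicit
# empty argument, which the backend could at least report distinctly.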

The last version that worked was 1.6.0-incubating, where the SPARK_EXECUTOR_POD_NAME variable was set to the value from spark.kubernetes.executor.podNamePrefix. NB: 1.6.0 doesn't work without the podNamePrefix parameter either, as the name generated by the Kyuubi server doesn't conform to the Kubernetes naming conventions. 1.6.1 doesn't work even with this parameter set.
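
For reference, Kubernetes requires pod names to be valid DNS-1123 names, and executor pod name prefixes are safest as plain DNS-1123 labels: lowercase alphanumerics and '-', starting and ending with an alphanumeric, at most 63 characters. A quick bash check for a candidate prefix (a sketch; the prefix value is just an example):

prefix="kyuubi-sql"
# DNS-1123 label: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$, at most 63 characters
if [[ ${#prefix} -le 63 && "$prefix" =~ ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ ]]; then
    echo "valid executor pod name prefix: $prefix"
else
    echo "invalid executor pod name prefix: $prefix"
fi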

Affects Version(s)

1.6.1

Kyuubi Server Log Output

++ id -u
+ myuid=10009
++ id -g
+ mygid=10009
+ set +e
++ getent passwd 10009
+ uidentry=kyuubi:x:10009:10009::/home/kyuubi:/bin/sh
+ set -e
+ '[' -z kyuubi:x:10009:10009::/home/kyuubi:/bin/sh ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ echo 'Non-spark-on-k8s command provided, proceeding in pass-through mode...'
Non-spark-on-k8s command provided, proceeding in pass-through mode...
+ CMD=("$@")
+ exec /usr/bin/tini -s -- kyuubi run
Warn: Not find kyuubi environment file /opt/kyuubi/conf/kyuubi-env.sh, using default ones...
JAVA_HOME: /usr/java/default
KYUUBI_HOME: /opt/kyuubi
KYUUBI_CONF_DIR: /opt/kyuubi/conf
KYUUBI_LOG_DIR: /opt/kyuubi/logs
KYUUBI_PID_DIR: /opt/kyuubi/pid
KYUUBI_WORK_DIR_ROOT: /opt/kyuubi/work
FLINK_HOME: 
FLINK_ENGINE_HOME: /opt/kyuubi/externals/engines/flink
SPARK_HOME: /opt/spark
SPARK_CONF_DIR: /opt/spark/conf
SPARK_ENGINE_HOME: /opt/kyuubi/externals/engines/spark
TRINO_ENGINE_HOME: /opt/kyuubi/externals/engines/trino
HIVE_ENGINE_HOME: /opt/kyuubi/externals/engines/hive
HADOOP_CONF_DIR: 
YARN_CONF_DIR: 
Starting org.apache.kyuubi.server.KyuubiServer
2023-02-01 18:30:54,534 main ERROR Filters contains invalid attributes "onMatch", "onMismatch"
2023-02-01 18:30:54.646 INFO org.apache.kyuubi.server.KyuubiServer: 
                  Welcome to
  __  __                           __
 /\ \/\ \                         /\ \      __
 \ \ \/'/'  __  __  __  __  __  __\ \ \____/\_\
  \ \ , <  /\ \/\ \/\ \/\ \/\ \/\ \\ \ '__`\/\ \
   \ \ \\`\\ \ \_\ \ \ \_\ \ \ \_\ \\ \ \L\ \ \ \
    \ \_\ \_\/`____ \ \____/\ \____/ \ \_,__/\ \_\
     \/_/\/_/`/___/> \/___/  \/___/   \/___/  \/_/
                /\___/
                \/__/

2023-02-01 18:30:54.652 INFO org.apache.kyuubi.server.KyuubiServer: Version: 1.6.1-incubating, Revision: 7524d817828eb61ce43c668dac7b4890fe96d339, Branch: HEAD, Java: 1.8, Scala: 2.12, Spark: 3.2.2, Hadoop: 3.3.4, Hive: 3.1.3, Flink: 1.15.1, Trino: 363
2023-02-01 18:30:54.656 INFO org.apache.kyuubi.server.KyuubiServer: Using Scala version 2.12.15, OpenJDK 64-Bit Server VM, 1.8.0_342
2023-02-01 18:30:54.663 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for TERM
2023-02-01 18:30:54.664 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for HUP
2023-02-01 18:30:54.665 INFO org.apache.kyuubi.util.SignalRegister: Registering signal handler for INT
2023-02-01 18:30:54.903 INFO org.apache.kyuubi.Utils: Loading Kyuubi properties from /opt/kyuubi/conf/kyuubi-defaults.conf
2023-02-01 18:30:55.068 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-02-01 18:30:55.175 INFO org.apache.kyuubi.server.KinitAuxiliaryService: Service[KinitAuxiliaryService] is initialized.
2023-02-01 18:30:55.324 INFO org.eclipse.jetty.util.log: Logging initialized @1462ms to org.eclipse.jetty.util.log.Slf4jLog
2023-02-01 18:30:55.398 INFO org.apache.kyuubi.metrics.PrometheusReporterService: Service[PrometheusReporterService] is initialized.
2023-02-01 18:30:55.400 INFO org.apache.kyuubi.metrics.JsonReporterService: Service[JsonReporterService] is initialized.
2023-02-01 18:30:55.400 INFO org.apache.kyuubi.metrics.MetricsSystem: Service[MetricsSystem] is initialized.
2023-02-01 18:30:55.405 INFO org.apache.kyuubi.util.ThreadUtils: KyuubiSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 60000 ms
2023-02-01 18:30:55.453 INFO org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2023-02-01 18:30:55.579 INFO org.apache.kyuubi.engine.YarnApplicationOperation: Successfully initialized yarn client: STARTED
2023-02-01 18:30:55.692 INFO org.apache.kyuubi.engine.KubernetesApplicationOperation: Start Initialize Kubernetes Client.
2023-02-01 18:30:55.692 WARN org.apache.kyuubi.engine.KubernetesApplicationOperation: Skip Initialize Kubernetes Client, because of Context not set.
2023-02-01 18:30:55.693 INFO org.apache.kyuubi.engine.KyuubiApplicationManager: Service[KyuubiApplicationManager] is initialized.
2023-02-01 18:30:55.781 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: Service hadoopfs does not require a token. Check your configuration to see if security is disabled or not. If security is enabled, some configurations of hadoopfs  might be missing, please check the configurations in  https://kyuubi.readthedocs.io/en/latest/security/hadoop_credentials_manager.html#required-security-configs
2023-02-01 18:30:55.783 INFO org.apache.hadoop.hive.conf.HiveConf: Found configuration file null
2023-02-01 18:30:55.950 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: Service hive does not require a token. Check your configuration to see if security is disabled or not. If security is enabled, some configurations of hive  might be missing, please check the configurations in  https://kyuubi.readthedocs.io/en/latest/security/hadoop_credentials_manager.html#required-security-configs
2023-02-01 18:30:55.953 WARN org.apache.kyuubi.credentials.HadoopCredentialsManager: No delegation token is required by services.
2023-02-01 18:30:55.953 INFO org.apache.kyuubi.credentials.HadoopCredentialsManager: Service[HadoopCredentialsManager] is initialized.
2023-02-01 18:30:55.958 INFO org.apache.kyuubi.operation.KyuubiOperationManager: Service[KyuubiOperationManager] is initialized.
2023-02-01 18:30:55.959 INFO org.apache.kyuubi.session.KyuubiSessionManager: Service[KyuubiSessionManager] is initialized.
2023-02-01 18:30:55.959 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiBackendService] is initialized.
2023-02-01 18:30:55.996 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Initializing KyuubiTBinaryFrontend on kyuubi-server-0.kyuubi-hs.kyuubi-dev.svc.cluster.local:10009 with [9, 999] worker threads
2023-02-01 18:30:56.081 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=kyuubi-server-0.kyuubi-hs.kyuubi-dev.svc.cluster.local
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.8.0_342
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Azul Systems, Inc.
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/zulu8-ca-amd64/jre
2023-02-01 18:30:56.088 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/kyuubi/jars/slf4j-api-1.7.35.jar:/opt/kyuubi/jars/swagger-integration-2.2.1.jar:[... long list of /opt/kyuubi/jars/*.jar entries elided ...]:/opt/kyuubi/jars/custom-ldap-auth_2.12-0.1.0-SNAPSHOT.jar:/opt/kyuubi/conf:
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=4.18.0-372.26.1.el8_6.x86_64
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=kyuubi
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/kyuubi
2023-02-01 18:30:56.089 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/kyuubi
2023-02-01 18:30:56.090 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=zookeeper-connect.kafka.svc:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@31fc71ab
2023-02-01 18:30:56.106 INFO org.apache.kyuubi.ha.client.KyuubiServiceDiscovery: Service[KyuubiServiceDiscovery] is initialized.
2023-02-01 18:30:56.107 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Service[KyuubiTBinaryFrontend] is initialized.
2023-02-01 18:30:56.108 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server zookeeper-connect.kafka.svc/10.10.179.151:2181. Will not attempt to authenticate using SASL (unknown error)
2023-02-01 18:30:56.110 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiServer] is initialized.
2023-02-01 18:30:56.112 INFO org.apache.kyuubi.server.KinitAuxiliaryService: Service[KinitAuxiliaryService] is started.
2023-02-01 18:30:56.114 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to zookeeper-connect.kafka.svc/10.10.179.151:2181, initiating session
2023-02-01 18:30:56.116 INFO org.eclipse.jetty.server.Server: jetty-9.4.48.v20220622; built: 2022-06-21T20:42:25.880Z; git: 6b67c5719d1f4371b33655ff2d047d24e171e49a; jvm 1.8.0_342-b07
2023-02-01 18:30:56.118 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server zookeeper-connect.kafka.svc/10.10.179.151:2181, sessionid = 0x100f5d5f47b52b3, negotiated timeout = 40000
2023-02-01 18:30:56.126 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2023-02-01 18:30:56.128 INFO org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Zookeeper client connection state changed to: CONNECTED
2023-02-01 18:30:56.154 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@24fb6a80{/,null,AVAILABLE}
2023-02-01 18:30:56.184 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@6f603e89{HTTP/1.1, (http/1.1)}{0.0.0.0:10019}
2023-02-01 18:30:56.185 INFO org.eclipse.jetty.server.Server: Started @2322ms
2023-02-01 18:30:56.185 INFO org.apache.kyuubi.metrics.PrometheusReporterService: Prometheus metrics HTTP server has started at http://172.17.16.175:10019/.
2023-02-01 18:30:56.185 INFO org.apache.kyuubi.metrics.PrometheusReporterService: Service[PrometheusReporterService] is started.
2023-02-01 18:30:56.186 INFO org.apache.kyuubi.metrics.JsonReporterService: Service[JsonReporterService] is started.
2023-02-01 18:30:56.186 INFO org.apache.kyuubi.metrics.MetricsSystem: Service[MetricsSystem] is started.
2023-02-01 18:30:56.189 INFO org.apache.kyuubi.engine.KyuubiApplicationManager: Service[KyuubiApplicationManager] is started.
2023-02-01 18:30:56.190 INFO org.apache.kyuubi.credentials.HadoopCredentialsManager: Service[HadoopCredentialsManager] is started.
2023-02-01 18:30:56.190 INFO org.apache.kyuubi.operation.KyuubiOperationManager: Service[KyuubiOperationManager] is started.
2023-02-01 18:30:56.190 INFO org.apache.kyuubi.session.KyuubiSessionManager: Service[KyuubiSessionManager] is started.
2023-02-01 18:30:56.190 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiBackendService] is started.
2023-02-01 18:30:56.191 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://172.17.16.175:10009/
2023-02-01 18:30:56.232 INFO org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Created a /kyuubi-dev/serviceUri=172.17.16.175:10009;version=1.6.1-incubating;sequence=0000000545 on ZooKeeper for KyuubiServer uri: 172.17.16.175:10009
2023-02-01 18:30:56.234 INFO org.apache.kyuubi.ha.client.KyuubiServiceDiscovery: Service[KyuubiServiceDiscovery] is started.
2023-02-01 18:30:56.235 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Service[KyuubiTBinaryFrontend] is started.
2023-02-01 18:30:56.235 INFO org.apache.kyuubi.server.KyuubiServer: Service[KyuubiServer] is started.
2023-02-01 18:31:19.826 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2023-02-01 18:31:19.834 INFO org.apache.kyuubi.session.KyuubiSessionManager: Opening session for g.liashchuk@172.17.17.219
2023-02-01 18:31:19.839 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.frontend.login.timeout' has been deprecated in Kyuubi v1.4.0 and may be removed in the future. Use kyuubi.frontend.thrift.login.timeout instead
2023-02-01 18:31:19.840 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.ha.zookeeper.namespace' has been deprecated in Kyuubi v1.6.0 and may be removed in the future. Use kyuubi.ha.namespace instead
2023-02-01 18:31:19.840 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.ha.zookeeper.quorum' has been deprecated in Kyuubi v1.6.0 and may be removed in the future. Use kyuubi.ha.addresses instead
2023-02-01 18:31:19.840 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.engine.connection.url.use.hostname' has been deprecated in Kyuubi v1.5.0 and may be removed in the future. Use kyuubi.frontend.connection.url.use.hostname instead
2023-02-01 18:31:19.841 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.frontend.bind.port' has been deprecated in Kyuubi v1.4.0 and may be removed in the future. Use kyuubi.frontend.thrift.binary.bind.port instead
2023-02-01 18:31:19.858 INFO org.apache.kyuubi.operation.log.OperationLog: Creating operation log file /opt/kyuubi/work/server_operation_logs/fe3a48de-0484-45be-96d3-aa6a62187b31/572f8e42-8b76-4eaf-8761-e94459a124d8
2023-02-01 18:31:19.861 INFO org.apache.kyuubi.operation.LaunchEngine: Processing g.liashchuk's query[572f8e42-8b76-4eaf-8761-e94459a124d8]: PENDING_STATE -> RUNNING_STATE, statement:
LaunchEngine
2023-02-01 18:31:19.861 INFO org.apache.kyuubi.session.KyuubiSessionManager: g.liashchuk's session with SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31] is opened, current opening sessions 1
2023-02-01 18:31:19.862 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2023-02-01 18:31:19.862 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=zookeeper-connect.kafka.svc:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@5ef02172
2023-02-01 18:31:19.865 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server zookeeper-connect.kafka.svc/10.10.179.152:2181. Will not attempt to authenticate using SASL (unknown error)
2023-02-01 18:31:19.866 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to zookeeper-connect.kafka.svc/10.10.179.152:2181, initiating session
2023-02-01 18:31:19.867 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server zookeeper-connect.kafka.svc/10.10.179.152:2181, sessionid = 0x200cbf568d052da, negotiated timeout = 40000
2023-02-01 18:31:19.867 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
2023-02-01 18:31:19.888 INFO org.apache.kyuubi.engine.EngineRef: Launching engine:
/opt/spark-3.2.2-bin-hadoop3.2/bin/spark-submit \
    --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
    --conf spark.kubernetes.executor.request.cores=3 \
    --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
    --conf spark.driver.host=172.17.16.175 \
    --conf spark.kyuubi.metrics.prometheus.path=/metrics \
    --conf spark.kyuubi.metrics.json.interval=PT10S \
    --conf spark.kubernetes.namespace=kyuubi-dev \
    --conf spark.eventLog.enabled=true \
    --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
    --conf spark.driver.maxResultSize=1g \
    --conf spark.kubernetes.executor.label.spark-component=spark-job \
    --conf spark.kyuubi.ha.zookeeper.quorum=zookeeper-connect.kafka.svc:2181 \
    --conf spark.kyuubi.session.idle.timeout=PT30M \
    --conf spark.eventLog.compression.codec=snappy \
    --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
    --conf spark.kyuubi.metrics.prometheus.port=10019 \
    --conf spark.kyuubi.engine.submit.time=1675269079873 \
    --conf spark.kubernetes.driver.label.spark-component=spark-job \
    --conf spark.kubernetes.driver.label.kyuubi-unique-tag=fe3a48de-0484-45be-96d3-aa6a62187b31 \
    --conf spark.ui.enabled=false \
    --conf spark.app.name=kyuubi_CONNECTION_SPARK_SQL_g.liashchuk_fe3a48de-0484-45be-96d3-aa6a62187b31 \
    --conf spark.dynamicAllocation.schedulerBacklogTimeout=2s \
    --conf spark.hadoop.fs.s3a.secret.key=******** \
    --conf spark.driver.memory=2g \
    --conf spark.decommission.enabled=true \
    --conf spark.hadoop.fs.s3a.access.key=******** \
    --conf spark.kyuubi.metrics.reporters=PROMETHEUS,JSON \
    --conf spark.kubernetes.memoryOverheadFactor=0.4 \
    --conf spark.kubernetes.authenticate.serviceAccountName=kyuubi-spark \
    --conf spark.kyuubi.ha.engine.ref.id=fe3a48de-0484-45be-96d3-aa6a62187b31 \
    --conf spark.kubernetes.container.image.pullPolicy=Always \
    --conf spark.dynamicAllocation.maxExecutors=3 \
    --conf spark.kubernetes.container.image=registry.fozzy.lan/palefat/spark:spark3.3.0-hadoop3-delta2.1.0-scala2.12 \
    --conf spark.dynamicAllocation.executorAllocationRatio=0.75 \
    --conf spark.kubernetes.executor.limit.cores=3 \
    --conf spark.driver.cores=1 \
    --conf spark.ui.prometheus.enabled=true \
    --conf spark.master=k8s://https://kubernetes.default.svc \
    --conf spark.kubernetes.executor.podNamePrefix=kyuubi-sql \
    --conf spark.kyuubi.engine.share.level=CONNECTION \
    --conf spark.kubernetes.container.image.pullSecrets=**** \
    --conf spark.kubernetes.allocation.maxPendingPods=1 \
    --conf spark.kyuubi.metrics.json.location=/metrics \
    --conf spark.kyuubi.ha.zookeeper.namespace=kyuubi-dev \
    --conf spark.kyuubi.metrics.enabled=true \
    --conf spark.eventLog.dir=file:///opt/kyuubi/spark-history \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.eventLog.compress=true \
    --conf spark.executor.cores=3 \
    --conf spark.kyuubi.client.ipAddress=172.17.17.219 \
    --conf spark.cleaner.periodicGC.interval=10min \
    --conf spark.kubernetes.dynamicAllocation.deleteGracePeriod=20s \
    --conf spark.dynamicAllocation.minExecutors=1 \
    --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
    --conf spark.kyuubi.engine.connection.url.use.hostname=false \
    --conf spark.kyuubi.ha.namespace=/kyuubi-dev_1.6.1-incubating_CONNECTION_SPARK_SQL/g.liashchuk/fe3a48de-0484-45be-96d3-aa6a62187b31 \
    --conf spark.eventLog.rolling.enabled=true \
    --conf spark.kyuubi.frontend.login.timeout=PT40S \
    --conf spark.kubernetes.driver.pod.name=kyuubi-server-0 \
    --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=g.liashchuk \
    --conf spark.executorEnv.SPARK_USER_NAME=g.liashchuk \
    --proxy-user g.liashchuk /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar
2023-02-01 18:31:19.894 INFO org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/g.liashchuk/kyuubi-spark-sql-engine.log.0
2023-02-01 18:31:19.961 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Client information for SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31]: ApplicationName = IntelliJ IDEA 2022.3.1
2023-02-01 18:31:20.204 INFO org.apache.kyuubi.operation.log.OperationLog: Creating operation log file /opt/kyuubi/work/server_operation_logs/fe3a48de-0484-45be-96d3-aa6a62187b31/9b02e2ce-78e2-4404-81d5-70c29eff7f94
2023-02-01 18:31:20.205 INFO org.apache.kyuubi.session.KyuubiSessionImpl: [g.liashchuk:172.17.17.219] SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31] - Starting to wait the launch engine operation finished
2023-02-01 18:34:21.157 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
2023-02-01 18:34:21.160 INFO org.apache.zookeeper.ZooKeeper: Session: 0x200cbf568d052da closed
2023-02-01 18:34:21.160 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x200cbf568d052da
2023-02-01 18:34:21.164 INFO org.apache.kyuubi.operation.LaunchEngine: Processing g.liashchuk's query[572f8e42-8b76-4eaf-8761-e94459a124d8]: RUNNING_STATE -> ERROR_STATE, time taken: 181.303 seconds
2023-02-01 18:34:21.165 ERROR org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Error executing statement: 
java.util.concurrent.ExecutionException: org.apache.kyuubi.KyuubiSQLException: Timeout(180000 ms, you can modify kyuubi.session.engine.initialize.timeout to change it) to launched SPARK_SQL engine with /opt/spark-3.2.2-bin-hadoop3.2/bin/spark-submit \
    [... same spark-submit arguments as above ...]
    --proxy-user g.liashchuk /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar. (false,No interface org.apache.kyuubi.engine.ApplicationOperation Service found in ServiceLoader for Some(k8s://https://kubernetes.default.svc))
    at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_342]
    at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_342]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$waitForEngineLaunched$1(KyuubiSessionImpl.scala:163) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$waitForEngineLaunched$1$adapted(KyuubiSessionImpl.scala:159) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at scala.Option.foreach(Option.scala:407) ~[scala-library-2.12.15.jar:?]
    at org.apache.kyuubi.session.KyuubiSessionImpl.waitForEngineLaunched(KyuubiSessionImpl.scala:159) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.runOperation(KyuubiSessionImpl.scala:149) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.AbstractSession.$anonfun$executeStatement$1(AbstractSession.scala:129) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.AbstractSession.withAcquireRelease(AbstractSession.scala:80) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.AbstractSession.executeStatement(AbstractSession.scala:126) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.service.AbstractBackendService.executeStatement(AbstractBackendService.scala:66) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.server.KyuubiServer$$anon$1.org$apache$kyuubi$server$BackendServiceMetric$$super$executeStatement(KyuubiServer.scala:111) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.server.BackendServiceMetric.$anonfun$executeStatement$1(BackendServiceMetric.scala:62) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.metrics.MetricsSystem$.timerTracing(MetricsSystem.scala:111) ~[kyuubi-metrics_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.server.BackendServiceMetric.executeStatement(BackendServiceMetric.scala:62) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.server.BackendServiceMetric.executeStatement$(BackendServiceMetric.scala:55) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.server.KyuubiServer$$anon$1.executeStatement(KyuubiServer.scala:111) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.service.TFrontendService.ExecuteStatement(TFrontendService.scala:240) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557) ~[hive-service-rpc-3.1.3.jar:3.1.3]
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542) ~[hive-service-rpc-3.1.3.jar:3.1.3]
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[libthrift-0.9.3.jar:0.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_342]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_342]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_342]
Caused by: org.apache.kyuubi.KyuubiSQLException: Timeout(180000 ms, you can modify kyuubi.session.engine.initialize.timeout to change it) to launched SPARK_SQL engine with /opt/spark-3.2.2-bin-hadoop3.2/bin/spark-submit \
    [... same spark-submit arguments as above ...]
    --proxy-user g.liashchuk /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar. (false,No interface org.apache.kyuubi.engine.ApplicationOperation Service found in ServiceLoader for Some(k8s://https://kubernetes.default.svc))
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:234) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:151) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:165) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:260) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at scala.Option.getOrElse(Option.scala:189) ~[scala-library-2.12.15.jar:?]
    at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:260) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:120) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1$adapted(KyuubiSessionImpl.scala:113) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36) ~[kyuubi-ha_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:113) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:49) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_342]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_342]
    ... 3 more
Caused by: org.apache.kyuubi.KyuubiSQLException: Failed to detect the root cause, please check /opt/kyuubi/work/g.liashchuk/kyuubi-spark-sql-engine.log.0 at server side if necessary. The last 10 line(s) of log are:
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/02/01 18:34:20 INFO DAGScheduler: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:48) failed in 142.895 s due to Stage cancelled because SparkContext was shut down
23/02/01 18:34:20 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
23/02/01 18:34:20 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
23/02/01 18:34:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-common_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.ProcBuilder.getError(ProcBuilder.scala:263) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.ProcBuilder.getError$(ProcBuilder.scala:256) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.spark.SparkProcessBuilder.getError(SparkProcessBuilder.scala:36) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.$anonfun$create$1(EngineRef.scala:237) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.tryWithLock(EngineRef.scala:151) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.create(EngineRef.scala:165) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.engine.EngineRef.$anonfun$getOrCreate$1(EngineRef.scala:260) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at scala.Option.getOrElse(Option.scala:189) ~[scala-library-2.12.15.jar:?]
    at org.apache.kyuubi.engine.EngineRef.getOrCreate(EngineRef.scala:260) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1(KyuubiSessionImpl.scala:120) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.$anonfun$openEngineSession$1$adapted(KyuubiSessionImpl.scala:113) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.ha.client.DiscoveryClientProvider$.withDiscoveryClient(DiscoveryClientProvider.scala:36) ~[kyuubi-ha_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.session.KyuubiSessionImpl.openEngineSession(KyuubiSessionImpl.scala:113) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at org.apache.kyuubi.operation.LaunchEngine.$anonfun$runInternal$2(LaunchEngine.scala:49) ~[kyuubi-server_2.12-1.6.1-incubating.jar:1.6.1-incubating]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_342]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_342]
    ... 3 more
2023-02-01 18:34:21.229 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Received request of closing SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31]
2023-02-01 18:34:21.231 INFO org.apache.kyuubi.session.KyuubiSessionManager: SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31] is closed, current opening sessions 0
2023-02-01 18:34:21.235 INFO org.apache.kyuubi.engine.ProcBuilder: Destroy the process, since waitCompletion is false.
2023-02-01 18:34:21.237 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Finished closing SessionHandle [fe3a48de-0484-45be-96d3-aa6a62187b31]
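
Possibly related: earlier in the server log KubernetesApplicationOperation reports "Skip Initialize Kubernetes Client, because of Context not set.", and the launch then fails with "No interface org.apache.kyuubi.engine.ApplicationOperation Service found in ServiceLoader for Some(k8s://https://kubernetes.default.svc)", which suggests the server never initialized a Kubernetes client for tracking the engine. If that is the trigger, setting kyuubi.kubernetes.context may be needed; a sketch (the context name is a placeholder for whatever your kubeconfig defines):

# Hypothetical: give the Kyuubi server an explicit kubeconfig context so
# KubernetesApplicationOperation can initialize its Kubernetes client.
cat >> /opt/kyuubi/conf/kyuubi-defaults.conf <<'EOF'
kyuubi.kubernetes.context=my-cluster-context
EOF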

Kyuubi Engine Log Output

23/02/01 18:31:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Missing log4j-defaults.properties
log4j:WARN No appenders could be found for logger (org.apache.kyuubi.util.SignalRegister).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/02/01 18:31:21 INFO SparkContext: Running Spark version 3.2.2
23/02/01 18:31:21 INFO ResourceUtils: ==============================================================
23/02/01 18:31:21 INFO ResourceUtils: No custom resources configured for spark.driver.
23/02/01 18:31:21 INFO ResourceUtils: ==============================================================
23/02/01 18:31:21 INFO SparkContext: Submitted application: kyuubi_CONNECTION_SPARK_SQL_g.liashchuk_fe3a48de-0484-45be-96d3-aa6a62187b31
23/02/01 18:31:21 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 3, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/02/01 18:31:21 INFO ResourceProfile: Limiting resource is cpus at 3 tasks per executor
23/02/01 18:31:21 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/02/01 18:31:21 INFO SecurityManager: Changing view acls to: kyuubi,g.liashchuk
23/02/01 18:31:21 INFO SecurityManager: Changing modify acls to: kyuubi,g.liashchuk
23/02/01 18:31:21 INFO SecurityManager: Changing view acls groups to: 
23/02/01 18:31:21 INFO SecurityManager: Changing modify acls groups to: 
23/02/01 18:31:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(kyuubi, g.liashchuk); groups with view permissions: Set(); users  with modify permissions: Set(kyuubi, g.liashchuk); groups with modify permissions: Set()
23/02/01 18:31:21 INFO Utils: Successfully started service 'sparkDriver' on port 34315.
23/02/01 18:31:21 INFO SparkEnv: Registering MapOutputTracker
23/02/01 18:31:21 INFO SparkEnv: Registering BlockManagerMaster
23/02/01 18:31:21 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/02/01 18:31:21 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/02/01 18:31:21 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/02/01 18:31:21 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7c31bdaf-f71b-48e4-a8a6-6a9cff0995c0
23/02/01 18:31:21 INFO MemoryStore: MemoryStore started with capacity 912.3 MiB
23/02/01 18:31:21 INFO SparkEnv: Registering OutputCommitCoordinator
23/02/01 18:31:21 INFO SparkContext: Added JAR file:/opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar at spark://172.17.16.175:34315/jars/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar with timestamp 1675269081462
23/02/01 18:31:21 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
23/02/01 18:31:22 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
23/02/01 18:31:22 INFO Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
23/02/01 18:31:22 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:22 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:22 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:22 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
23/02/01 18:31:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46367.
23/02/01 18:31:22 INFO NettyBlockTransferService: Server created on 172.17.16.175:46367
23/02/01 18:31:22 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/02/01 18:31:22 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.17.16.175, 46367, None)
23/02/01 18:31:22 INFO BlockManagerMasterEndpoint: Registering block manager 172.17.16.175:46367 with 912.3 MiB RAM, BlockManagerId(driver, 172.17.16.175, 46367, None)
23/02/01 18:31:22 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.17.16.175, 46367, None)
23/02/01 18:31:22 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.17.16.175, 46367, None)
23/02/01 18:31:22 INFO StatsdSink: StatsdSink started with prefix: ''
23/02/01 18:31:23 INFO RollingEventLogFilesWriter: Logging events to file:/opt/kyuubi/spark-history/eventlog_v2_spark-application-1675269082681/events_1_spark-application-1675269082681.snappy
23/02/01 18:31:23 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
23/02/01 18:31:23 INFO Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
23/02/01 18:31:23 WARN ExecutorAllocationManager: Dynamic allocation without a shuffle service is an experimental feature.
23/02/01 18:31:25 INFO BlockManagerMaster: Removal of executor 1 requested
23/02/01 18:31:25 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 1
23/02/01 18:31:25 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
23/02/01 18:31:26 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:26 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:26 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
23/02/01 18:31:29 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:29 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:29 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
[... the same request/remove cycle repeats for executors 2 through 8 ...]
23/02/01 18:31:52 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000000000(ns)
23/02/01 18:31:52 INFO SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir.
23/02/01 18:31:52 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
23/02/01 18:31:52 INFO MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
23/02/01 18:31:52 INFO MetricsSystemImpl: s3a-file-system metrics system started
23/02/01 18:31:53 INFO BlockManagerMasterEndpoint: Trying to remove executor 9 from BlockManagerMaster.
23/02/01 18:31:53 INFO BlockManagerMaster: Removal of executor 9 requested
23/02/01 18:31:53 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 9
23/02/01 18:31:53 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:53 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:53 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
23/02/01 18:31:53 INFO SharedState: Warehouse path is 's3a://warehouse/'.
23/02/01 18:31:55 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
23/02/01 18:31:55 INFO HiveConf: Found configuration file file:/opt/spark-3.2.2-bin-hadoop3.2/conf/hive-site.xml
23/02/01 18:31:56 WARN HiveConf: HiveConf of name hive.metastore.uri.selection does not exist
23/02/01 18:31:56 INFO BlockManagerMasterEndpoint: Trying to remove executor 10 from BlockManagerMaster.
23/02/01 18:31:56 INFO BlockManagerMaster: Removal of executor 10 requested
23/02/01 18:31:56 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 10
23/02/01 18:31:56 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.9) is s3a://warehouse/
23/02/01 18:31:56 INFO metastore: Trying to connect to metastore with URI thrift://hive-metastore-1.hive-metastore-hs.hive-metastore-dev.svc:9083
23/02/01 18:31:56 WARN metastore: Failed to connect to the MetaStore Server...
23/02/01 18:31:56 INFO metastore: Trying to connect to metastore with URI thrift://hive-metastore-0.hive-metastore-hs.hive-metastore-dev.svc:9083
23/02/01 18:31:56 INFO metastore: Opened a connection to metastore, current connections: 1
23/02/01 18:31:56 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:56 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:56 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
23/02/01 18:31:56 WARN ShellBasedUnixGroupsMapping: unable to return groups for user g.liashchuk
PartialGroupNameException The user name 'g.liashchuk' is not found. id: ‘g.liashchuk’: no such user
id: ‘g.liashchuk’: no such user

    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
    at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:387)
    at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:321)
    at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:270)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.get(LocalCache.java:3962)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985)
    at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946)
    at org.apache.hadoop.security.Groups.getGroups(Groups.java:228)
    at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1734)
    at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1722)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:496)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
    at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
    at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
    at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
    at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
    at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
    at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
    at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
    at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:45)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$1(HiveSessionStateBuilder.scala:60)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:118)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:118)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listDatabases(SessionCatalog.scala:298)
    at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.listNamespaces(V2SessionCatalog.scala:205)
    at org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.listNamespaces(DelegatingCatalogExtension.java:116)
    at org.apache.spark.sql.execution.datasources.v2.ShowNamespacesExec.run(ShowNamespacesExec.scala:42)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:97)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:93)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
    at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:93)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:78)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.$anonfun$initializeSparkSession$1(KyuubiSparkUtil.scala:48)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.$anonfun$initializeSparkSession$1$adapted(KyuubiSparkUtil.scala:41)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.initializeSparkSession(KyuubiSparkUtil.scala:41)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:207)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:272)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/02/01 18:31:56 INFO metastore: Connected to metastore.
23/02/01 18:31:56 INFO CodeGenerator: Code generated in 156.216707 ms
23/02/01 18:31:56 INFO CodeGenerator: Code generated in 7.374606 ms
23/02/01 18:31:57 INFO CodeGenerator: Code generated in 9.961461 ms
23/02/01 18:31:57 INFO SparkContext: Starting job: isEmpty at KyuubiSparkUtil.scala:48
23/02/01 18:31:57 INFO DAGScheduler: Got job 0 (isEmpty at KyuubiSparkUtil.scala:48) with 1 output partitions
23/02/01 18:31:57 INFO DAGScheduler: Final stage: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:48)
23/02/01 18:31:57 INFO DAGScheduler: Parents of final stage: List()
23/02/01 18:31:57 INFO DAGScheduler: Missing parents: List()
23/02/01 18:31:57 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:48), which has no missing parents
23/02/01 18:31:57 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.7 KiB, free 912.3 MiB)
23/02/01 18:31:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.6 KiB, free 912.3 MiB)
23/02/01 18:31:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.16.175:46367 (size: 3.6 KiB, free: 912.3 MiB)
23/02/01 18:31:57 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1478
23/02/01 18:31:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:48) (first 15 tasks are for partitions Vector(0))
23/02/01 18:31:57 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/02/01 18:31:59 INFO BlockManagerMaster: Removal of executor 11 requested
23/02/01 18:31:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 11 from BlockManagerMaster.
23/02/01 18:31:59 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 11
23/02/01 18:31:59 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 1.
23/02/01 18:31:59 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml,metrics.properties
23/02/01 18:31:59 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
[... the same request/remove cycle repeats for executors 12 through 46, with this warning every 15 seconds ...]
23/02/01 18:34:12 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
23/02/01 18:34:20 ERROR SparkSQLEngine: The Engine main thread was interrupted, possibly due to `createSpark` timeout. The `kyuubi.session.engine.initialize.timeout` is (180000 ms)  and submitted at 1675269079873.
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1000)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1308)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:242)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:258)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:187)
    at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:334)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:929)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:492)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:445)
    at org.apache.spark.sql.Dataset.$anonfun$isEmpty$1(Dataset.scala:604)
    at org.apache.spark.sql.Dataset.$anonfun$isEmpty$1$adapted(Dataset.scala:603)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
    at org.apache.spark.sql.Dataset.isEmpty(Dataset.scala:603)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.$anonfun$initializeSparkSession$1(KyuubiSparkUtil.scala:48)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.$anonfun$initializeSparkSession$1$adapted(KyuubiSparkUtil.scala:41)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.initializeSparkSession(KyuubiSparkUtil.scala:41)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:207)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:272)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/02/01 18:34:20 INFO DAGScheduler: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:48) failed in 142.895 s due to Stage cancelled because SparkContext was shut down
23/02/01 18:34:20 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
23/02/01 18:34:20 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
23/02/01 18:34:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
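
A note on the final ERROR above: the engine was killed because `createSpark` did not complete within `kyuubi.session.engine.initialize.timeout` (180000 ms), which was itself a consequence of every executor dying on startup. The timeout was not the root cause here, but for slow-starting clusters it can be raised in kyuubi-defaults.conf; a minimal sketch, with the value chosen purely for illustration:

# hypothetical example: allow the engine 5 minutes to initialize instead of the 3-minute default
kyuubi.session.engine.initialize.timeout    PT5M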

Kyuubi Server Configurations

# Kyuubi authentication
kyuubi.authentication              NONE

# Kyuubi Metrics
# https://kyuubi.readthedocs.io/en/latest/monitor/metrics.html
# https://kyuubi.apache.org/docs/latest/deployment/settings.html#metrics
kyuubi.metrics.enabled          true
kyuubi.metrics.reporters        PROMETHEUS,JSON
kyuubi.metrics.prometheus.path  /metrics
kyuubi.metrics.prometheus.port  10019
kyuubi.metrics.json.interval    PT10S
kyuubi.metrics.json.location    ${KYUUBI_METRICS_JSON_LOCATION}

# Kyuubi frontend
kyuubi.frontend.login.timeout               PT40S
kyuubi.frontend.thrift.binary.bind.host     ${POD_IP}
kyuubi.frontend.thrift.binary.bind.port     10009
kyuubi.session.idle.timeout                 PT30M

kyuubi.ha.zookeeper.quorum                  ${ZOOKEEPER_CONNECT}
kyuubi.ha.zookeeper.namespace               ${POD_NAMESPACE}
kyuubi.engine.connection.url.use.hostname   false
kyuubi.engine.share.level                   CONNECTION

# ============ Spark Config ============

spark.master                                        k8s://https://kubernetes.default.svc
spark.driver.host                                   ${POD_IP}
spark.driver.cores                                  1
spark.executor.cores                                3
spark.kubernetes.executor.limit.cores               3
spark.kubernetes.executor.request.cores             3
spark.driver.memory                                 2g
spark.driver.maxResultSize                          1g
spark.kubernetes.driver.pod.name                    ${POD_NAME}
spark.kubernetes.executor.podNamePrefix             kyuubi-sql
spark.kubernetes.container.image                    <REDACTED>/spark:spark3.3.0-hadoop3-delta2.1.0-scala2.12
spark.kubernetes.container.image.pullPolicy         Always
spark.kubernetes.container.image.pullSecrets        <REDACTED>
spark.kubernetes.namespace                          ${POD_NAMESPACE}
spark.kubernetes.authenticate.serviceAccountName    kyuubi-spark
spark.kubernetes.driver.label.spark-component       spark-job
spark.kubernetes.executor.label.spark-component     spark-job
spark.kubernetes.memoryOverheadFactor               0.4
spark.ui.prometheus.enabled                         true
spark.decommission.enabled                          true

# Dynamic allocation
spark.dynamicAllocation.enabled                     true
spark.dynamicAllocation.shuffleTracking.enabled     true
spark.dynamicAllocation.schedulerBacklogTimeout     2s
spark.dynamicAllocation.minExecutors                1
spark.dynamicAllocation.maxExecutors                3
spark.cleaner.periodicGC.interval                   10min
spark.dynamicAllocation.executorAllocationRatio     0.75
spark.kubernetes.dynamicAllocation.deleteGracePeriod    20s
spark.kubernetes.allocation.maxPendingPods          1

# Delta
spark.sql.extensions                                io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog                     org.apache.spark.sql.delta.catalog.DeltaCatalog

# TPC-DS
#spark.sql.catalog.tpcds                             org.apache.kyuubi.spark.connector.tpcds.TPCDSCatalog
#spark.jars                                          /opt/kyuubi/jars/kyuubi-spark-connector-tpcds_2.12-1.6.1-incubating.jar

# Log into Spark History Server
spark.eventLog.enabled                            true
spark.eventLog.dir                                file://${SPARK_EVENT_LOG_DIR}
spark.eventLog.compress                           true
spark.eventLog.compression.codec                  snappy
spark.eventLog.rolling.enabled                    true
spark.ui.enabled                                  false
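
A side note on the two allocation WARNs near the top of the driver log: the "spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid" message suggests `spark.executor.instances` is being injected somewhere below the floor of 1 (it is not set in this file), and the "Dynamic allocation without a shuffle service is an experimental feature" message is expected, since this setup relies on `spark.dynamicAllocation.shuffleTracking.enabled` rather than an external shuffle service. Neither warning caused the failure. Setting the instance count explicitly would silence the first one; a sketch, assuming one executor is the intended floor:

spark.executor.instances                            1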

Kyuubi Engine Configurations

No response

Additional context

No response

Are you willing to submit PR?

pan3793 commented 1 year ago

1.6.0 doesn't work without this podNamePrefix parameter as the name generated by the Kyuubi server doesn't conform to the k8s naming conventions. 1.6.1 doesn't work even with this parameter.

What's the exact error? Is it https://issues.apache.org/jira/browse/SPARK-40869?

hanna-liashchuk commented 1 year ago

@pan3793 nope, it's `Unrecognized options: --podName`. I've attached the log in the description.

pan3793 commented 1 year ago

1.6.0 doesn't work without this podNamePrefix parameter as the name generated by the Kyuubi server doesn't conform to the k8s naming conventions. 1.6.1 doesn't work even with this parameter.

I see you pasted the executor error logs, but could you please elaborate on the details? e.g. the Kyuubi-generated spark-submit command and the driver logs.

hanna-liashchuk commented 1 year ago

@pan3793 sorry, I've updated the ticket with the server and engine logs. Since we are running in k8s, the Spark driver pod == the Kyuubi server pod. If you know any other option, please share :)

hanna-liashchuk commented 1 year ago

Worth mentioning that we run not only Spark in Kubernetes, but the Kyuubi server itself as well.

hanna-liashchuk commented 1 year ago

So I missed the fact that my Kyuubi image has a different Spark version from the Spark driver image. This was the root cause, so I'm closing this issue. P.S. 1.6.1 seems to be working OK with k8s; even the executor name issue is gone. Good work! :)
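
For the record, the mismatch is visible in the driver log above: the Kyuubi side loads hive-site.xml from /opt/spark-3.2.2-bin-hadoop3.2/conf, while the executor image tag says spark3.3.0. A quick way to compare the Spark builds baked into the two images is a sketch like the following (the Kyuubi image name is a placeholder, and it assumes Docker plus the /opt/spark layout shown in the logs):

# hypothetical check: print the Spark version shipped inside each image
docker run --rm --entrypoint /opt/spark/bin/spark-submit <kyuubi-image> --version
docker run --rm --entrypoint /opt/spark/bin/spark-submit <REDACTED>/spark:spark3.3.0-hadoop3-delta2.1.0-scala2.12 --version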

pan3793 commented 1 year ago

Good to know your problem is resolved :)