apache / kyuubi

Apache Kyuubi is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.
https://kyuubi.apache.org/
Apache License 2.0

[Bug] Spark Configurations Via JDBC Connection URL #2936

Closed: hanna-liashchuk closed this issue 2 years ago

hanna-liashchuk commented 2 years ago

Describe the bug

It seems that passing Spark configurations via the JDBC connection URL has no effect. For example, I'm running

beeline -u jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g

but I can see from the logs that the Spark engine is launched without these configurations:

2022-06-22 21:36:17.656 INFO state.ConnectionStateManager: State change: CONNECTED
2022-06-22 21:36:17.690 INFO engine.EngineRef: Launching engine:
/opt/spark-3.2.1-bin-hadoop3.2/bin/spark-submit \
    --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
    --conf spark.driver.host=kyuubi-hs.kyuubi-test.svc \
    --conf spark.kubernetes.namespace=kyuubi-test \
    --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
    --conf spark.kyuubi.ha.zookeeper.quorum=zookeeper-connect.kafka.svc:2181 \
    --conf spark.driver.port=44104 \
    --conf spark.kyuubi.session.idle.timeout=PT5S \
    --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
    --conf spark.app.name=kyuubi_USER_SPARK_SQL_anonymous_default_c379bd55-a8fe-4838-8823-e4c2c7fb970b \
    --conf spark.decommission.enabled=true \
    --conf spark.kubernetes.authenticate.serviceAccountName=kyuubi-spark \
    --conf spark.kyuubi.ha.engine.ref.id=c379bd55-a8fe-4838-8823-e4c2c7fb970b \
    --conf spark.kubernetes.container.image=spark:spark3.2.1-hadoop3.2-delta1.2.1-scala2.12 \
    --conf spark.master=k8s://https://kubernetes.default.svc \
    --conf spark.yarn.tags=KYUUBI \
    --conf spark.kyuubi.ha.zookeeper.namespace=/kyuubi_USER_SPARK_SQL/anonymous/default \
    --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
    --conf spark.kubernetes.driver.pod.name=kyuubi-server-6964b97f8-xf6bg \
    --proxy-user anonymous /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar

I'm referring to this doc.

Affects Version(s)

1.4.1-incubating

Kyuubi Server Log Output

No response

Kyuubi Engine Log Output

22/06/22 21:36:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
2022-06-22 21:36:19.247 INFO util.SignalRegister: Registering signal handler for TERM
2022-06-22 21:36:19.248 INFO util.SignalRegister: Registering signal handler for HUP
2022-06-22 21:36:19.248 INFO util.SignalRegister: Registering signal handler for INT
2022-06-22 21:36:19.280 INFO conf.HiveConf: Found configuration file file:/opt/spark-3.2.1-bin-hadoop3.2/conf/hive-site.xml
2022-06-22 21:36:19.394 INFO spark.SparkContext: Running Spark version 3.2.1
2022-06-22 21:36:19.421 INFO resource.ResourceUtils: ==============================================================
2022-06-22 21:36:19.422 INFO resource.ResourceUtils: No custom resources configured for spark.driver.
2022-06-22 21:36:19.422 INFO resource.ResourceUtils: ==============================================================
2022-06-22 21:36:19.423 INFO spark.SparkContext: Submitted application: kyuubi_USER_SPARK_SQL_anonymous_default_c379bd55-a8fe-4838-8823-e4c2c7fb970b
2022-06-22 21:36:19.444 INFO resource.ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
2022-06-22 21:36:19.457 INFO resource.ResourceProfile: Limiting resource is cpus at 1 tasks per executor
2022-06-22 21:36:19.459 INFO resource.ResourceProfileManager: Added ResourceProfile id: 0
2022-06-22 21:36:19.515 INFO spark.SecurityManager: Changing view acls to: kyuubi,anonymous
2022-06-22 21:36:19.516 INFO spark.SecurityManager: Changing modify acls to: kyuubi,anonymous
2022-06-22 21:36:19.516 INFO spark.SecurityManager: Changing view acls groups to: 
2022-06-22 21:36:19.516 INFO spark.SecurityManager: Changing modify acls groups to: 
2022-06-22 21:36:19.517 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(kyuubi, anonymous); groups with view permissions: Set(); users  with modify permissions: Set(kyuubi, anonymous); groups with modify permissions: Set()
2022-06-22 21:36:19.730 INFO util.Utils: Successfully started service 'sparkDriver' on port 44104.
2022-06-22 21:36:19.759 INFO spark.SparkEnv: Registering MapOutputTracker
2022-06-22 21:36:19.791 INFO spark.SparkEnv: Registering BlockManagerMaster
2022-06-22 21:36:19.813 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2022-06-22 21:36:19.814 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
2022-06-22 21:36:19.817 INFO spark.SparkEnv: Registering BlockManagerMasterHeartbeat
2022-06-22 21:36:19.844 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-2a1d5526-04a0-4ed4-8668-b0296e8804d6
2022-06-22 21:36:19.866 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MiB
2022-06-22 21:36:19.882 INFO spark.SparkEnv: Registering OutputCommitCoordinator
2022-06-22 21:36:19.964 INFO util.log: Logging initialized @2125ms to org.sparkproject.jetty.util.log.Slf4jLog
2022-06-22 21:36:20.051 INFO server.Server: jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 1.8.0_332-b09
2022-06-22 21:36:20.073 INFO server.Server: Started @2234ms
2022-06-22 21:36:20.107 INFO server.AbstractConnector: Started ServerConnector@27cbfddf{HTTP/1.1, (http/1.1)}{0.0.0.0:35437}
2022-06-22 21:36:20.107 INFO util.Utils: Successfully started service 'SparkUI' on port 35437.
2022-06-22 21:36:20.130 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14faa38c{/jobs,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.133 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@759d81f3{/jobs/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.134 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5a4c638d{/jobs/job,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.136 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1f12e153{/jobs/job/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.137 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5a101b1c{/stages,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.137 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29f0802c{/stages/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.138 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60f2e0bd{/stages/stage,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.139 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@203dd56b{/stages/stage/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.139 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d64b553{/stages/pool,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.140 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1d3e6d34{/stages/pool/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.140 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26a94fa5{/storage,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.141 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2873d672{/storage/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.142 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@577f9109{/storage/rdd,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.142 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@757529a4{/storage/rdd/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.143 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c41d037{/environment,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.143 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ec77191{/environment/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.144 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1450078a{/executors,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.144 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@69c6161d{/executors/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.145 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e1792e7{/executors/threadDump,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.145 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3eb631b8{/executors/threadDump/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.155 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6bff19ff{/static,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.156 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b0ca5e1{/,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.157 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54dcbb9f{/api,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.158 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5bb8f9e2{/jobs/job/kill,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.158 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5f78de22{/stages/stage/kill,null,AVAILABLE,@Spark}
2022-06-22 21:36:20.160 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://kyuubi-hs.kyuubi-test.svc:35437
2022-06-22 21:36:20.173 INFO spark.SparkContext: Added JAR file:/opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar at spark://kyuubi-hs.kyuubi-test.svc:44104/jars/kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar with timestamp 1655922979386
2022-06-22 21:36:20.231 INFO k8s.SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
2022-06-22 21:36:21.129 INFO k8s.ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:21.150 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:21.173 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:21.188 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:21.244 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36477.
2022-06-22 21:36:21.244 INFO netty.NettyBlockTransferService: Server created on kyuubi-hs.kyuubi-test.svc:36477
2022-06-22 21:36:21.246 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2022-06-22 21:36:21.252 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:21.254 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:21.256 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, kyuubi-hs.kyuubi-test.svc, 36477, None)
2022-06-22 21:36:21.261 INFO storage.BlockManagerMasterEndpoint: Registering block manager kyuubi-hs.kyuubi-test.svc:36477 with 366.3 MiB RAM, BlockManagerId(driver, kyuubi-hs.kyuubi-test.svc, 36477, None)
2022-06-22 21:36:21.264 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, kyuubi-hs.kyuubi-test.svc, 36477, None)
2022-06-22 21:36:21.265 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, kyuubi-hs.kyuubi-test.svc, 36477, None)
2022-06-22 21:36:21.282 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3291b443{/metrics/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:24.277 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:24.279 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:24.280 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:25.117 INFO storage.BlockManagerMaster: Removal of executor 1 requested
2022-06-22 21:36:25.117 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 1
2022-06-22 21:36:25.122 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
2022-06-22 21:36:25.124 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
2022-06-22 21:36:25.124 INFO storage.BlockManagerMaster: Removal of executor 2 requested
2022-06-22 21:36:25.124 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 2
2022-06-22 21:36:25.298 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483646.
2022-06-22 21:36:25.300 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:25.301 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:27.285 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.17.4.68:34348) with ID 3,  ResourceProfileId 0
2022-06-22 21:36:27.386 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.17.4.68:33413 with 413.9 MiB RAM, BlockManagerId(3, 172.17.4.68, 33413, None)
2022-06-22 21:36:28.199 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
2022-06-22 21:36:28.199 INFO storage.BlockManagerMaster: Removal of executor 4 requested
2022-06-22 21:36:28.199 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 4
2022-06-22 21:36:28.326 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:28.327 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:28.329 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:32.225 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 5 from BlockManagerMaster.
2022-06-22 21:36:32.226 INFO storage.BlockManagerMaster: Removal of executor 5 requested
2022-06-22 21:36:32.226 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 5
2022-06-22 21:36:32.350 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:32.351 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:32.352 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:35.249 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 6 from BlockManagerMaster.
2022-06-22 21:36:35.249 INFO storage.BlockManagerMaster: Removal of executor 6 requested
2022-06-22 21:36:35.249 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 6
2022-06-22 21:36:35.370 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:35.371 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:35.372 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:39.276 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 7 from BlockManagerMaster.
2022-06-22 21:36:39.276 INFO storage.BlockManagerMaster: Removal of executor 7 requested
2022-06-22 21:36:39.277 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 7
2022-06-22 21:36:39.387 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:39.388 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:39.389 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:42.300 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 8 from BlockManagerMaster.
2022-06-22 21:36:42.300 INFO storage.BlockManagerMaster: Removal of executor 8 requested
2022-06-22 21:36:42.300 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 8
2022-06-22 21:36:42.406 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:42.408 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:42.409 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:46.325 INFO storage.BlockManagerMaster: Removal of executor 9 requested
2022-06-22 21:36:46.325 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 9 from BlockManagerMaster.
2022-06-22 21:36:46.325 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 9
2022-06-22 21:36:46.424 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:46.425 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:46.426 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:49.348 INFO storage.BlockManagerMaster: Removal of executor 10 requested
2022-06-22 21:36:49.348 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 10 from BlockManagerMaster.
2022-06-22 21:36:49.348 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 10
2022-06-22 21:36:49.440 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 1, sharedSlotFromPendingPods: 2147483647.
2022-06-22 21:36:49.442 INFO submit.KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : hive-site.xml
2022-06-22 21:36:49.443 INFO features.BasicExecutorFeatureStep: Adding decommission script to lifecycle
2022-06-22 21:36:51.120 INFO k8s.KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000000000(ns)
2022-06-22 21:36:51.283 INFO internal.SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir.
2022-06-22 21:36:51.429 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
2022-06-22 21:36:51.441 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2022-06-22 21:36:51.441 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
2022-06-22 21:36:51.893 INFO internal.SharedState: Warehouse path is 's3a://warehouse/'.
2022-06-22 21:36:51.915 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f5614f9{/SQL,null,AVAILABLE,@Spark}
2022-06-22 21:36:51.916 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6c3f1658{/SQL/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:51.917 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2de9ca6{/SQL/execution,null,AVAILABLE,@Spark}
2022-06-22 21:36:51.917 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2ee39e73{/SQL/execution/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:51.926 INFO k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.17.0.212:51570) with ID 11,  ResourceProfileId 0
2022-06-22 21:36:51.926 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@c4d2c44{/static/sql,null,AVAILABLE,@Spark}
2022-06-22 21:36:52.009 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.17.0.212:33063 with 413.9 MiB RAM, BlockManagerId(11, 172.17.0.212, 33063, None)
2022-06-22 21:36:54.512 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
2022-06-22 21:36:54.567 INFO conf.HiveConf: Found configuration file file:/opt/spark-3.2.1-bin-hadoop3.2/conf/hive-site.xml
2022-06-22 21:36:54.762 INFO client.HiveClientImpl: Warehouse location for Hive client (version 2.3.9) is s3a://warehouse/
2022-06-22 21:36:54.829 INFO hive.metastore: Trying to connect to metastore with URI thrift://hive-metastore-cs.hive-metastore:9083
2022-06-22 21:36:54.847 INFO hive.metastore: Opened a connection to metastore, current connections: 1
2022-06-22 21:36:54.865 WARN security.ShellBasedUnixGroupsMapping: unable to return groups for user anonymous
PartialGroupNameException The user name 'anonymous' is not found. id: ‘anonymous’: no such user
id: ‘anonymous’: no such user

        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
        at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:387)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:321)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:270)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.get(LocalCache.java:3962)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985)
        at org.apache.hadoop.thirdparty.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946)
        at org.apache.hadoop.security.Groups.getGroups(Groups.java:228)
        at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1734)
        at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1722)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:496)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3607)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3659)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3639)
        at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1563)
        at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1552)
        at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:396)
        at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
        at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:305)
        at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:236)
        at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:235)
        at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:285)
        at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:396)
        at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:224)
        at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:102)
        at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:224)
        at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:150)
        at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:140)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:45)
        at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$1(HiveSessionStateBuilder.scala:60)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:118)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:118)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listDatabases(SessionCatalog.scala:298)
        at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.listNamespaces(V2SessionCatalog.scala:205)
        at org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.listNamespaces(DelegatingCatalogExtension.java:116)
        at org.apache.spark.sql.execution.datasources.v2.ShowNamespacesExec.run(ShowNamespacesExec.scala:42)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:110)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:106)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:93)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:91)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.$anonfun$createSpark$4(SparkSQLEngine.scala:110)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.$anonfun$createSpark$4$adapted(SparkSQLEngine.scala:107)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:107)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:157)
        at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2022-06-22 21:36:54.875 INFO hive.metastore: Connected to metastore.
2022-06-22 21:36:55.199 INFO codegen.CodeGenerator: Code generated in 177.63998 ms
2022-06-22 21:36:55.251 INFO codegen.CodeGenerator: Code generated in 6.67447 ms
2022-06-22 21:36:55.428 INFO codegen.CodeGenerator: Code generated in 10.057525 ms
2022-06-22 21:36:55.494 INFO spark.SparkContext: Starting job: isEmpty at SparkSQLEngine.scala:110
2022-06-22 21:36:55.510 INFO scheduler.DAGScheduler: Got job 0 (isEmpty at SparkSQLEngine.scala:110) with 1 output partitions
2022-06-22 21:36:55.510 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (isEmpty at SparkSQLEngine.scala:110)
2022-06-22 21:36:55.510 INFO scheduler.DAGScheduler: Parents of final stage: List()
2022-06-22 21:36:55.511 INFO scheduler.DAGScheduler: Missing parents: List()
2022-06-22 21:36:55.515 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at isEmpty at SparkSQLEngine.scala:110), which has no missing parents
2022-06-22 21:36:55.596 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.6 KiB, free 366.3 MiB)
2022-06-22 21:36:55.623 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.5 KiB, free 366.3 MiB)
2022-06-22 21:36:55.624 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on kyuubi-hs.kyuubi-test.svc:36477 (size: 3.5 KiB, free: 366.3 MiB)
2022-06-22 21:36:55.627 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1478
2022-06-22 21:36:55.640 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at isEmpty at SparkSQLEngine.scala:110) (first 15 tasks are for partitions Vector(0))
2022-06-22 21:36:55.640 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
2022-06-22 21:36:55.673 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (172.17.4.68, executor 3, partition 0, PROCESS_LOCAL, 4645 bytes) taskResourceAssignments Map()
2022-06-22 21:36:55.875 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.4.68:33413 (size: 3.5 KiB, free: 413.9 MiB)
2022-06-22 21:36:56.557 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 894 ms on 172.17.4.68 (executor 3) (1/1)
2022-06-22 21:36:56.559 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
2022-06-22 21:36:56.564 INFO scheduler.DAGScheduler: ResultStage 0 (isEmpty at SparkSQLEngine.scala:110) finished in 1.038 s
2022-06-22 21:36:56.567 INFO scheduler.DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
2022-06-22 21:36:56.567 INFO scheduler.TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
2022-06-22 21:36:56.568 INFO scheduler.DAGScheduler: Job 0 finished: isEmpty at SparkSQLEngine.scala:110, took 1.074265 s
2022-06-22 21:36:56.596 INFO events.EventLoggingService: Service[EventLogging] is initialized.
2022-06-22 21:36:56.596 INFO events.EventLoggingService: Service[EventLogging] is started.
2022-06-22 21:36:56.600 INFO util.ThreadUtils: SparkSQLSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 60000 ms
2022-06-22 21:36:56.602 INFO operation.SparkSQLOperationManager: Service[SparkSQLOperationManager] is initialized.
2022-06-22 21:36:56.603 INFO session.SparkSQLSessionManager: Service[SparkSQLSessionManager] is initialized.
2022-06-22 21:36:56.603 INFO spark.SparkSQLBackendService: Service[SparkSQLBackendService] is initialized.
2022-06-22 21:36:56.630 INFO spark.SparkThriftBinaryFrontendService: Initializing SparkThriftBinaryFrontendService on host kyuubi-server-6964b97f8-xf6bg at port 40303 with [9, 999] worker threads
2022-06-22 21:36:56.711 INFO imps.CuratorFrameworkImpl: Starting
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.6.2--803c7f1a12f85978cb049af5e4ef23bd8b688715, built on 09/04/2020 12:44 GMT
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:host.name=kyuubi-server-6964b97f8-xf6bg
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_332
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Azul Systems, Inc.
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/zulu8-ca-amd64/jre
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/spark/conf/:/opt/spark/jars/derby-10.14.2.0.jar:/opt/spark/jars/kubernetes-model-scheduling-5.4.1.jar:/opt/spark/jars/paranamer-2.8.jar:/opt/spark/jars/delta-storage-1.2.1.jar:/opt/spark/jars/spark-hive-thriftserver_2.12-3.2.1.jar:/opt/spark/jars/parquet-hadoop-1.12.2.jar:/opt/spark/jars/kubernetes-model-admissionregistration-5.4.1.jar:/opt/spark/jars/commons-io-2.8.0.jar:/opt/spark/jars/commons-text-1.6.jar:/opt/spark/jars/oro-2.0.8.jar:/opt/spark/jars/commons-pool-1.5.4.jar:/opt/spark/jars/xbean-asm9-shaded-4.20.jar:/opt/spark/jars/curator-framework-2.13.0.jar:/opt/spark/jars/scala-xml_2.12-1.2.0.jar:/opt/spark/jars/json4s-ast_2.12-3.7.0-M11.jar:/opt/spark/jars/ivy-2.5.0.jar:/opt/spark/jars/mesos-1.4.0-shaded-protobuf.jar:/opt/spark/jars/ST4-4.0.4.jar:/opt/spark/jars/json4s-scalap_2.12-3.7.0-M11.jar:/opt/spark/jars/jersey-client-2.34.jar:/opt/spark/jars/hive-metastore-2.3.9.jar:/opt/spark/jars/aircompressor-0.21.jar:/opt/spark/jars/metrics-graphite-4.2.0.jar:/opt/spark/jars/kubernetes-model-batch-5.4.1.jar:/opt/spark/jars/leveldbjni-all-1.8.jar:/opt/spark/jars/spark-tags_2.12-3.2.1-tests.jar:/opt/spark/jars/jdo-api-3.0.1.jar:/opt/spark/jars/jackson-dataformat-yaml-2.12.3.jar:/opt/spark/jars/spark-sql_2.12-3.2.1.jar:/opt/spark/jars/hive-service-rpc-3.1.2.jar:/opt/spark/jars/zstd-jni-1.5.0-4.jar:/opt/spark/jars/wildfly-openssl-1.0.7.Final.jar:/opt/spark/jars/scala-collection-compat_2.12-2.1.1.jar:/opt/spark/jars/super-csv-2.2.0.jar:/opt/spark/jars/slf4j-api-1.7.30.jar:/opt/spark/jars/spark-repl_2.12-3.2.1.jar:/opt/spark/jars/kubernetes-model-common-5.4.1.jar:/opt/spark/jars/commons-collections-3.2.2.jar:/opt/spark/jars/snappy-java-1.1.8.4.jar:/opt/spark/jars/kubernetes-model-storageclass-5.4.1.jar:/opt/spark/jars/json-1.8.jar:/opt/spark/jars/jersey-hk2-2.34.jar:/opt/spark/jars/py4j-0.10.9.3.jar:/opt/spark/jars/jakarta.servlet-api-4.0.3.jar:/opt/spark/jars/jackson-mapper-asl-1.9.13.jar:/opt/spark/jars/jta-1.1.jar:/opt/spark/jars/parquet-format-structures-1.12.2.jar:/opt/spark/jars/hive-serde-2.3.9.jar:/opt/spark/jars/aws-java-sdk-bundle-1.11.901.jar:/opt/spark/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/opt/spark/jars/hadoop-yarn-server-web-proxy-3.3.1.jar:/opt/spark/jars/compress-lzf-1.0.3.jar:/opt/spark/jars/univocity-parsers-2.9.1.jar:/opt/spark/jars/okhttp-3.12.12.jar:/opt/spark/jars/spark-mllib_2.12-3.2.1.jar:/opt/spark/jars/spark-graphx_2.12-3.2.1.jar:/opt/spark/jars/jodd-core-3.5.2.jar:/opt/spark/jars/istack-commons-runtime-3.0.8.jar:/opt/spark/jars/core-1.1.2.jar:/opt/spark/jars/spark-launcher_2.12-3.2.1.jar:/opt/spark/jars/htrace-core4-4.1.0-incubating.jar:/opt/spark/jars/kubernetes-model-extensions-5.4.1.jar:/opt/spark/jars/spire-util_2.12-0.17.0.jar:/opt/spark/jars/scala-parser-combinators_2.12-1.1.2.jar:/opt/spark/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/spark/jars/kubernetes-model-policy-5.4.1.jar:/opt/spark/jars/stream-2.9.6.jar:/opt/spark/jars/json4s-core_2.12-3.7.0-M11.jar:/opt/spark/jars/hive-beeline-2.3.9.jar:/opt/spark/jars/jcl-over-slf4j-1.7.30.jar:/opt/spark/jars/hive-storage-api-2.7.2.jar:/opt/spark/jars/parquet-jackson-1.12.2.jar:/opt/spark/jars/transaction-api-1.1.jar:/opt/spark/jars/JTransforms-3.1.jar:/opt/spark/jars/threeten-extra-1.5.0.jar:/opt/spark/jars/jackson-core-asl-1.9.13.jar:/opt/spark/jars/parquet-common-1.12.2.jar:/opt/spark/jars/antlr-runtime-3.5.2.jar:/opt/spark/jars/kubernetes-model-networking-5.4.1.jar:/opt/spark/jars/hadoop-aws-3.3.1.jar:/opt/spark
/jars/hk2-utils-2.6.1.jar:/opt/spark/jars/hk2-locator-2.6.1.jar:/opt/spark/jars/jakarta.annotation-api-1.3.5.jar:/opt/spark/jars/jackson-annotations-2.12.3.jar:/opt/spark/jars/orc-shims-1.6.12.jar:/opt/spark/jars/kubernetes-model-core-5.4.1.jar:/opt/spark/jars/jackson-datatype-jsr310-2.11.2.jar:/opt/spark/jars/hive-common-2.3.9.jar:/opt/spark/jars/parquet-column-1.12.2.jar:/opt/spark/jars/commons-lang-2.6.jar:/opt/spark/jars/lz4-java-1.7.1.jar:/opt/spark/jars/kubernetes-model-discovery-5.4.1.jar:/opt/spark/jars/hive-jdbc-2.3.9.jar:/opt/spark/jars/gson-2.2.4.jar:/opt/spark/jars/xz-1.8.jar:/opt/spark/jars/json4s-jackson_2.12-3.7.0-M11.jar:/opt/spark/jars/netty-all-4.1.68.Final.jar:/opt/spark/jars/jline-2.14.6.jar:/opt/spark/jars/breeze-macros_2.12-1.2.jar:/opt/spark/jars/joda-time-2.10.10.jar:/opt/spark/jars/objenesis-2.6.jar:/opt/spark/jars/tink-1.6.0.jar:/opt/spark/jars/datanucleus-core-4.1.17.jar:/opt/spark/jars/kubernetes-model-certificates-5.4.1.jar:/opt/spark/jars/jaxb-runtime-2.3.2.jar:/opt/spark/jars/hive-cli-2.3.9.jar:/opt/spark/jars/osgi-resource-locator-1.0.3.jar:/opt/spark/jars/javolution-5.5.1.jar:/opt/spark/jars/spark-hive_2.12-3.2.1.jar:/opt/spark/jars/macro-compat_2.12-1.1.1.jar:/opt/spark/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/spark/jars/jersey-container-servlet-core-2.34.jar:/opt/spark/jars/flatbuffers-java-1.9.0.jar:/opt/spark/jars/scala-reflect-2.12.15.jar:/opt/spark/jars/opencsv-2.3.jar:/opt/spark/jars/bonecp-0.8.0.RELEASE.jar:/opt/spark/jars/commons-lang3-3.12.0.jar:/opt/spark/jars/hive-exec-2.3.9-core.jar:/opt/spark/jars/kubernetes-model-rbac-5.4.1.jar:/opt/spark/jars/metrics-core-4.2.0.jar:/opt/spark/jars/metrics-json-4.2.0.jar:/opt/spark/jars/annotations-17.0.0.jar:/opt/spark/jars/spark-sketch_2.12-3.2.1.jar:/opt/spark/jars/hive-shims-scheduler-2.3.9.jar:/opt/spark/jars/metrics-jvm-4.2.0.jar:/opt/spark/jars/spark-network-shuffle_2.12-3.2.1.jar:/opt/spark/jars/HikariCP-2.5.1.jar:/opt/spark/jars/guava-14.0.1.jar:/opt/spark/jars/curator-recipes-2.13.0.jar:/opt/spark/jars/kubernetes-model-coordination-5.4.1.jar:/opt/spark/jars/hive-shims-2.3.9.jar:/opt/spark/jars/janino-3.0.16.jar:/opt/spark/jars/commons-logging-1.1.3.jar:/opt/spark/jars/commons-crypto-1.1.0.jar:/opt/spark/jars/jul-to-slf4j-1.7.30.jar:/opt/spark/jars/spark-catalyst_2.12-3.2.1.jar:/opt/spark/jars/cats-kernel_2.12-2.1.1.jar:/opt/spark/jars/javassist-3.25.0-GA.jar:/opt/spark/jars/spark-yarn_2.12-3.2.1.jar:/opt/spark/jars/metrics-jmx-4.2.0.jar:/opt/spark/jars/log4j-1.2.17.jar:/opt/spark/jars/automaton-1.11-8.jar:/opt/spark/jars/arrow-memory-core-2.0.0.jar:/opt/spark/jars/arpack_combined_all-0.1.jar:/opt/spark/jars/velocity-1.5.jar:/opt/spark/jars/kubernetes-model-apiextensions-5.4.1.jar:/opt/spark/jars/commons-math3-3.4.1.jar:/opt/spark/jars/orc-mapreduce-1.6.12.jar:/opt/spark/jars/kryo-shaded-4.0.2.jar:/opt/spark/jars/breeze_2.12-1.2.jar:/opt/spark/jars/delta-core_2.12-1.2.1.jar:/opt/spark/jars/audience-annotations-0.5.0.jar:/opt/spark/jars/jersey-server-2.34.jar:/opt/spark/jars/hk2-api-2.6.1.jar:/opt/spark/jars/spire-macros_2.12-0.17.0.jar:/opt/spark/jars/hive-llap-common-2.3.9.jar:/opt/spark/jars/jaxb-api-2.2.11.jar:/opt/spark/jars/avro-1.10.2.jar:/opt/spark/jars/spark-tags_2.12-3.2.1.jar:/opt/spark/jars/orc-core-1.6.12.jar:/opt/spark/jars/okio-1.14.0.jar:/opt/spark/jars/arrow-format-2.0.0.jar:/opt/spark/jars/jackson-core-2.12.3.jar:/opt/spark/jars/commons-cli-1.2.jar:/opt/spark/jars/kubernetes-model-events-5.4.1.jar:/opt/spark/jars/zookeeper-3.6.2.jar:/opt/spark/jars/slf4j-log4j12-1.7.30.jar:/opt/spark/
jars/avro-mapred-1.10.2.jar:/opt/spark/jars/parquet-encoding-1.12.2.jar:/opt/spark/jars/avro-ipc-1.10.2.jar:/opt/spark/jars/scala-compiler-2.12.15.jar:/opt/spark/jars/javax.jdo-3.2.0-m3.jar:/opt/spark/jars/hive-vector-code-gen-2.3.9.jar:/opt/spark/jars/spark-network-common_2.12-3.2.1.jar:/opt/spark/jars/arrow-vector-2.0.0.jar:/opt/spark/jars/RoaringBitmap-0.9.0.jar:/opt/spark/jars/commons-compiler-3.0.16.jar:/opt/spark/jars/shims-0.9.0.jar:/opt/spark/jars/spark-kubernetes_2.12-3.2.1.jar:/opt/spark/jars/spark-core_2.12-3.2.1.jar:/opt/spark/jars/activation-1.1.1.jar:/opt/spark/jars/lapack-2.2.1.jar:/opt/spark/jars/spire_2.12-0.17.0.jar:/opt/spark/jars/kubernetes-model-metrics-5.4.1.jar:/opt/spark/jars/spark-kvstore_2.12-3.2.1.jar:/opt/spark/jars/logging-interceptor-3.12.12.jar:/opt/spark/jars/hive-shims-0.23-2.3.9.jar:/opt/spark/jars/kubernetes-model-autoscaling-5.4.1.jar:/opt/spark/jars/protobuf-java-2.5.0.jar:/opt/spark/jars/rocksdbjni-6.20.3.jar:/opt/spark/jars/jakarta.validation-api-2.0.2.jar:/opt/spark/jars/spark-mesos_2.12-3.2.1.jar:/opt/spark/jars/spire-platform_2.12-0.17.0.jar:/opt/spark/jars/blas-2.2.1.jar:/opt/spark/jars/scala-library-2.12.14.jar:/opt/spark/jars/curator-client-2.13.0.jar:/opt/spark/jars/httpclient-4.5.13.jar:/opt/spark/jars/zjsonpatch-0.3.0.jar:/opt/spark/jars/jtds-1.3.1.jar:/opt/spark/jars/scala-library-2.12.15.jar:/opt/spark/jars/hadoop-shaded-guava-1.1.1.jar:/opt/spark/jars/generex-1.0.2.jar:/opt/spark/jars/jersey-common-2.34.jar:/opt/spark/jars/kubernetes-model-node-5.4.1.jar:/opt/spark/jars/spark-mllib-local_2.12-3.2.1.jar:/opt/spark/jars/minlog-1.3.0.jar:/opt/spark/jars/datanucleus-rdbms-4.1.19.jar:/opt/spark/jars/antlr4-runtime-4.8.jar:/opt/spark/jars/libfb303-0.9.3.jar:/opt/spark/jars/commons-dbcp-1.4.jar:/opt/spark/jars/chill-java-0.10.0.jar:/opt/spark/jars/shapeless_2.12-2.3.3.jar:/opt/spark/jars/aopalliance-repackaged-2.6.1.jar:/opt/spark/jars/jackson-databind-2.12.3.jar:/opt/spark/jars/hive-shims-common-2.3.9.jar:/opt/spark/jars/commons-net-3.1.jar:/opt/spark/jars/libthrift-0.12.0.jar:/opt/spark/jars/snakeyaml-1.27.jar:/opt/spark/jars/jpam-1.1.jar:/opt/spark/jars/pyrolite-4.30.jar:/opt/spark/jars/commons-compress-1.21.jar:/opt/spark/jars/arpack-2.2.1.jar:/opt/spark/jars/jackson-module-scala_2.12-2.12.3.jar:/opt/spark/jars/jsr305-3.0.0.jar:/opt/spark/jars/jersey-container-servlet-2.34.jar:/opt/spark/jars/hadoop-client-api-3.3.1.jar:/opt/spark/jars/kubernetes-model-flowcontrol-5.4.1.jar:/opt/spark/jars/spark-unsafe_2.12-3.2.1.jar:/opt/spark/jars/jakarta.inject-2.6.1.jar:/opt/spark/jars/chill_2.12-0.10.0.jar:/opt/spark/jars/hadoop-client-runtime-3.3.1.jar:/opt/spark/jars/JLargeArrays-1.5.jar:/opt/spark/jars/kubernetes-client-5.4.1.jar:/opt/spark/jars/arrow-memory-netty-2.0.0.jar:/opt/spark/jars/spark-streaming_2.12-3.2.1.jar:/opt/spark/jars/zookeeper-jute-3.6.2.jar:/opt/spark/jars/commons-codec-1.15.jar:/opt/spark/jars/datanucleus-api-jdo-4.2.4.jar:/opt/spark/jars/algebra_2.12-2.0.1.jar:/opt/spark/jars/stax-api-1.0.1.jar:/opt/spark/jars/kubernetes-model-apps-5.4.1.jar:/opt/spark/jars/httpcore-4.4.14.jar
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.version=4.18.0-305.3.1.el8_4.x86_64
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:user.name=kyuubi
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/kyuubi
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/kyuubi/work/anonymous
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.memory.free=133MB
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.memory.max=910MB
2022-06-22 21:36:56.720 INFO zookeeper.ZooKeeper: Client environment:os.memory.total=366MB
2022-06-22 21:36:56.723 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=zookeeper-connect.kafka.svc:2181 sessionTimeout=60000 watcher=org.apache.kyuubi.shade.org.apache.curator.ConnectionState@37816ea6
2022-06-22 21:36:56.727 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-06-22 21:36:56.731 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
2022-06-22 21:36:56.737 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
2022-06-22 21:36:56.744 INFO client.EngineServiceDiscovery: Service[EngineServiceDiscovery] is initialized.
2022-06-22 21:36:56.744 INFO spark.SparkThriftBinaryFrontendService: Service[SparkThriftBinaryFrontendService] is initialized.
2022-06-22 21:36:56.745 INFO spark.SparkSQLEngine: Service[SparkSQLEngine] is initialized.
2022-06-22 21:36:56.748 INFO zookeeper.ClientCnxn: Opening socket connection to server zookeeper-connect.kafka.svc/10.10.179.151:2181.
2022-06-22 21:36:56.748 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2022-06-22 21:36:56.749 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.17.1.205:48112, server: zookeeper-connect.kafka.svc/10.10.179.151:2181
2022-06-22 21:36:56.751 INFO operation.SparkSQLOperationManager: Service[SparkSQLOperationManager] is started.
2022-06-22 21:36:56.751 INFO session.SparkSQLSessionManager: Service[SparkSQLSessionManager] is started.
2022-06-22 21:36:56.751 INFO spark.SparkSQLBackendService: Service[SparkSQLBackendService] is started.
2022-06-22 21:36:56.755 INFO zookeeper.ClientCnxn: Session establishment complete on server zookeeper-connect.kafka.svc/10.10.179.151:2181, session id = 0x1000000d4d500b0, negotiated timeout = 40000
2022-06-22 21:36:56.765 INFO state.ConnectionStateManager: State change: CONNECTED
2022-06-22 21:36:56.767 INFO client.EngineServiceDiscovery: Zookeeper client connection state changed to: CONNECTED
2022-06-22 21:36:56.804 INFO client.ServiceDiscovery: Created a /kyuubi_USER_SPARK_SQL/anonymous/default/serviceUri=kyuubi-server-6964b97f8-xf6bg:40303;version=1.4.1-incubating;refId=c379bd55-a8fe-4838-8823-e4c2c7fb970b;sequence=0000000036 on ZooKeeper for KyuubiServer uri: kyuubi-server-6964b97f8-xf6bg:40303
2022-06-22 21:36:56.806 INFO client.EngineServiceDiscovery: Service[EngineServiceDiscovery] is started.
2022-06-22 21:36:56.807 INFO spark.SparkThriftBinaryFrontendService: Service[SparkThriftBinaryFrontendService] is started.
2022-06-22 21:36:56.807 INFO spark.SparkSQLEngine: Service[SparkSQLEngine] is started.
2022-06-22 21:36:56.807 INFO spark.SparkThriftBinaryFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://kyuubi-server-6964b97f8-xf6bg:40303/
2022-06-22 21:36:56.814 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1aea759d{/kyuubi,null,AVAILABLE,@Spark}
2022-06-22 21:36:56.814 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ead245{/kyuubi/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:56.815 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1cc42abe{/kyuubi/session,null,AVAILABLE,@Spark}
2022-06-22 21:36:56.815 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@138d978e{/kyuubi/session/json,null,AVAILABLE,@Spark}
2022-06-22 21:36:56.826 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1bfb60b7{/kyuubi/stop,null,AVAILABLE,@Spark}
2022-06-22 21:36:56.832 INFO spark.SparkSQLEngine: 
    Spark application name: kyuubi_USER_SPARK_SQL_anonymous_default_c379bd55-a8fe-4838-8823-e4c2c7fb970b
          application ID:  spark-application-1655922981028
          application web UI: http://kyuubi-hs.kyuubi-test.svc:35437
          master: k8s://https://kubernetes.default.svc
          version: 3.2.1
          driver: [cpu: 1, mem: 1g]
          executor: [cpu: 2, mem: 1g, maxNum: 2]
    Start time: Wed Jun 22 21:36:19 EEST 2022

    User: anonymous (shared mode: USER)
    State: LATENT

2022-06-22 21:36:57.808 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on kyuubi-hs.kyuubi-test.svc:36477 in memory (size: 3.5 KiB, free: 366.3 MiB)
2022-06-22 21:36:57.816 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on 172.17.4.68:33413 in memory (size: 3.5 KiB, free: 413.9 MiB)
2022-06-22 21:36:57.820 INFO spark.SparkThriftBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2022-06-22 21:36:57.825 INFO session.SparkSQLSessionManager: Opening session for anonymous@172.17.1.205
2022-06-22 21:36:57.977 INFO session.SparkSQLSessionManager: anonymous's session with SessionHandle [904b6fb6-bd6d-4aa2-8ee3-acaffa133820] is opened, current opening sessions 1

Kyuubi Server Configurations

kyuubi.authentication           NONE
spark.master                                        k8s://https://kubernetes.default.svc
spark.driver.host                                   kyuubi-hs.kyuubi-test.svc
spark.driver.port                                   44104
spark.kubernetes.driver.pod.name                    $HOSTNAME
spark.kubernetes.container.image                    spark:spark3.2.1-hadoop3.2-delta1.2.1-scala2.12
spark.kubernetes.namespace                          kyuubi-test
spark.kubernetes.authenticate.serviceAccountName    kyuubi-spark
spark.decommission.enabled                          true

#delta
spark.sql.extensions                                io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog                     org.apache.spark.sql.delta.catalog.DeltaCatalog

kyuubi.frontend.thrift.binary.bind.host     0.0.0.0
kyuubi.frontend.thrift.binary.bind.port     10009
kyuubi.session.idle.timeout                 PT5S
#
kyuubi.ha.zookeeper.quorum  zookeeper-connect.kafka.svc:2181

Kyuubi Engine Configurations

No response

Additional context

No response

Are you willing to submit a PR?

github-actions[bot] commented 2 years ago

Hello @hanna-liashchuk, thanks for finding the time to report the issue! We really appreciate the community's efforts to improve Apache Kyuubi (Incubating).

pan3793 commented 2 years ago

beeline -u jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g

You should quote the JDBC URL with single quotes ('); otherwise the shell treats ; as a command separator and # as the start of a comment, so everything after the first ; never reaches beeline. We also recommend placing Spark configs after ? instead of #.
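
For example, the command from the report with the URL quoted and the configs moved after ? (an illustrative sketch only, reusing the host, port, and configs from the report; not verified against this deployment):

beeline -u 'jdbc:hive2://localhost:10009/default?spark.sql.shuffle.partitions=2;spark.executor.memory=5g'

With the quotes in place, the whole string is passed to beeline as a single -u argument instead of being split by the shell.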

Ref: https://kyuubi.apache.org/docs/latest/client/hive_jdbc.html