Closed: CrazyBeeline closed this issue 2 months ago.
Hello @CrazyBeeline, thanks for taking the time to report the issue! We really appreciate the community's efforts to improve Apache Kyuubi.
This looks like a classpath issue. We have an out-of-the-box sandbox environment for testing purposes; you can try it and apply your configurations gradually to find the bad ones: https://github.com/awesome-kyuubi/hadoop-testing
kyuubi.engine.hive.java.options and kyuubi.engine.flink.java.options were set to empty values; removing them from kyuubi-defaults.conf makes it work well.
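In practice the workaround is simply not to define these keys at all in kyuubi-defaults.conf, rather than assigning them empty values (a sketch of the relevant fragment, not the reporter's full configuration):

```properties
# kyuubi-defaults.conf
# Leave these keys unset instead of assigning them empty values;
# an empty string here ends up verbatim in the engine launch command.
# kyuubi.engine.hive.java.options=
# kyuubi.engine.flink.java.options=
```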
@pan3793
> kyuubi.engine.hive.java.options and kyuubi.engine.flink.java.options were set to empty values; removing them from kyuubi-defaults.conf makes it work well.
Does this result in an extra : at the end of the classpath? Can you submit a PR to fix this?
hadoop-client-api-3.3.6.jar: org.apache.kyuubi.engine.hive.HiveSQLEngine
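The trailing : above is what you get when an empty extra-classpath entry is concatenated without filtering. A minimal Python sketch of that failure mode and the obvious fix (hypothetical helper names, not Kyuubi's actual builder code):

```python
def join_classpath(entries):
    """Naive join: blank entries survive and produce stray ':' separators."""
    return ":".join(entries)


def join_classpath_filtered(entries):
    """Drop blank entries before joining, so no trailing ':' appears."""
    return ":".join(e for e in entries if e and e.strip())


# "" stands in for an empty kyuubi.engine.hive.extra.classpath value
entries = ["kyuubi-hive-sql-engine.jar", "/etc/hadoop/conf", ""]
print(join_classpath(entries))           # ends with a stray ':'
print(join_classpath_filtered(entries))  # clean classpath, no trailing ':'
```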
> Does this result in an extra : at the end of the classpath? Can you submit a PR to fix this?
Sorry, my understanding was wrong.
The root cause is that the empty string configured in kyuubi.engine.flink.java.options is used as the main class, resulting in the "Could not find or load main class" error.
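This also explains why the copied command works in a shell but fails when launched by the server: a shell drops the empty word, while an argument list passed to a process builder preserves "" as a real argument, and java takes the first non-option argument as the main class. A hypothetical Python sketch of the assembly bug and fix (illustrative names only, not Kyuubi's actual ProcBuilder code):

```python
def build_engine_command(java_options, classpath, main_class):
    # Buggy assembly: the configured options string is appended even when
    # empty, so "" sits ahead of the real main class. Since java stops
    # option parsing at the first non-option argument, "" is taken as the
    # main class -> "Could not find or load main class".
    return ["java", "-Xmx1g", java_options, "-cp", classpath, main_class]


def build_engine_command_fixed(java_options, classpath, main_class):
    # Fixed assembly: split the options string into tokens; an empty or
    # blank string yields no tokens and is silently dropped.
    cmd = ["java", "-Xmx1g"]
    cmd += java_options.split()
    cmd += ["-cp", classpath, main_class]
    return cmd


buggy = build_engine_command("", "engine.jar", "org.apache.kyuubi.engine.flink.FlinkSQLEngine")
fixed = build_engine_command_fixed("", "engine.jar", "org.apache.kyuubi.engine.flink.FlinkSQLEngine")
print(buggy)  # contains an empty-string argument in the main-class position
print(fixed)  # no empty-string argument
```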
Test:
Describe the bug
./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root1 --hiveconf kyuubi.engine.type=HIVE_SQL --hiveconf kyuubi.engine.hive.deploy.mode=local
Launching the engine manually (shell), it works well:
/usr/java/jdk1.8.0_351-amd64/bin/java \
  -Xmx1g \
  -cp /usr/lib/kyuubi/externals/engines/hive/kyuubi-hive-sql-engine_2.12-1.9.1.jar:/usr/lib/hive/conf:/etc/hadoop/conf:/usr/lib/hadoop/etc/hadoop:/usr/lib/hive/lib/:/usr/lib/kyuubi/jars/commons-collections-3.2.2.jar:/usr/lib/kyuubi/jars/hadoop-client-runtime-3.3.6.jar:/usr/lib/kyuubi/jars/hadoop-client-api-3.3.6.jar: org.apache.kyuubi.engine.hive.HiveSQLEngine \
  --conf kyuubi.session.user=root1 \
  --conf kyuubi.engine.id=b1313aae-c609-4698-bc70-6581168961f8 \
  --conf hive.engine.name=kyuubi_USER_HIVE_SQL_root1_default_b1313aae-c609-4698-bc70-6581168961f8 \
  --conf hive.server2.thrift.resultset.default.fetch.size=1000 \
  --conf kyuubi.backend.engine.exec.pool.keepalive.time=PT1M \
  --conf kyuubi.backend.engine.exec.pool.shutdown.timeout=PT20S \
  --conf kyuubi.backend.engine.exec.pool.size=100 \
  --conf kyuubi.backend.engine.exec.pool.wait.queue.size=100 \
  --conf kyuubi.backend.server.exec.pool.keepalive.time=PT1M \
  --conf kyuubi.backend.server.exec.pool.shutdown.timeout=PT20S \
  --conf kyuubi.backend.server.exec.pool.size=100 \
  --conf kyuubi.backend.server.exec.pool.wait.queue.size=100 \
  --conf kyuubi.batch.application.check.interval=PT10S \
  --conf kyuubi.batch.application.starvation.timeout=PT3M \
  --conf kyuubi.batch.session.idle.timeout=PT6H \
  --conf kyuubi.client.ipAddress=192.168.1.110 \
  --conf kyuubi.client.version=1.9.1 \
  --conf kyuubi.engine.event.json.log.path=/var/lib/kyuubi/engine/event \
  --conf kyuubi.engine.flink.application.jars= \
  --conf kyuubi.engine.flink.extra.classpath= \
  --conf kyuubi.engine.flink.java.options= \
  --conf kyuubi.engine.flink.memory=1g \
  --conf kyuubi.engine.hive.deploy.mode=local \
  --conf kyuubi.engine.hive.event.loggers=JSON \
  --conf kyuubi.engine.hive.extra.classpath= \
  --conf kyuubi.engine.hive.java.options= \
  --conf kyuubi.engine.hive.memory=1g \
  --conf kyuubi.engine.pool.name=kyuubi-engine-pool \
  --conf kyuubi.engine.pool.selectPolicy=RANDOM \
  --conf kyuubi.engine.pool.size=-1 \
  --conf kyuubi.engine.session.initialize.sql= \
  --conf kyuubi.engine.share.level=USER \
  --conf kyuubi.engine.spark.event.loggers=SPARK \
  --conf kyuubi.engine.submit.time=1721366841647 \
  --conf kyuubi.engine.submit.timeout=PT30S \
  --conf kyuubi.engine.type=HIVE_SQL \
  --conf kyuubi.engine.ui.retainedSessions=200 \
  --conf kyuubi.engine.ui.retainedStatements=200 \
  --conf kyuubi.engine.ui.stop.enabled=true \
  --conf kyuubi.engine.yarn.cores=1 \
  --conf kyuubi.engine.yarn.java.options= \
  --conf kyuubi.engine.yarn.memory=1024 \
  --conf kyuubi.engine.yarn.queue=default \
  --conf kyuubi.engine.yarn.submit.timeout=PT1M \
  --conf kyuubi.event.async.pool.keepalive.time=PT1M \
  --conf kyuubi.event.async.pool.size=8 \
  --conf kyuubi.event.async.pool.wait.queue.size=100 \
  --conf kyuubi.frontend.connection.url.use.hostname=true \
  --conf kyuubi.frontend.max.message.size=104857600 \
  --conf kyuubi.frontend.max.worker.threads=999 \
  --conf kyuubi.frontend.min.worker.threads=9 \
  --conf kyuubi.frontend.protocols=THRIFT_BINARY,REST \
  --conf kyuubi.frontend.proxy.http.client.ip.header=X-Real-IP \
  --conf kyuubi.frontend.rest.jetty.stopTimeout=PT10S \
  --conf kyuubi.frontend.rest.max.worker.threads=999 \
  --conf kyuubi.frontend.thrift.binary.ssl.disallowed.protocols=SSLv2,SSLv3 \
  --conf kyuubi.frontend.thrift.binary.ssl.enabled=false \
  --conf kyuubi.frontend.thrift.max.message.size=104857600 \
  --conf kyuubi.frontend.thrift.max.worker.threads=999 \
  --conf kyuubi.frontend.thrift.min.worker.threads=9 \
  --conf kyuubi.frontend.thrift.worker.keepalive.time=PT1M \
  --conf kyuubi.ha.addresses=hadoop01:2181,hadoop02:2181,hadoop03:2181 \
  --conf kyuubi.ha.client.class=org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient \
  --conf kyuubi.ha.engine.ref.id=b1313aae-c609-4698-bc70-6581168961f8 \
  --conf kyuubi.ha.namespace=/kyuubi_1.9.1_USER_HIVE_SQL/root1/default \
  --conf kyuubi.ha.zookeeper.acl.enabled=false \
  --conf kyuubi.ha.zookeeper.auth.type=NONE \
  --conf kyuubi.ha.zookeeper.connection.base.retry.wait=1000 \
  --conf kyuubi.ha.zookeeper.connection.max.retries=3 \
  --conf kyuubi.ha.zookeeper.connection.max.retry.wait=30000 \
  --conf kyuubi.ha.zookeeper.connection.retry.policy=EXPONENTIAL_BACKOFF \
  --conf kyuubi.ha.zookeeper.connection.timeout=15000 \
  --conf kyuubi.ha.zookeeper.engine.auth.type=NONE \
  --conf kyuubi.ha.zookeeper.node.creation.timeout=PT2M \
  --conf kyuubi.ha.zookeeper.session.timeout=60000 \
  --conf kyuubi.metadata.cleaner.enabled=true \
  --conf kyuubi.metadata.cleaner.interval=PT30M \
  --conf kyuubi.metadata.max.age=PT128H \
  --conf kyuubi.metadata.recovery.threads=10 \
  --conf kyuubi.metadata.request.async.retry.enabled=true \
  --conf kyuubi.metadata.request.async.retry.queue.size=65536 \
  --conf kyuubi.metadata.request.async.retry.threads=10 \
  --conf kyuubi.metadata.request.retry.interval=PT5S \
  --conf kyuubi.metadata.store.class=org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore \
  --conf kyuubi.metrics.console.interval=PT20S \
  --conf kyuubi.metrics.enabled=false \
  --conf kyuubi.metrics.reporters= \
  --conf kyuubi.operation.query.timeout=3600000 \
  --conf kyuubi.operation.scheduler.pool=fair \
  --conf kyuubi.server.info.provider=ENGINE \
  --conf kyuubi.server.ipAddress=192.168.1.110 \
  --conf kyuubi.session.check.interval=PT5M \
  --conf kyuubi.session.close.on.disconnect=true \
  --conf kyuubi.session.connection.url=hadoop01:10009 \
  --conf kyuubi.session.engine.alive.timeout=PT2M \
  --conf kyuubi.session.engine.check.interval=PT1M \
  --conf kyuubi.session.engine.idle.timeout=PT30M \
  --conf kyuubi.session.engine.initialize.timeout=PT5M \
  --conf kyuubi.session.engine.launch.async=true \
  --conf kyuubi.session.engine.log.timeout=PT24H \
  --conf kyuubi.session.idle.timeout=PT6H \
  --conf kyuubi.session.real.user=root1 \
  --conf spark.cleaner.periodicGC.interval=5min \
  --conf spark.driver.cores=1 \
  --conf spark.driver.maxResultSize=1g \
  --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=30min \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.executorAllocationRatio=0.5 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  --conf spark.dynamicAllocation.initialExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=25 \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=false \
  --conf spark.dynamicAllocation.shuffleTracking.timeout=30min \
  --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s \
  --conf spark.hadoop.cacheConf=false \
  --conf spark.io.compression.lz4.blockSize=128kb \
  --conf spark.master=yarn \
  --conf spark.scheduler.allocation.file=hdfs:///user/spark/conf/kyuubi-fairscheduler.xml \
  --conf spark.scheduler.mode=FAIR \
  --conf spark.shuffle.file.buffer=1m \
  --conf spark.shuffle.io.backLog=8192 \
  --conf spark.shuffle.push.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.shuffle.service.index.cache.size=100m \
  --conf spark.shuffle.service.port=17337 \
  --conf spark.shuffle.service.removeShuffle=false \
  --conf spark.sql.adaptive.advisoryPartitionSizeInBytes=128M \
  --conf spark.sql.adaptive.autoBroadcastJoinThreshold=10MB \
  --conf spark.sql.adaptive.coalescePartitions.enabled=true \
  --conf spark.sql.adaptive.coalescePartitions.initialPartitionNum=8192 \
  --conf spark.sql.adaptive.coalescePartitions.minPartitionSize=1MB \
  --conf spark.sql.adaptive.coalescePartitions.parallelismFirst=true \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.sql.adaptive.forceOptimizeSkewedJoin=false \
  --conf spark.sql.adaptive.localShuffleReader.enabled=true \
  --conf spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled=true \
  --conf spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor=0.2 \
  --conf spark.sql.adaptive.skewJoin.enabled=true \
  --conf spark.sql.adaptive.skewJoin.skewedPartitionFactor=5 \
  --conf spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256MB \
  --conf spark.sql.autoBroadcastJoinThreshold=10MB \
  --conf spark.sql.hive.convertMetastoreOrc=true \
  --conf spark.sql.hive.metastore.jars=/usr/lib/hive/lib/ \
  --conf spark.sql.hive.metastore.version=3.1.3 \
  --conf spark.sql.orc.filterPushdown=true \
  --conf spark.sql.statistics.fallBackToHdfs=true \
  --conf spark.submit.deployMode=client
With kyuubi.engine.hive.deploy.mode=yarn, it works well:
./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root1 --hiveconf kyuubi.engine.type=HIVE_SQL --hiveconf kyuubi.engine.hive.deploy.mode=yarn
Affects Version(s)
1.9.1
Kyuubi Server Log Output
Kyuubi Engine Log Output
Kyuubi Server Configurations
Kyuubi Engine Configurations
Additional context
./beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n root --hiveconf kyuubi.engine.type=FLINK_SQL
The Flink engine also has the same problem.
Are you willing to submit PR?