Closed · lordk911 closed this issue 3 years ago
Hi @lordk911
Have you configured the env HADOOP_CONF_DIR? It can be defined in $KYUUBI_HOME/conf/kyuubi-env.sh.
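For reference, a minimal kyuubi-env.sh entry would look something like the sketch below. The /etc/hadoop/conf path is the conventional location on HDP clusters and is an assumption here, not something stated in this thread:

```shell
# $KYUUBI_HOME/conf/kyuubi-env.sh
# Point Kyuubi at the Hadoop client configs (core-site.xml, yarn-site.xml, ...)
# so the engine picks up proxyuser and ResourceManager settings.
# /etc/hadoop/conf is the usual HDP location -- adjust for your cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf
```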
Yes, I've configured the env HADOOP_CONF_DIR in $KYUUBI_HOME/conf/kyuubi-env.sh, but I still see:
etry.RetryInvocationHandler: org.apache.hadoop.security.authorize.AuthorizationException: Unauthorized connection for super-user: bigtop from IP 10
Yes, I saw the AuthorizationException, but HADOOP_CONF_DIR/core-site.xml already has the property:
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>host-10-0-105-243,host-10-0-105-244,host-10-0-105-246,host-10-0-105-248,host-10-0-105-250</value>
</property>
After setting all the proxyuser-related properties to *, Kyuubi can submit the Spark application to YARN. But when I connect to it with beeline, I get this error:
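A note on why the wildcard worked: the AuthorizationException above names bigtop, not hive, as the super-user, so it is most likely the hadoop.proxyuser.bigtop.* entries that YARN checks here. A narrower fix than the blanket wildcard might look like this in core-site.xml (the property names are an inference from the log, not confirmed in this thread):

```xml
<!-- core-site.xml: the exception reads "Unauthorized connection for
     super-user: bigtop", so the bigtop proxyuser entries are the ones
     that matter; "*" is shown here only for illustration -->
<property>
  <name>hadoop.proxyuser.bigtop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.bigtop.groups</name>
  <value>*</value>
</property>
```

Restricting hosts to the actual Kyuubi server hostnames, as was done for the hive user above, would be the tighter production choice.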
[bigtop@host-10-0-105-243 ~]$ beeline -n bigtop -u 'jdbc:hive2://host-10-0-105-243:10009/'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://host-10-0-105-243:10009/
Connected to: Spark SQL (version 1.2.0)
Driver: Hive JDBC (version 3.1.0.3.1.4.0-315)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.4.0-315 by Apache Hive
0: jdbc:hive2://host-10-0-105-243:10009/> show databases;
Unexpected end of file when reading from HS2 server. The root cause might be too many concurrent connections. Please ask the administrator to check the number of active connections, and adjust hive.server2.thrift.max.worker.threads if applicable.
Error: org.apache.thrift.transport.TTransportException (state=08S01,code=0)
0: jdbc:hive2://host-10-0-105-243:10009/>
I'm using HDP 3.1.4, whose Hive version is 3.1.0. Should I compile the source code against Hive 3?
I am not sure, but please try a Hive beeline older than 2.3 first.
Thank you. Using $SPARK_HOME/bin/beeline instead of Hive's beeline, I can query data from Kyuubi.
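For anyone hitting the same TTransportException: the working invocation, reconstructed from the session above, is simply to use the beeline bundled with Spark (which ships an older Hive JDBC client) rather than HDP's Hive 3 beeline. Host and port below are copied from the log in this thread:

```shell
# Spark's bundled beeline speaks a Thrift protocol version that
# Kyuubi's frontend accepts; HDP's Hive 3.1 beeline does not.
$SPARK_HOME/bin/beeline -n bigtop -u 'jdbc:hive2://host-10-0-105-243:10009/'
```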
You are welcome
kyuubi-1.2.0-bin-without-spark Spark version 3.1.1
21/06/22 09:39:09 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://host-10-0-105-243:43946
21/06/22 09:39:09 INFO spark.SparkContext: Added JAR file:/data/soft/kyuubi/kyuubi-1.2.0-bin-without-spark/externals/engines/spark/kyuubi-spark-sql-engine-1.2.0.jar at spark://host-10-0-105-243:38443/jars/kyuubi-spark-sql-engine-1.2.0.jar with timestamp 1624325948895
21/06/22 09:39:10 INFO client.AHSProxy: Connecting to Application History server at host-10-0-105-244/10.0.105.244:10200
21/06/22 09:39:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
21/06/22 09:39:10 INFO retry.RetryInvocationHandler: org.apache.hadoop.security.authorize.AuthorizationException: Unauthorized connection for super-user: bigtop from IP 10.0.105.243, while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over rm2 after 1 failover attempts. Trying to failover after sleeping for 36516ms.
21/06/22 09:39:47 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm1
21/06/22 09:39:47 INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From host-10-0-105-243/10.0.105.243 to host-10-0-105-244:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over rm1 after 2 failover attempts. Trying to failover after sleeping for 22316ms.
I don't know what the problem is. Also, I found that port 8032 is not configured anywhere in my cluster.
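One possible reading of the log: 8032 is the stock Hadoop default for yarn.resourcemanager.address, and clients fall back to it when no explicit address is configured, which would suggest the client is not seeing the cluster's yarn-site.xml (Ambari-managed HDP clusters typically use port 8050 instead). In an RM HA setup the per-RM client addresses would look roughly like the sketch below; the RM IDs come from the failover log above, but the host-to-ID mapping and the port are assumptions about this cluster:

```xml
<!-- yarn-site.xml: explicit client addresses for an HA ResourceManager
     pair; hostnames and port 8050 are guesses, check Ambari for the
     real values on this cluster -->
<property>
  <name>yarn.resourcemanager.address.rm1</name>
  <value>host-10-0-105-244:8050</value>
</property>
<property>
  <name>yarn.resourcemanager.address.rm2</name>
  <value>host-10-0-105-243:8050</value>
</property>
```

If these are already set in the cluster's yarn-site.xml, it would be worth verifying that HADOOP_CONF_DIR in kyuubi-env.sh points at the directory that actually contains that file.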