big-data-europe / docker-hive

WARN jdbc.HiveConnection: Failed to connect to localhost:10000 #24

Open mat-ale opened 5 years ago

mat-ale commented 5 years ago

Hi,

this is my configuration:

hive-server:
    container_name: hive-server
    image: bde2020/hive:2.3.2-postgresql-metastore
    env_file:
        - ./hive_build/hadoop-hive.env
    environment:
        HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
        SERVICE_PRECONDITION: "hive-metastore:9083"
    ports:
        - "10000:10000"

hive-metastore:
    container_name: hive-metastore
    image: bde2020/hive:2.3.2-postgresql-metastore
    env_file:
        - ./hive_build/hadoop-hive.env
    command: /opt/hive/bin/hive --service metastore
    environment:
        SERVICE_PRECONDITION: "hadoop-namenode:50070 hadoop-datanode1:50075 hive-metastore-postgresql:5432"
    ports:
        - "9083:9083"

hive-metastore-postgresql:
    container_name: hive-metastore-postgresql
    image: bde2020/hive-metastore-postgresql:2.3.0
    ports:
        - "5433:5432"

Hive should also connect to two more HDFS containers (hadoop-namenode, hadoop-datanode1) that I have built and that are working fine on the expected ports.

When I run:

 docker-compose exec hive-server bash
 /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000

I get:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
19/04/30 12:21:53 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
Beeline version 2.3.2 by Apache Hive
beeline>

In the docker-compose logs I don't see any specific errors, so the containers seem to be running fine.
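
A quick way to narrow this down: "Connection refused" means nothing was accepting connections on port 10000 at that moment, so it is worth checking whether HiveServer2 is actually listening inside the container (a sketch; netstat/nc availability depends on the image):

# Is anything listening on 10000 inside the hive-server container?
docker-compose exec hive-server netstat -lnt | grep 10000

# Or probe the port directly:
docker-compose exec hive-server nc -z localhost 10000 && echo "HS2 is up"

# HiveServer2 can take a while to come up after the container starts; watch its logs:
docker-compose logs -f hive-server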

Any help on this please? Thanks

marcuslind90 commented 5 years ago

I'm experiencing similar issues; the README file is not explicit enough.

ntallapa12 commented 5 years ago

I am trying to connect from my local machine with:

beeline> !connect jdbc:hive2://hive-server:10000

I get an UnknownHostException. Please let us know how to make a Beeline or JDBC connection.
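
One likely cause (an assumption, not confirmed here): the service name hive-server only resolves inside the compose network, not from the host machine. From the host, the connection has to go through the published port instead:

# From the host machine, via the published port (10000:10000):
beeline -u jdbc:hive2://localhost:10000

# From another container on the same compose network, the service name resolves:
beeline -u jdbc:hive2://hive-server:10000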

ntallapa12 commented 5 years ago

networks:
    common-network:
        driver: overlay

Adding this to the docker-compose file resolved the issue for me.
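
For this to take effect, each service also needs to join the network; a minimal sketch against the services from the original post (the overlay driver assumes swarm mode; on a single host, bridge behaves the same way):

services:
    hive-server:
        # ...as configured above...
        networks:
            - common-network

    hive-metastore:
        networks:
            - common-network

networks:
    common-network:
        driver: overlay    # requires swarm mode; use "bridge" on a single host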

muzammil-irshad commented 4 years ago

@mat-ale did you resolve your issue?

purbanow commented 4 years ago

I have the same problem.

dhirendra31pandit commented 3 years ago

I have the same issue too and have not been able to fix it.

jessequinn commented 3 years ago

Any resolution?

dhirendra31pandit commented 3 years ago

I solved it by using the IP address instead of localhost. I was running in a VM, and the local desktop and the VM's localhost are two different machines. I hope this solves your issue.
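
If it helps, one way to find the address to use (a sketch; beeline is assumed to be on the PATH, and <container-ip> is a placeholder):

# IP address of the hive-server container on its Docker network:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hive-server

# Then connect with that address instead of localhost:
beeline -u jdbc:hive2://<container-ip>:10000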

jessequinn commented 3 years ago

The following should resolve the issue: beeline -u jdbc:hive2:// — literally. Do not include any host or port and let Beeline figure it out. This seems to work well (embedded mode); however, remote mode does not appear to work, even when configured with NOSASL.
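
The two modes side by side (embedded mode runs Hive in-process inside Beeline, so it bypasses the Thrift listener entirely, which is why it works even when port 10000 refuses connections):

# Embedded mode: no host or port, HiveServer2 runs inside the Beeline JVM
/opt/hive/bin/beeline -u jdbc:hive2://

# Remote mode: goes through the HiveServer2 Thrift service on port 10000
/opt/hive/bin/beeline -u jdbc:hive2://localhost:10000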

jessequinn commented 3 years ago

OK, it is now resolved.

The following needs to be done:

[startup.sh]

#!/bin/bash

# Create the HDFS directories Hive expects and make them group-writable
hadoop fs -mkdir       /tmp
hadoop fs -mkdir -p    /user/hive/warehouse
hadoop fs -chmod g+w   /tmp
hadoop fs -chmod g+w   /user/hive/warehouse

# Launch HiveServer2 in the foreground with console logging
cd $HIVE_HOME/bin
./hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console --hiveconf hive.server2.enable.doAs=false

You probably do not need --hiveconf hive.server2.thrift.port=10000, as I have also added it to hadoop-hive.env, but it doesn't hurt.

--hiveconf hive.root.logger=INFO,console gives me more detail about problems; this is how I tracked down this specific issue.

Add the following to both hadoop-hive.env and hadoop.env:

CORE_CONF_hadoop_proxyuser_hive_hosts=*

And add these to hadoop-hive.env:

HIVE_SITE_CONF_hive_server2_thrift_bind_host=0.0.0.0
HIVE_SITE_CONF_hive_server2_thrift_port=10000
HIVE_SITE_CONF_hive_metastore_event_db_notification_api_auth=false
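
These work because the image's entrypoint generates the config files from environment variables. My understanding (an assumption about the bde2020 images; check their entrypoint script to confirm) is that each HIVE_SITE_CONF_* variable becomes a property in hive-site.xml, with underscores turned into dots:

# Assumed mapping performed by the container's startup script:
#   HIVE_SITE_CONF_hive_server2_thrift_bind_host=0.0.0.0
# becomes, in /opt/hive/conf/hive-site.xml:
#   <property>
#     <name>hive.server2.thrift.bind.host</name>
#     <value>0.0.0.0</value>
#   </property>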

rusonding commented 3 years ago

Logging initialized using configuration in file:/opt/hive/conf/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache