subbaraodna opened this issue 2 years ago
Yes, hostnames are possible...
What value of CORE_CONF_fs_defaultFS are you using, and can you otherwise start some other container in the Swarm network and simply ping namenode?
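For example, you could attach a throwaway container to the stack's overlay network and test name resolution (the network name `hadoop_default` is just a placeholder here, and the network has to have been created as attachable for a standalone container to join it):

```sh
# Start a throwaway container on the same overlay network as the stack
# and check that Swarm's DNS resolves the namenode service name.
docker run --rm -it --network hadoop_default busybox \
  sh -c "nslookup namenode && ping -c 3 namenode"
```

In this kind of setup the expected value would be something like `hdfs://namenode:9000`, matching the service name and port shown in the error.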
Thanks for your reply. I am facing this issue when I run it in my client's network; it works fine in my personal GCP account.
However, I am not sure how to nail down this networking problem.
When using Swarm and container service names, you'll need to verify that your overlay network and its DNS server are working:
https://docs.docker.com/engine/swarm/services/#connect-the-service-to-an-overlay-network
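For example (the network name and container ID below are placeholders; adjust them to your stack):

```sh
# Confirm the overlay network exists and that both services are attached to it
docker network inspect hadoop_default

# From inside the datanode container, check that Swarm's embedded DNS
# resolves the namenode service name (try `getent hosts namenode` if
# nslookup isn't installed in the image)
docker exec -it <datanode-container-id> nslookup namenode
```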
In a cluster-mode deployment using docker stack (Docker Swarm), the namenode and datanode are deployed, but I am getting the error `Problem connecting to server: namenode:9000` in the datanode. Can a hostname be used for the HDFS URI? Can you guide me on resolving this issue? Any support will be appreciated.
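For reference, a minimal sketch of the kind of stack file this refers to (image tags, network name, and the overlay settings are illustrative, not taken from the actual deployment; the port matches the error above):

```yaml
# docker-compose.yml excerpt for `docker stack deploy` (illustrative sketch)
version: "3.7"
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    environment:
      # "namenode" is the Swarm service name the datanode must resolve
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    networks:
      - hadoop
  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    environment:
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    networks:
      - hadoop
networks:
  hadoop:
    driver: overlay
    # attachable lets standalone debug containers join the network for testing
    attachable: true
```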