AhmetFurkanDEMIR closed this issue 1 year ago
It looks like it is using the hostname my-hdfs-namenode-0.my-hdfs-namenodes. That doesn't appear to be the default configuration. Are you using Docker Compose or Kubernetes? Docker Compose is the best supported setup at the moment.
I'm using kubeadm; I haven't made any changes to the configuration.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.92.81 my-hdfs-namenode-0.my-hdfs-namenodes.kube-system.svc.cluster.local my-hdfs-namenode-0
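As a side note, a hosts file like the one above can be sanity-checked offline to confirm the namenode FQDN actually maps to the expected address. This is only a sketch; parse_hosts is a hypothetical helper, not part of Gaffer or HDFS:

```python
# Sketch: parse /etc/hosts-style content and confirm the namenode FQDN
# (the name the datanodes try to resolve) is mapped. The content below
# is copied from the hosts file in this thread.
HOSTS = """\
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.92.81 my-hdfs-namenode-0.my-hdfs-namenodes.kube-system.svc.cluster.local my-hdfs-namenode-0
"""

def parse_hosts(text):
    """Map each hostname/alias to its address; comments and blanks ignored.

    The first mapping for a name wins, matching typical resolver behaviour.
    """
    mapping = {}
    for line in text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) < 2:
            continue
        addr, *names = fields
        for name in names:
            mapping.setdefault(name, addr)
    return mapping

hosts = parse_hosts(HOSTS)
fqdn = "my-hdfs-namenode-0.my-hdfs-namenodes.kube-system.svc.cluster.local"
print(hosts[fqdn])  # -> 192.168.92.81
```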
There were some breaking changes in Kubernetes which have caused some issues (#255, #320). We were using version 1.24 of Kubernetes and this needs upgrading. I suspect the issue you're having relates to the K8s config. We'll need to investigate this further.
I'd recommend using Docker Compose, or a fuller deployment of Gaffer, depending on your goals.
The document I used for Kubeadm installation: https://k8s-school.fr/resources/en/blog/kubeadm/
The method I use for HDFS installation:
helm repo add gaffer https://gchq.github.io/gaffer-docker
helm install my-hdfs gaffer/hdfs --version 2.0.0
Hello, I was able to solve the problem. It gave an error when I defined an external IP in the YAML file; when I deleted the IP and used port forwarding instead, the error went away.
Glad you got it working, thanks for getting back to us.
2023-09-26 10:31:45,498 ERROR datanode.DataNode: Initialization failed for Block pool BP-764979582-192.168.92.69-1695299543106 (Datanode Uuid edba61e9-518b-4c28-a4ac-c06d7b99cacf) service to my-hdfs-namenode-0.my-hdfs-namenodes/192.168.92.81:8021
Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.92.65, hostname=192.168.92.65): DatanodeRegistration(0.0.0.0:9866, datanodeUuid=edba61e9-518b-4c28-a4ac-c06d7b99cacf, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-f91cbe67-98f9-4d00-9d36-0d900bf26a59;nsid=243305407;c=1695299543106)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:1147)
I encountered this error and could not solve it. Can you help?
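For anyone hitting the same message: the "(ip=..., hostname=...)" pair in the log names the datanode address the namenode rejected because it could not be reverse-resolved to a hostname. A quick sketch for pulling that pair out of such a log line (rejected_datanode is a hypothetical helper, not part of HDFS):

```python
import re

# Log line copied from the DataNode error above, trimmed to the relevant part.
LOG = ("Datanode denied communication with namenode because hostname cannot "
       "be resolved (ip=192.168.92.65, hostname=192.168.92.65)")

def rejected_datanode(line):
    """Return the (ip, hostname) pair of the rejected datanode, or None."""
    m = re.search(r"\(ip=([^,]+), hostname=([^)]+)\)", line)
    return m.groups() if m else None

print(rejected_datanode(LOG))  # -> ('192.168.92.65', '192.168.92.65')
```

When hostname equals the raw IP, as here, the namenode had no DNS (or hosts-file) entry for the datanode's address, so each datanode address needs a resolvable name visible to the namenode pod.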