rootsongjc opened this issue 7 years ago
This repo shows how to use HDFS as an ephemeral resource: essentially just scratch disk space for jobs like Spark and YARN. It wasn't intended to be a permanent datastore that can be accessed from outside of the K8S cluster.
Are you trying to put data into it from within the cluster (from Zeppelin) or from outside the cluster?
Yes, I know this HDFS cluster is only for experimentation.
I just ran the command hadoop fs -put data.txt / in a DataNode container.
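For context, that amounts to something like the following (a sketch; hdfs-datanode-0 is a hypothetical pod name for illustration):

```sh
# Run the upload from inside a DataNode pod.
kubectl exec -it hdfs-datanode-0 -- hadoop fs -put data.txt /

# Check whether the file actually landed in HDFS.
kubectl exec -it hdfs-datanode-0 -- hadoop fs -ls /
```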
After I started the HDFS cluster with Kubernetes, the DataNode registered with the NameNode using the host machine's IP address instead of its own, so the NameNode cannot communicate with the DataNode. In practice this HDFS cluster is unusable, because I can't put any data into it!
For example:

Pod IP: 172.30.10.22
Node IP: 172.20.0.115

The DataNode registers with the NameNode as 172.20.0.115:50010, but its real address is 172.30.10.22.
How can I fix this problem?
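A common workaround for this kind of address mismatch is to have HDFS address DataNodes by hostname rather than IP, for example by running the DataNodes as a StatefulSet behind a headless Service so each pod gets a stable, resolvable DNS name. A minimal hdfs-site.xml sketch follows; the property names are stock HDFS settings, but treating them as the fix for this repo's manifests is an assumption:

```xml
<!-- Sketch of hdfs-site.xml settings for hostname-based DataNode
     addressing inside a Kubernetes cluster. -->
<configuration>
  <property>
    <!-- Clients connect to DataNodes by hostname, so cluster DNS can
         resolve the name to the current pod IP. -->
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
  <property>
    <!-- DataNodes also talk to each other by hostname during block
         transfers. -->
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value>
  </property>
  <property>
    <!-- Relax the NameNode's check that a registering DataNode's IP
         resolves back to its reported hostname, which often fails when
         pods register from NATed or overlay-network addresses. -->
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
</configuration>
```

With settings along these lines, the NameNode hands clients a DataNode hostname instead of the unreachable node IP, and Kubernetes DNS resolves that name to whatever pod IP the DataNode currently has.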