Segence / docker-hadoop

A Docker container with a full Hadoop cluster setup with Spark and Zeppelin

No datanode found after start hadoop #6

Open blling opened 5 years ago

blling commented 5 years ago

I started the cluster with the following steps: `docker-compose up -d` -> `docker exec -it hadoop-namenode bash` -> `service hadoop start` -> open http://localhost:50070. The web UI shows no datanodes.

[screenshot: HDFS web UI overview showing no live datanodes]

Why?

For reference, here is the startup log:

====== STARTING HDFS ======

Starting namenodes on [hadoop-namenode]
hadoop-namenode: Warning: Permanently added '[hadoop-namenode]:2222,[172.19.0.2]:2222' (ECDSA) to the list of known hosts.
hadoop-namenode: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-hadoop-namenode.out
hadoop-datanode1: Warning: Permanently added '[hadoop-datanode1]:2222,[172.19.0.3]:2222' (ECDSA) to the list of known hosts.
hadoop-datanode1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-hadoop-datanode1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '[0.0.0.0]:2222' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop-namenode.out

====== STARTING YARN ======

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-hadoop-namenode.out
hadoop-datanode1: Warning: Permanently added '[hadoop-datanode1]:2222,[172.19.0.3]:2222' (ECDSA) to the list of known hosts.
hadoop-datanode1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-hadoop-datanode1.out

====== STARTING MAPREDUCE HISTORY SERVER ======

starting historyserver, logging to /usr/local/hadoop/logs/mapred-hadoop-historyserver-hadoop-namenode.out

====== HDFS cluster overview ======

Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
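The overview above reports `Configured Capacity: 0`, which is what the namenode prints when no datanode has registered with it, even though the startup log shows the datanode process being launched. A way to narrow this down (a sketch only, using the container names and log paths that appear in the startup log above) is to re-run the report and then read the datanode's own log for registration or connection errors:

```shell
# Sketch of diagnostics, assuming the container names from this compose setup.

# 1. Confirm the namenode still sees zero datanodes:
docker exec hadoop-namenode hdfs dfsadmin -report

# 2. Inspect the datanode log for errors; the path is taken from the
#    startup log above (the .log file sits next to the .out file):
docker exec hadoop-datanode1 tail -n 50 \
  /usr/local/hadoop/logs/hadoop-hadoop-datanode-hadoop-datanode1.log
```

A common cause in Dockerized Hadoop setups is a clusterID mismatch between the namenode and datanode storage directories left over from a previous run; if that is the case here, the datanode log will name it explicitly.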

-------------------------------------------------
robvadai commented 4 years ago

Did you try cleaning all the HDFS directories and starting from scratch? This will delete all your files on HDFS, but you can try executing it from your host machine (i.e. your Mac): `./clean-hdfs-directories.sh`
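For anyone hitting this without the script handy: I haven't verified what `clean-hdfs-directories.sh` does internally, but a typical "start from scratch" for a compose-based Hadoop setup looks roughly like the sketch below (container names and the `service hadoop start` step are taken from the issue above; the volume-removal step is an assumption and depends on how the repo's volumes are defined):

```shell
# Rough sketch only -- the actual clean-hdfs-directories.sh may differ.
docker-compose down -v                    # stop containers and drop their volumes
docker-compose up -d                      # recreate the containers fresh
docker exec hadoop-namenode \
  hdfs namenode -format -force            # reformat HDFS (wipes the namespace)
docker exec hadoop-namenode service hadoop start
```

Reformatting the namenode after wiping the data directories is the important part: it regenerates the clusterID, so a stale datanode storage directory can no longer conflict with it.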