Need to repeat this on the workers.
Did not have to repeat all the steps on the workers; they only need a datanode, not a namenode, hence:
sudo yum install http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
sudo yum install hadoop-hdfs-datanode
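A quick way to confirm the package actually landed (any equivalent check works):
# Verify the datanode package is installed
rpm -q hadoop-hdfs-datanode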
Create the config file /etc/hadoop/conf/core-site.xml with the following contents:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hcc-group8head.unl.edu:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>16777216</value>
  </property>
</configuration>
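To sanity-check that the values were picked up, the hdfs client can echo them back, assuming this CDH4 build's hdfs tool supports getconf -confKey:
# Print the configured default filesystem and block size
hdfs getconf -confKey fs.default.name
hdfs getconf -confKey dfs.block.size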
Start the datanode,
sudo service hadoop-hdfs-datanode start
All the nodes are now visible as live nodes on the namenode's health check page.
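The same check works from the command line; assuming the hdfs system user that CDH creates, a cluster report should list each worker as a live datanode:
# Ask the namenode for a cluster report; live datanodes appear at the bottom
sudo -u hdfs hdfs dfsadmin -report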
Install Cloudera's packaging of Hadoop,
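The exact commands weren't recorded here; by analogy with the worker steps above, they were presumably the same repo RPM plus the namenode package:
# Same one-click repo RPM as on the workers, then the namenode package
sudo yum install http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
sudo yum install hadoop-hdfs-namenode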
Added the configuration shown above to /etc/hadoop/conf/core-site.xml.
Start the namenode service,
sudo service hadoop-hdfs-namenode start
Format the namenode,
hadoop namenode -format
This failed; the logs showed a permissions problem on the namenode directory.
Tried changing the permissions of the folder as suggested on the internet, but that caused another exception.
Then changed the owner of the folder /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/ to hdfs.
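The exact command isn't preserved; with CDH's default hdfs user and group it was presumably something like:
# Hand the namenode metadata directory to the hdfs user (recursive)
sudo chown -R hdfs:hdfs /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/
Restarted the hadoop service,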
sudo service hadoop-hdfs-namenode restart
Checked the logs: no errors this time, yay!
Format the namenode,
hadoop namenode -format
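For reference, this kind of permission trouble is often avoided by running the format as the hdfs user itself rather than as root, a variant not used here:
# Format as the user that owns the metadata directories
sudo -u hdfs hadoop namenode -format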
One node is up and working; it can be checked at http://hcc-group8head.unl.edu:50070/dfshealth.jsp
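A quick command-line smoke test, assuming the same hdfs system user:
# Create and list a directory to confirm the filesystem accepts operations
sudo -u hdfs hadoop fs -mkdir /hdfs-test
sudo -u hdfs hadoop fs -ls /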
Start the datanode now,
sudo service hadoop-hdfs-datanode start
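If a worker doesn't show up as live, its log is the first thing to check; with CDH's packaging the datanode log should land under /var/log/hadoop-hdfs (path per CDH defaults, adjust if different):
# Tail the datanode log to confirm it registered with the namenode
sudo tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log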