liangguohun / HadoopSpark

Hadoop and Spark config
Apache License 2.0

Create multi node #6

Open liangguohun opened 7 years ago

liangguohun commented 7 years ago

1. Clone the single-node VM and reset its network: sudo gedit /etc/network/interfaces to change the NIC addresses

NAT interface:

auto eth0
iface eth0 inet dhcp

Host-only interface:

auto eth1
iface eth1 inet static
address 192.168.56.101
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255

Set the hostname: sudo gedit /etc/hostname and enter data1 (the other nodes follow the same procedure). Edit the hosts file: sudo gedit /etc/hosts

192.168.56.100 master
192.168.56.101 data1
192.168.56.102 data2
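The host mapping above can be staged in a scratch file and reviewed before appending it to /etc/hosts with sudo. A minimal sketch (the path /tmp/hosts.cluster is just an illustration, not from the issue):

```shell
# Stage the cluster host entries in a scratch file (illustrative path),
# then append them to the real /etc/hosts with sudo after reviewing.
cat > /tmp/hosts.cluster <<'EOF'
192.168.56.100 master
192.168.56.101 data1
192.168.56.102 data2
EOF
# Quick sanity check: three entries, one per node
grep -c '^192\.168\.56\.' /tmp/hosts.cluster
```

Repeating the same file on every node keeps the three machines' name resolution consistent.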

Edit core-site.xml: sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml and change localhost to master (substitute your own domain name if you have one). Edit yarn-site.xml: sudo gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml

<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8025</value>
</property>
    <!-- The ApplicationMaster requests and releases resources from the ResourceManager through this address -->
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
    <!-- Clients submit applications to the ResourceManager through this address -->
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8050</value>
</property>
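The localhost-to-master change in core-site.xml above can also be done with sed instead of a manual edit. A hedged sketch on a scratch copy (the fs.default.name property shown is an assumed example of the file's contents, not necessarily the author's exact file):

```shell
# Work on a scratch copy first; the real file is
# /usr/local/hadoop/etc/hadoop/core-site.xml (edit it with sudo).
cat > /tmp/core-site.xml <<'EOF'
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
EOF
# Point the default filesystem at the master node
sed -i 's|hdfs://localhost|hdfs://master|' /tmp/core-site.xml
grep '<value>' /tmp/core-site.xml
```

The same substitution pattern works for any other file still referencing localhost.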

Edit mapred-site.xml: sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml and change it to:

<property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
</property>

Edit hdfs-site.xml: sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml and remove the property below, since this machine is a data node:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
liangguohun commented 7 years ago

On master, edit: sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>

Edit the masters file: sudo gedit /usr/local/hadoop/etc/hadoop/masters and write master. Edit the slaves file: sudo gedit /usr/local/hadoop/etc/hadoop/slaves and write:

data1
data2
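Writing the masters and slaves files can be scripted as well. A small sketch using scratch paths (/tmp/masters and /tmp/slaves are illustrative; the real files live under /usr/local/hadoop/etc/hadoop/):

```shell
# One hostname per line, matching the entries in /etc/hosts
printf '%s\n' master      > /tmp/masters
printf '%s\n' data1 data2 > /tmp/slaves
wc -l < /tmp/slaves    # the slaves file should list both workers
```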

liangguohun commented 7 years ago

Configure SSH as before. On each node, delete the old HDFS directory:

sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs

Create the datanode directory:

mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode

On master, create the namenode directory:

mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode

On every server, change the directory owner to hduser:

sudo chown -R hduser:hduser /usr/local/hadoop

On master, format the namenode:

hadoop namenode -format

Start the cluster:

start-all.sh
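The directory layout from the last step can be dry-run without sudo by using a scratch prefix. A sketch (on the real cluster the prefix is /usr/local/hadoop, ownership goes to hduser via chown, and hadoop namenode -format plus start-all.sh are then run on master):

```shell
# Scratch prefix so the layout can be tested without root (illustrative)
PREFIX=/tmp/hadoop_layout
rm -rf "$PREFIX/hadoop_data/hdfs"
mkdir -p "$PREFIX/hadoop_data/hdfs/datanode"   # on every node
mkdir -p "$PREFIX/hadoop_data/hdfs/namenode"   # on master only
ls "$PREFIX/hadoop_data/hdfs"
```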