dromara / CloudEon

CloudEon uses Kubernetes to install and deploy open-source big data components, enabling the containerized operation of an open-source big data platform. This allows you to reduce your focus on underlying resource management and maintenance.
https://www.cloudeon.top/
Apache License 2.0

【doc】How to add a datanode disk in CloudEon HDFS #123

Closed stdnt-xiao closed 8 months ago

stdnt-xiao commented 9 months ago

Enter the datanode container and stop the datanode

# Switch to a bash shell
/bin/bash
# Load the environment variables
source /opt/edp/hdfs/conf/hadoop-hdfs-env.sh
# Check HDFS block health
~/apache-hadoop/bin/hdfs fsck /
# Stop the datanode service
~/apache-hadoop/sbin/hadoop-daemon.sh --config /opt/edp/hdfs/conf stop datanode
# Confirm the datanode service has stopped
jps
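If you prefer a scripted check over reading the jps output by eye, a minimal sketch that greps the process table instead (on a host with no Hadoop processes it simply prints "DataNode stopped"):

```shell
#!/bin/sh
# Look for a DataNode JVM in the process table. The [D] bracket trick keeps
# the grep command itself from matching its own entry.
if ps -ef | grep '[D]ataNode' > /dev/null; then
  echo "DataNode still running"
else
  echo "DataNode stopped"
fi
```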

On the server hosting the datanode, edit the configuration file

# Log in to the server
ssh root@node001
vim /opt/edp/hdfs/conf/hdfs-site.xml

# Old:
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/edp/hdfs/data/datanode</value>
</property>
# New:
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/edp/hdfs/data/datanode1,/opt/edp/hdfs/data/datanode2</value>
</property>
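The vim edit above can also be applied non-interactively with sed. A sketch, demonstrated on a scratch copy of the relevant property; on the real host the target file is /opt/edp/hdfs/conf/hdfs-site.xml:

```shell
#!/bin/sh
# Demonstrated on a scratch copy under /tmp; on the real host point CONF at
# /opt/edp/hdfs/conf/hdfs-site.xml instead.
CONF=/tmp/hdfs-site.xml
cat > "$CONF" <<'EOF'
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/edp/hdfs/data/datanode</value>
</property>
EOF
# Swap the single directory for the comma-separated pair
sed -i 's|<value>/opt/edp/hdfs/data/datanode</value>|<value>/opt/edp/hdfs/data/datanode1,/opt/edp/hdfs/data/datanode2</value>|' "$CONF"
# Show the updated value
grep '<value>' "$CONF"
```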

Move the original data directory and create the new disk directory

# Move the old directory
mv /opt/edp/hdfs/data/datanode /opt/edp/hdfs/data/datanode1
# Create the new directory and grant the container read/write access
mkdir /opt/edp/hdfs/data/datanode2
chown -R 1002:1002 /opt/edp/hdfs/data/datanode2
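Before restarting, it may help to sanity-check that both directories named in dfs.datanode.data.dir exist. A sketch using scratch paths under /tmp; on the real host substitute the /opt/edp/hdfs/data paths:

```shell
#!/bin/sh
# Demonstrated with scratch paths; on the real host set DATA=/opt/edp/hdfs/data
DATA=/tmp/edp-data
mkdir -p "$DATA/datanode1" "$DATA/datanode2"
# Every directory listed in dfs.datanode.data.dir must exist before restart
for d in "$DATA/datanode1" "$DATA/datanode2"; do
  [ -d "$d" ] || { echo "missing: $d"; exit 1; }
done
# Print numeric ownership; on the real host both should show uid/gid 1002
ls -ldn "$DATA/datanode1" "$DATA/datanode2"
```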

Enter the datanode container and start the datanode

# Switch to a bash shell
/bin/bash
# Load the environment variables
source /opt/edp/hdfs/conf/hadoop-hdfs-env.sh
# Start the datanode service
~/apache-hadoop/sbin/hadoop-daemon.sh --config /opt/edp/hdfs/conf start datanode
# Confirm the datanode service is running
jps
# Check HDFS block health
~/apache-hadoop/bin/hdfs fsck /
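The fsck output can also be checked in a script rather than read manually. A sketch that parses a captured sample; on the real node you would pipe the actual command (`~/apache-hadoop/bin/hdfs fsck /`) into the same grep:

```shell
#!/bin/sh
# Parsing a captured sample here; on the real node replace the here-doc with
# the live output of: ~/apache-hadoop/bin/hdfs fsck /
FSCK_OUT=/tmp/fsck.out
cat > "$FSCK_OUT" <<'EOF'
Status: HEALTHY
 Total size:    1024 B
 Corrupt blocks:        0
EOF
# fsck prints "Status: HEALTHY" when no blocks are missing or corrupt
if grep -q 'Status: HEALTHY' "$FSCK_OUT"; then
  echo "filesystem healthy"
else
  echo "filesystem has problems" >&2
fi
```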

Open the datanode web UI on the node to view disk details

http://node001:50075/datanode.html

Notes
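One setting that may be worth reviewing once a datanode has multiple data directories (a suggestion of mine, not part of the original steps): dfs.datanode.failed.volumes.tolerated controls how many volumes may fail before the datanode shuts itself down; the default of 0 means any single disk failure stops the datanode.

```xml
<!-- Hypothetical addition to /opt/edp/hdfs/conf/hdfs-site.xml: tolerate one
     failed volume out of the two configured data directories. -->
<property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
</property>
```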

stdnt-xiao commented 8 months ago

Resolved.