hnbcao / kubeadm-ha


Kubernetes v1.14.0 highly available master cluster deployment (using kubeadm, offline installation; latest supported version: Kubernetes v1.15.3)

Cluster plan:

Kubernetes cluster setup

Host Name   Role      IP
master1     master1   192.168.56.103
master2     master2   192.168.56.104
master3     master3   192.168.56.105
node1       node1     192.168.56.106
node2       node2     192.168.56.107
node3       node3     192.168.56.108

1. Offline package preparation (download the packages on a server that has internet access)

# Configure the yum cache: cachedir sets the cache path; keepcache=1 keeps
# downloaded packages after installation instead of deleting them
cat /etc/yum.conf
[main]
cachedir=/home/yum
keepcache=1
...
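With keepcache enabled, every RPM that yum installs is retained under the cache directory. A quick way to confirm packages are accumulating there (a minimal check, assuming the cachedir configured above):

find /home/yum -name '*.rpm' | head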

# Install ifconfig (provided by net-tools)
yum install net-tools -y

# Time synchronization
yum install -y ntpdate

# Install Docker (version 18.06.3.ce recommended)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
## Alternative mirror for hosts in mainland China:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
## List available Docker versions
yum list docker-ce --showduplicates | sort -r
## Install a specific version
sudo yum install docker-ce-<VERSION_STRING>
e.g.: sudo yum install docker-ce-18.06.3.ce

# Install lrzsz; with it, XShell can upload/download files to/from the server via the rz and sz commands
yum install lrzsz -y

# Install keepalived and haproxy
yum install -y socat keepalived ipvsadm haproxy

# Install Kubernetes components
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Pinning each package to a specific version is recommended; list available versions with: yum list <package> --showduplicates | sort -r
yum install -y kubelet kubeadm kubectl ebtables
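For example, to pin the packages to the v1.14.0 release used in this guide (the exact RPM version strings are an assumption; check them against the yum list output above):

yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0 ebtables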

# Other packages
yum install -y wget
...

# Copy the offline packages to every cluster node
# Install them
# rpm -ivh *.rpm --force --nodeps
rpm -ivh ./base/packages/*.rpm --nodeps --force
rpm -ivh ./docker-ce-stable/packages/*.rpm --nodeps --force
rpm -ivh ./extras/packages/*.rpm --nodeps --force
rpm -ivh ./kubernetes/packages/*.rpm --nodeps --force
rpm -ivh ./updates/packages/*.rpm --nodeps --force
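The repo directories above (base, docker-ce-stable, extras, kubernetes, updates) match the per-repo subdirectories of the yum cache configured earlier. A minimal sketch of bundling and shipping them (archive name, target host, and unpack directory are placeholders):

# On the internet-connected build host: bundle the cached RPMs
tar -czf k8s-offline-rpms.tar.gz -C /home/yum .
# Copy to a cluster node and unpack; then run the rpm commands above from that directory
scp k8s-offline-rpms.tar.gz root@192.168.56.103:~/
ssh root@192.168.56.103 'mkdir -p ~/offline && tar -xzf ~/k8s-offline-rpms.tar.gz -C ~/offline'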

2. Node system configuration

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
cat <<EOF > /etc/sysctl.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p

If CentOS 7 reports "No such file or directory" when applying bridge-nf-call-ip6tables, the fix is simply to run modprobe br_netfilter first.
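To load the module immediately and also keep it loaded across reboots (using the standard systemd modules-load.d mechanism; the file name is arbitrary):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf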

Upgrading the CentOS 7 kernel

Reference article:
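One common way to install a newer kernel on CentOS 7 is the ELRepo repository (this is an assumption; the original reference article is not linked here and may have used a different method):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt

Once the new kernel is installed, the grub2 commands below make it the default and enable user namespaces.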

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --default-kernel
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
uname -a
# Load the IPVS kernel modules required by kube-proxy's IPVS mode
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

If sysctl -p errors out, run modprobe br_netfilter; see the bridge-nf-call-ip6tables note above.

systemctl enable keepalived
systemctl enable haproxy


* Set up passwordless SSH login

1. Press Enter three times and the key pair is generated:

ssh-keygen

2. Copy the public key to the other nodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@192.168.x.xxx


Note: Kubernetes requires that all machines in the cluster have distinct MAC addresses, product UUIDs, and hostnames.
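These can be checked on each node with standard commands (the product_uuid path is the one the Kubernetes documentation uses for this check):

ip link                               # compare MAC addresses across nodes
cat /sys/class/dmi/id/product_uuid    # compare product UUIDs across nodes
hostname                              # must be unique per node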

3. keepalived + haproxy configuration

cd ~/

Create the cluster info file:

echo """
CP0_IP=192.168.56.103
CP1_IP=192.168.56.104
CP2_IP=192.168.56.105
VIP=192.168.56.102
NET_IF=eth0
CIDR=10.244.0.0/16
""" > ./cluster-info

bash -c "$(curl -fsSL https://raw.githubusercontent.com/hnbcao/kubeadm-ha-master/v1.14.0/keepalived-haproxy.sh)"
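The keepalived-haproxy.sh script generates the keepalived and haproxy configuration from cluster-info. A minimal sketch of what the two configurations typically look like for this topology (an illustration of the approach, not the exact files the script writes): keepalived holds the VIP on one master at a time, and haproxy on each master listens on port 8443 and balances across the three apiservers on their default port 6443.

# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other two masters
    interface eth0            # NET_IF from cluster-info
    virtual_router_id 51
    priority 100              # lower priority on the other two masters
    advert_int 1
    virtual_ipaddress {
        192.168.56.102        # VIP from cluster-info
    }
}

# /etc/haproxy/haproxy.cfg (sketch, frontend/backend only)
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend apiservers
backend apiservers
    mode tcp
    balance roundrobin
    server master1 192.168.56.103:6443 check
    server master2 192.168.56.104:6443 check
    server master3 192.168.56.105:6443 check

Once both services are running (systemctl start keepalived haproxy), the VIP should answer on exactly one master; ip addr show eth0 | grep 192.168.56.102 confirms which one holds it.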


4. Deploy the HA master

The HA master deployment is automated. Run the following command on master1, and remember to adjust the IP addresses.

The script performs the following main steps:

1) Reset the kubelet state

kubeadm reset -f
rm -rf /etc/kubernetes/pki/


2) Write the node configuration file and initialize the kubelet on master1

echo """
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "${VIP}:8443"
maxPods: 100
networkPlugin: cni
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
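The config is presumably written to /etc/kubernetes/kubeadm-config.yaml (the path referenced again in the Calico note at the end) and then fed to kubeadm; a sketch of the initialization step:

kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml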

3) Copy the certificates to master2 and master3

# IPS is an array of the three master IPs; JOIN_CMD is generated in step 4 below
for index in 1 2; do
  ip=${IPS[${index}]}
  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $ip:~/.kube/config

  ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
done

4) Join master2 and master3 as control-plane nodes

JOIN_CMD=`kubeadm token create --print-join-command`
ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
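After both joins complete, the cluster state can be verified from any master with standard kubectl commands:

kubectl get nodes -o wide
kubectl get pods -n kube-system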

Full script:

# Deploy the HA master

bash -c "$(curl -fsSL https://raw.githubusercontent.com/hnbcao/kubeadm-ha-master/v1.14.0/kube-ha.sh)"

5. Join worker nodes (note: deploying keepalived + haproxy on worker nodes is a mistake and is not needed. If a worker node cannot ping the virtual IP (VIP), it means the VIP cannot be realized in that environment; I have not been able to pin down the exact cause, so you will have to investigate it yourself, and I would appreciate it if you can share what you find.)

Note that ${MASTER1_IP}, ${MASTER2_IP}, ${MASTER3_IP}, and ${VIP} in the two configuration files must be replaced with the corresponding IP addresses of your own cluster.
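Worker nodes join with the token-based join command, without the control-plane flag (a sketch; the actual token and hash come from the command below, run on a master):

# On a master: print the current join command
kubeadm token create --print-join-command
# On each worker node: run the printed command, e.g.
kubeadm join 192.168.56.102:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>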

At this point the cluster still needs a network add-on; I chose Calico. See the Calico website for installation instructions, or apply the manifests under addons/calico in this repository. Be sure to replace the images, and set the CALICO_IPV4POOL_CIDR environment variable in the Deployment to the networking.podSubnet value from /etc/kubernetes/kubeadm-config.yaml.
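A sketch of that substitution and apply (the manifest file name under addons/calico is an assumption; 192.168.0.0/16 is Calico's default pool, and 10.244.0.0/16 is the CIDR from cluster-info above):

# Point Calico's IP pool at the cluster's pod CIDR, then apply the manifests
sed -i 's#192.168.0.0/16#10.244.0.0/16#g' addons/calico/calico.yaml   # file name assumed
kubectl apply -f addons/calico/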

This article only modifies the master HA scheme of the article "kubeadm HA master (v1.13.0) offline package + automation script + common add-ons for CentOS/Fedora". For the detailed cluster installation steps, please refer to that article.