cloud-66 closed this issue 1 year ago.
I installed a multinode cluster with 3 master and 3 worker nodes.
How did you do this?
My steps to create the cluster (3 master / 3 worker nodes):

Create certificates and configs:

```
/home/user/usernetes/common/cfssl.sh --dir=/home/user/.config/usernetes \
  --master=load-balancer \
  --node=master1,10.5.35.17 --node=master2,10.5.35.18 --node=master3,10.5.35.19 \
  --node=worker1,10.5.35.21 --node=worker2,10.5.35.22 --node=worker3,10.5.35.23
```

Copy the generated certs and configs to every node (into /home/user/.config/usernetes).

Rename the folder nodes.<nodename> to node on every node.
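The copy-and-rename step could be sketched like this; it is a hypothetical helper (not from the original report), assuming the certs have already been copied to `~/.config/usernetes` on each node and that the per-node folder is named `nodes.<hostname>`:

```shell
# Hypothetical sketch of the rename step, run locally on each node.
# Assumes the generated certs were already copied to ~/.config/usernetes
# and that the per-node folder is named nodes.<hostname>.
CONF_DIR="${HOME}/.config/usernetes"
NODE_DIR="${CONF_DIR}/nodes.$(hostname)"
if [ -d "${NODE_DIR}" ]; then
  mv "${NODE_DIR}" "${CONF_DIR}/node"
fi
```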
On master1:

```
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.101.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp
```

On master2:

```
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.102.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp
```

On master3:

```
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.103.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp
```
I found out that at first I didn't use the --cni=flannel option, so by default it used the bridge network. But when I use --cni=flannel, I get an error when starting a pod on the flannel network:

```
plugin type="flannel" failed (add): open /run/flannel/subnet.env: no such file or directory
```
I found a possible solution (but didn't try it): manually create /run/flannel/subnet.env with these options:

```
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```
https://github.com/kubernetes/kubernetes/issues/70202
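That untested workaround could be sketched as follows. `RUNDIR` is a hypothetical override (not from the original report) so the path can be adjusted, e.g. inside a rootless namespace; the values are flannel's defaults and must match your cluster's CIDR settings:

```shell
# Untested workaround sketch: pre-create the subnet.env file that the
# flannel CNI plugin complains about. RUNDIR is a hypothetical override;
# the fallback keeps the sketch usable when /run is not writable.
RUNDIR="${RUNDIR:-/run/flannel}"
mkdir -p "${RUNDIR}" 2>/dev/null || RUNDIR="$(mktemp -d)"
cat > "${RUNDIR}/subnet.env" <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
```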
How should this config be created with this installation?
The problem was that flannel didn't have access to etcd.
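One hedged way to check for that symptom is to verify that etcd's client port is reachable from the node where flannel runs. The endpoints below are the master IPs and the published port 2379 from the install commands above; adjust them for your own setup:

```shell
# Hypothetical connectivity check: can this node reach etcd's client port
# (2379, as published in the install commands) on each master?
for ep in 10.5.35.17 10.5.35.18 10.5.35.19; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/${ep}/2379" 2>/dev/null; then
    echo "${ep}:2379 reachable"
  else
    echo "${ep}:2379 NOT reachable"
  fi
done
```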
Hello cloud-66, thank you for sharing your problem. I am confused about how you start the node services on the worker nodes. In my opinion, only the [kube-proxy], [flannel], [fuse-overlay], and [kubelet] services should be started on a WORKER node.
From your description, it seems only the services on the master nodes are configured.
Thank you in advance, and I look forward to your reply.
I have studied install.sh in depth.
I think I can also install the whole set of services on a worker node, but just start [u7s-node.target] on it?
I installed a multinode cluster with 3 master and 3 worker nodes. Some pods have the same IP. How can I solve this?
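A hedged way to confirm the duplicate-IP symptom, assuming kubectl is configured against the cluster. Here the `kubectl_output` function is a stand-in for `kubectl get pods -A -o wide --no-headers` (sample data, not real cluster output), so the parsing pipeline can be shown on its own:

```shell
# Hypothetical duplicate-IP check. kubectl_output stands in for:
#   kubectl get pods -A -o wide --no-headers
# so the parsing can be demonstrated without a live cluster.
kubectl_output() {
  cat <<'EOF'
default      pod-a  1/1  Running  0  1m  10.0.101.3  master1
default      pod-b  1/1  Running  0  1m  10.0.101.3  master2
kube-system  pod-c  1/1  Running  0  1m  10.0.102.4  master2
EOF
}
# Column 7 is the pod IP; print any IP assigned to more than one pod.
kubectl_output | awk '{print $7}' | sort | uniq -d   # prints 10.0.101.3
```

Against a real cluster you would replace `kubectl_output` with the actual `kubectl get pods -A -o wide --no-headers` call.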