pokusio / k3s-topgun

The best I can do with k3s

creating cluster with k3d version 4.4.1 #10

Open · Jean-Baptiste-Lasselle opened this issue 3 years ago

Jean-Baptiste-Lasselle commented 3 years ago
docker network create jbl_network -d bridge
# successfully created the cluster, but failed to start the load balancer and the third agent:
k3d cluster create jblCluster --agents 3 --servers 3 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888
# k3d cluster create jblCluster --agents 3 --servers 4 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888
# failed to start all servers when the number of servers is 5, 6, or more (up to 9) with 3 agents, and even failed to create the cluster at all:
k3d cluster create jblCluster --agents 3 --servers 9 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888
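
When nodes fail to come up like this, the k3d nodes are plain Docker containers, so their logs can be read directly. A quick sketch, with container names assumed from k3d's `k3d-<cluster>-<role>-<index>` convention (lowercased, as in the node list further down):

```bash
# list all cluster containers, including exited ones
docker ps -a --filter name=k3d-jblcluster
# read the k3s log of a node that failed to start
docker logs k3d-jblcluster-server-2
# the load balancer has its own container and log as well
docker logs k3d-jblcluster-serverlb
```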

# to use the new cluster's KUBECONFIG, switch to its context

kubectl config use-context k3d-jblCluster
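
For the record, k3d v4 writes and merges the kubeconfig automatically on `cluster create`; to fetch or re-merge it explicitly, something like this should work with the k3d v4 CLI:

```bash
# print the cluster's kubeconfig to stdout
k3d kubeconfig get jblCluster
# merge it into ~/.kube/config and switch to its context
k3d kubeconfig merge jblCluster --switch-context
```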
bash-3.2$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:7888
CoreDNS is running at https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

What are agents and servers in k3s? In short, servers run the control plane (and, in HA mode, the embedded etcd datastore) while agents only run workloads. See https://blog.alexellis.io/bare-metal-kubernetes-with-k3s/

Jean-Baptiste-Lasselle commented 3 years ago

curl -ivvv https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy --insecure

Jean-Baptiste-Lasselle commented 3 years ago

bash-3.2$ curl -ivvv https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy --insecure
*   Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0.0.0.0 (127.0.0.1) port 7888 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: O=k3s; CN=k3s
*  start date: Apr 17 23:59:41 2021 GMT
*  expire date: Apr 18 00:00:19 2022 GMT
*  issuer: CN=k3s-server-ca@1618703981
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET /api/v1/namespaces/kube-system/services/kube-dns:dns/proxy HTTP/1.1
> Host: 0.0.0.0:7888
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache, private
< Content-Type: application/json
< Date: Sun, 18 Apr 2021 00:25:19 GMT
< Content-Length: 165
< 
{ [165 bytes data]
100   165  100   165    0     0   4852      0 --:--:-- --:--:-- --:--:--  4852
* Connection #0 to host 0.0.0.0 left intact
* Closing connection 0
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache, private
Content-Type: application/json
Date: Sun, 18 Apr 2021 00:25:19 GMT
Content-Length: 165

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
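
The 401 is expected: curl talks to the API server anonymously, without the client certificate or token from the kubeconfig. A minimal way to reach the same service-proxy endpoint with proper credentials is to go through `kubectl proxy`:

```bash
# kubectl proxy authenticates with the current context and re-exposes
# the API server without auth on localhost
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```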
Jean-Baptiste-Lasselle commented 3 years ago

bash-3.2$ kubectl apply -f ingress-rapide-nginx-k3d.yaml
ingress.networking.k8s.io/nginx created
bash-3.2$ kubectl get all

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-r7gg4   1/1     Running   0          5m8s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   8m22s
service/nginx        ClusterIP   10.43.112.75   <none>        80/TCP    4m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           5m8s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         1       5m8s
bash-3.2$ 
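
The contents of `ingress-rapide-nginx-k3d.yaml` are not shown in the thread; given the `nginx` ClusterIP service above and the `ingress.networking.k8s.io/nginx created` output, it presumably looked something like this (a sketch, not the actual file):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx     # the ClusterIP service listed above
            port:
              number: 80
EOF
```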
Jean-Baptiste-Lasselle commented 3 years ago

docker run busybox ping -c 1 docker.for.mac.localhost | awk 'FNR==2 {print $4}' | sed s'/.$//'
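
Broken down, that one-liner extracts the IP address that containers resolve for the macOS host:

```bash
# ping once from inside a container; the second output line reads
# "64 bytes from <ip>: seq=0 ttl=...", so awk's field 4 is "<ip>:"
# and sed strips the trailing colon
docker run --rm busybox ping -c 1 docker.for.mac.localhost \
  | awk 'FNR==2 {print $4}' \
  | sed 's/.$//'
```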

https://github.com/moby/moby/issues/22753#issuecomment-400663231

https://github.com/AlmirKadric-Published/docker-tuntap-osx

https://github.com/moby/moby/issues/22753

OK, so there are networking specificities on macOS -> I'll switch to another machine

Jean-Baptiste-Lasselle commented 3 years ago

git clone https://github.com/AlmirKadric-Published/docker-tuntap-osx
cd ./docker-tuntap-osx
./sbin/docker_tap_install.sh

now docker MUST be restarted

killall Docker && open /Applications/Docker.app

if docker was stopped, just start it again

open /Applications/Docker.app

Once you have made sure Docker is started by running `docker version`, execute this to bring up the tap interface:

./sbin/docker_tap_up.sh

Then wait a little: the tap interface will get an IP address that you can ping from the macOS host and use as a gateway to the containers.


I killed and restarted the Docker daemon, ran the instructions to create tap1 again, waited until Docker was restarted, then ran the tap-up script, and after a minute or two I get an IP address for tap1 (run `ifconfig`):
```bash
tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 9e:71:03:4d:6c:1f 
        inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
        media: autoselect
        status: active
        open (pid 38576)
```

Also at that point:

bash-3.2$ ping -c 4 10.0.75.1
PING 10.0.75.1 (10.0.75.1): 56 data bytes
64 bytes from 10.0.75.1: icmp_seq=0 ttl=64 time=0.083 ms
64 bytes from 10.0.75.1: icmp_seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.75.1: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 10.0.75.1: icmp_seq=3 ttl=64 time=0.046 ms

--- 10.0.75.1 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.046/0.059/0.083/0.015 ms

OK, now I add the IP route to reach the containers' IP addresses:

# route add -net 172.18.0.0/16 -netmask <IP MASK> 10.0.75.2
route add -net 172.18.0.0/16  10.0.75.2
bash-3.2$ kubectl get nodes -o wide
NAME                      STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-jblcluster-agent-0    Ready    <none>                      78s    v1.20.5+k3s1   172.18.0.5    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-1    Ready    <none>                      69s    v1.20.5+k3s1   172.18.0.6    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-2    Ready    <none>                      59s    v1.20.5+k3s1   172.18.0.7    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-0   Ready    control-plane,etcd,master   113s   v1.20.5+k3s1   172.18.0.2    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-1   Ready    control-plane,etcd,master   100s   v1.20.5+k3s1   172.18.0.3    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-2   Ready    control-plane,etcd,master   84s    v1.20.5+k3s1   172.18.0.4    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
bash-3.2$ route add -net 172.18.0.0/16  10.0.75.2
route: must be root to alter routing table
bash-3.2$ sudo route add -net 172.18.0.0/16  10.0.75.2
Password:
add net 172.18.0.0: gateway 10.0.75.2
bash-3.2$ ping -c 4 172.18.0.6
PING 172.18.0.6 (172.18.0.6): 56 data bytes
64 bytes from 172.18.0.6: icmp_seq=0 ttl=63 time=0.356 ms
64 bytes from 172.18.0.6: icmp_seq=1 ttl=63 time=0.183 ms
64 bytes from 172.18.0.6: icmp_seq=2 ttl=63 time=0.192 ms
64 bytes from 172.18.0.6: icmp_seq=3 ttl=63 time=0.166 ms

--- 172.18.0.6 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.166/0.224/0.356/0.077 ms
Jean-Baptiste-Lasselle commented 3 years ago

OK, I now continue and try to provision MetalLB:

cat <<EOF > ./metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
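
The pool definition and the heredoc terminator are cut off above; a complete Layer 2 config for MetalLB v0.9 could look like this, with an address range carved out of the cluster's 172.18.0.0/16 Docker network (the range itself is a guess, not taken from the thread):

```bash
cat <<EOF > ./metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.1-172.18.255.250   # assumed range inside jbl_network's subnet
EOF
```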

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f ./metallb-config.yaml

kubectl get service --all-namespaces
NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP                                                         PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>                                                              443/TCP                      65m
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>                                                              53/UDP,53/TCP,9153/TCP       65m
kube-system   service/metrics-server       ClusterIP      10.43.96.147    <none>                                                              443/TCP                      65m
kube-system   service/traefik              LoadBalancer   10.43.13.247    172.18.0.2,172.18.0.3,172.18.0.4,172.18.0.5,172.18.0.6,172.18.0.7   80:32315/TCP,443:32067/TCP   65m
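
Side note: the node IPs listed as EXTERNAL-IP on the traefik service (and the svclb-* pods that show up below) come from k3s's built-in Klipper ServiceLB, not from MetalLB. For MetalLB to hand out addresses from its pool, ServiceLB would have to be disabled at cluster creation; a sketch, assuming k3d v4's `--k3s-server-arg` flag:

```bash
# recreate the cluster with k3s's built-in service load balancer disabled,
# so MetalLB handles LoadBalancer services instead
k3d cluster create jblCluster --servers 3 --agents 3 --network jbl_network \
  --k3s-server-arg '--disable=servicelb'
```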
kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port 8087 --type LoadBalancer
bash-3.2$ kubectl get all
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-clftp   1/1     Running             0          10m
pod/svclb-nginx-4v9dn        1/1     Running             0          32s
pod/svclb-nginx-cmlfl        1/1     Running             0          32s
pod/svclb-nginx-gfmdk        1/1     Running             0          32s
pod/svclb-nginx-ks2hn        1/1     Running             0          32s
pod/svclb-nginx-ph4rh        0/1     ContainerCreating   0          32s
pod/svclb-nginx-rk6pk        1/1     Running             0          32s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)          AGE
service/kubernetes   ClusterIP      10.43.0.1       <none>                                                   443/TCP          81m
service/nginx        LoadBalancer   10.43.185.243   172.18.0.3,172.18.0.4,172.18.0.5,172.18.0.6,172.18.0.7   8087:31241/TCP   33s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-nginx   6         6         1       6            1           <none>          33s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           10m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         1       10m
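
One thing worth noting: `kubectl expose deploy nginx --port 8087` without `--target-port` makes the service forward to container port 8087, where nginx (listening on 80) does not answer. A variant that maps service port 8087 to container port 80, plus a reachability check from the macOS host through the tap1 route (the IP is one of the EXTERNAL-IPs listed above):

```bash
# remove the service that was exposed without --target-port
kubectl delete service nginx
kubectl expose deploy nginx --port 8087 --target-port 80 --type LoadBalancer
# from the macOS host, via the tap1 route added earlier
curl -i http://172.18.0.3:8087
```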
Jean-Baptiste-Lasselle commented 3 years ago

OK, it seems the networking set up for Docker on macOS is pretty unstable; I'll definitely have to switch to a solid Debian box.