kelseyhightower / kubernetes-the-hard-way

Bootstrap Kubernetes the hard way. No scripts.
Apache License 2.0

Last curl on the smoke test last section fails #796

Closed MikHulk closed 2 weeks ago

MikHulk commented 2 weeks ago

I am stuck on the last section of the smoke test. I cannot get a response from nginx with curl.

I don't know what I've done wrong.

I have checked the previous steps several times, and all of them pass. The only failure is that I get no response from the last curl command:

root@jumpbox:~/kubernetes-the-hard-way# kubectl get pods -l app=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-56fcf95486-74f2j   1/1     Running   0          55m
root@jumpbox:~/kubernetes-the-hard-way# kubectl get svc nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.32.0.192   <none>        80:31005/TCP   50m
root@jumpbox:~/kubernetes-the-hard-way# kubectl expose deployment nginx --port 80 --type NodePort
Error from server (AlreadyExists): services "nginx" already exists
root@jumpbox:~/kubernetes-the-hard-way# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-0   Ready    <none>   65m   v1.28.3
node-1   Ready    <none>   61m   v1.28.3
root@jumpbox:~/kubernetes-the-hard-way# curl -m 30 -I http://192.168.6.2:${NODE_PORT}
curl: (28) Connection timed out after 30001 milliseconds
root@jumpbox:~/kubernetes-the-hard-way# curl -m 30 -I http://node-0:${NODE_PORT}
curl: (28) Connection timed out after 30000 milliseconds
root@jumpbox:~/kubernetes-the-hard-way# 
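(For reference, the `${NODE_PORT}` in the curls above comes from the guide's earlier step that reads the node port off the Service; given the `kubectl get svc` output shown, it resolves to 31005. A sketch of pulling it out of that pasted line, purely as an illustration:)

```shell
# Illustration only: extract the node port from the pasted
# "kubectl get svc nginx" output line (80:31005/TCP -> 31005).
svc_line='nginx   NodePort   10.32.0.192   <none>        80:31005/TCP   50m'

# The PORT(S) column is "port:nodePort/proto"; take the digits
# between ':' and '/TCP'.
NODE_PORT=$(printf '%s\n' "$svc_line" | sed -n 's/.* [0-9]*:\([0-9]*\)\/TCP.*/\1/p')
echo "NODE_PORT=$NODE_PORT"
```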

ping looks ok:

# ping node-0
PING node-0.kubernetes.local (192.168.6.2) 56(84) bytes of data.
64 bytes from node-0.kubernetes.local (192.168.6.2): icmp_seq=1 ttl=64 time=0.246 ms
64 bytes from node-0.kubernetes.local (192.168.6.2): icmp_seq=2 ttl=64 time=0.302 ms
^C
--- node-0.kubernetes.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.246/0.274/0.302/0.028 ms

This issue mentions some firewall configuration, but I didn't see anything like that. Have I missed something?

MikHulk commented 2 weeks ago

Worth mentioning: I am running this on qemu-kvm x86_64 guests (I didn't realize this requirement at first and was too lazy to reprovision the VMs). Maybe my issue comes from my setup, but I don't think so. If it does, please let me know.

MikHulk commented 2 weeks ago

Apparently it works from the node itself:

node-0:~$ curl -I http://node-0:31005
HTTP/1.1 200 OK
Server: nginx/1.27.0
Date: Thu, 11 Jul 2024 02:05:57 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 May 2024 13:22:30 GMT
Connection: keep-alive
ETag: "6655da96-267"
Accept-Ranges: bytes

But not from another host:

node-1:~$ curl -m 30 -I http://node-0:31005
curl: (28) Connection timed out after 30001 milliseconds
jumpbox:~$ curl -m 30 -I http://node-0:31005
curl: (28) Connection timed out after 30001 milliseconds

Everything looks properly configured, however:

# kubectl describe node node-0
Name:               node-0
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-0
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 10 Jul 2024 03:08:04 +0200
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node-0
  AcquireTime:     <unset>
  RenewTime:       Thu, 11 Jul 2024 04:19:20 +0200
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 11 Jul 2024 04:17:26 +0200   Wed, 10 Jul 2024 03:08:04 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 11 Jul 2024 04:17:26 +0200   Wed, 10 Jul 2024 03:08:04 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 11 Jul 2024 04:17:26 +0200   Wed, 10 Jul 2024 03:08:04 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 11 Jul 2024 04:17:26 +0200   Wed, 10 Jul 2024 03:08:05 +0200   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.6.2
  Hostname:    node-0
Capacity:
  cpu:                1
  ephemeral-storage:  19480400Ki
  hugepages-2Mi:      0
  memory:             2014444Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  17953136611
  hugepages-2Mi:      0
  memory:             1912044Ki
  pods:               110
System Info:
  Machine ID:                 447491ce3c1b4a3ca6fdcc8eeeb73aec
  System UUID:                447491ce3c1b4a3ca6fdcc8eeeb73aec
  Boot ID:                    fd042c0a-74ab-4776-b131-bb06e378cd7d
  Kernel Version:             6.1.0-22-amd64
  OS Image:                   Debian GNU/Linux 12 (bookworm)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.8
  Kubelet Version:            v1.28.3
  Kube-Proxy Version:         v1.28.3
Non-terminated Pods:          (0 in total)
  Namespace                   Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----    ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:              <none>
# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 10 Jul 2024 03:16:10 +0200
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-56fcf95486 (1/1 replicas created)
Events:          <none>
# kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.32.0.192
IPs:                      10.32.0.192
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31005/TCP
Endpoints:                10.200.1.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The iptables rules on node-0 look good (but I am not sure):

node-0:~$ sudo iptables-save | grep nginx
-A KUBE-EXT-2CMXP7HKUVJN7L6M -m comment --comment "masquerade traffic for default/nginx external destinations" -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx" -m tcp --dport 31005 -j KUBE-EXT-2CMXP7HKUVJN7L6M
-A KUBE-SEP-ZGRMGTC2RMJMJM3K -s 10.200.1.2/32 -m comment --comment "default/nginx" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZGRMGTC2RMJMJM3K -p tcp -m comment --comment "default/nginx" -m tcp -j DNAT --to-destination 10.200.1.2:80
-A KUBE-SERVICES -d 10.32.0.192/32 -p tcp -m comment --comment "default/nginx cluster IP" -m tcp --dport 80 -j KUBE-SVC-2CMXP7HKUVJN7L6M
-A KUBE-SVC-2CMXP7HKUVJN7L6M ! -s 10.200.0.0/16 -d 10.32.0.192/32 -p tcp -m comment --comment "default/nginx cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-2CMXP7HKUVJN7L6M -m comment --comment "default/nginx -> 10.200.1.2:80" -j KUBE-SEP-ZGRMGTC2RMJMJM3K
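(Reading the rules above: a packet arriving on :31005 is matched in KUBE-NODEPORTS, jumps to the KUBE-EXT chain, then the KUBE-SVC chain, and finally the KUBE-SEP chain, where it is DNAT'd to the pod at 10.200.1.2:80. As a sketch, the pasted dump can be parsed to confirm the entry port and the DNAT target line up:)

```shell
# Illustration only: parse two lines of the iptables-save excerpt
# above and confirm the NodePort entry rule and the DNAT target agree.
rules='-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx" -m tcp --dport 31005 -j KUBE-EXT-2CMXP7HKUVJN7L6M
-A KUBE-SEP-ZGRMGTC2RMJMJM3K -p tcp -m comment --comment "default/nginx" -m tcp -j DNAT --to-destination 10.200.1.2:80'

# Entry point: the port KUBE-NODEPORTS matches for default/nginx.
entry_port=$(printf '%s\n' "$rules" | sed -n 's/.*--dport \([0-9]*\).*/\1/p')

# Exit point: where the endpoint chain DNATs the traffic.
dnat_dest=$(printf '%s\n' "$rules" | sed -n 's/.*--to-destination \([0-9.]*:[0-9]*\).*/\1/p')

echo "NodePort $entry_port -> pod $dnat_dest"
```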

Same on node-1:

node-1:~$ sudo iptables-save | grep nginx
-A KUBE-EXT-2CMXP7HKUVJN7L6M -m comment --comment "masquerade traffic for default/nginx external destinations" -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx" -m tcp --dport 31005 -j KUBE-EXT-2CMXP7HKUVJN7L6M
-A KUBE-SEP-ZGRMGTC2RMJMJM3K -s 10.200.1.2/32 -m comment --comment "default/nginx" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZGRMGTC2RMJMJM3K -p tcp -m comment --comment "default/nginx" -m tcp -j DNAT --to-destination 10.200.1.2:80
-A KUBE-SERVICES -d 10.32.0.192/32 -p tcp -m comment --comment "default/nginx cluster IP" -m tcp --dport 80 -j KUBE-SVC-2CMXP7HKUVJN7L6M
-A KUBE-SVC-2CMXP7HKUVJN7L6M ! -s 10.200.0.0/16 -d 10.32.0.192/32 -p tcp -m comment --comment "default/nginx cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-2CMXP7HKUVJN7L6M -m comment --comment "default/nginx -> 10.200.1.2:80" -j KUBE-SEP-ZGRMGTC2RMJMJM3K
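(One thing worth noting: whether the cross-node hop is even needed can be read off the addresses. The endpoint 10.200.1.2 is not in node-0's pod range — assuming the guide's default per-node subnets, 10.200.0.0/24 for node-0 and 10.200.1.0/24 for node-1 — so a curl against node-0 must be forwarded node-0 → node-1 after the DNAT. That forwarded hop is exactly the path a host-level firewall or a missing pod-network route would break, while curl from node-0 itself would still work. A sketch of the subnet check:)

```shell
# Illustration only: check whether the DNAT target 10.200.1.2 falls
# inside node-0's pod subnet (assumed 10.200.0.0/24 per the guide's
# defaults). If not, the NodePort request to node-0 must be
# forwarded to the other node after the DNAT.
ip_to_int() {
  # Convert a dotted-quad address to a 32-bit integer.
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

endpoint=10.200.1.2
subnet=10.200.0.0   # node-0's pod CIDR base (assumed, /24)

# Same /24? Compare the upper 24 bits of both addresses.
if [ $(( $(ip_to_int "$endpoint") >> 8 )) -eq $(( $(ip_to_int "$subnet") >> 8 )) ]; then
  echo "endpoint is local to node-0"
else
  echo "endpoint lives on the other node: cross-node hop required"
fi
```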