k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/

[BUG] After the k3d cluster is restarted, the Naked Pod on the server node is automatically removed. #1486

Open · braveantony opened this issue 3 months ago

braveantony commented 3 months ago

## What did you do

2. Add a nodeSelector to specify that the Pod runs on the server node:

$ nano nginx.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
```
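The manifest above is cut off at `containers:` in the report. A minimal sketch of how the complete spec probably looks, assuming the nodeSelector from step 2 pins the Pod to the server node via its `kubernetes.io/hostname` label (the node name `k3d-bobo-server-0` is taken from the scheduling event in step 8; everything after `containers:` is reconstructed, not quoted from the report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx            # plain nginx image, as pulled in the events output below
    name: nginx
    resources: {}
  nodeSelector:
    kubernetes.io/hostname: k3d-bobo-server-0   # assumption: selects the server node by hostname
  restartPolicy: Always
status: {}
```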

3. Run Pod

$ sudo kubectl apply -f nginx.yaml

4. Check that the Pod is running on the server node

```
$ sudo kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          4s    10.42.2.12   k3d-bobo-server-0
```

5. Stop K3d Cluster

$ sudo k3d cluster stop bobo

6. Start K3d Cluster

$ sudo k3d cluster start bobo

7. Check Pod Status

```
$ sudo kubectl get pods -o wide
No resources found in default namespace.
```

8. Check Events

```
$ sudo kubectl get events --field-selector involvedObject.name=nginx --sort-by='{.metadata.creationTimestamp}'
LAST SEEN   TYPE     REASON      OBJECT      MESSAGE
119s        Normal   Scheduled   pod/nginx   Successfully assigned default/nginx to k3d-bobo-server-0
119s        Normal   Pulling     pod/nginx   Pulling image "nginx"
105s        Normal   Pulled      pod/nginx   Successfully pulled image "nginx" in 13.736s (13.736s including waiting)
105s        Normal   Created     pod/nginx   Created container nginx
105s        Normal   Started     pod/nginx   Started container nginx
86s         Normal   Killing     pod/nginx   Stopping container nginx
```


## What did you expect to happen

The naked Pod should still be running on the server node after the k3d cluster is restarted.
I tested running a naked Pod on an agent node with the exact same steps, and that Pod was still running after the k3d cluster restart.
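For comparison, a rough sketch of the agent-node test mentioned above, assuming the only change is pointing the manifest's nodeSelector at an agent node (the node name `k3d-bobo-agent-0` is an assumption) and repeating the same stop/start cycle:

```sh
# Hypothetical agent-node variant: edit nginx.yaml so the nodeSelector targets an
# agent node (e.g. kubernetes.io/hostname: k3d-bobo-agent-0), then repeat the cycle.
sudo kubectl apply -f nginx.yaml
sudo k3d cluster stop bobo
sudo k3d cluster start bobo
sudo kubectl get pods -o wide   # per the report, the agent-pinned Pod is still Running
```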

## Which OS & Architecture

```
$ sudo k3d runtime-info
arch: amd64
cgroupdriver: cgroupfs
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: extfs
infoname: rch155
name: docker
os: alpine
ostype: linux
version: 5.0.3
```

```
$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.20.2
PRETTY_NAME="Alpine Linux v3.20"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
```


## Which version of `k3d`

```
$ k3d version
k3d version v5.7.2
k3s version v1.29.6-k3s2 (default)
```


## Which version of podman

```
$ podman version
Client:       Podman Engine
Version:      5.0.3
API Version:  5.0.3
Go Version:   go1.22.5
Built:        Mon Jul 8 01:34:20 2024
OS/Arch:      linux/amd64
```

```
$ podman info
host:
  arch: amd64
  buildahVersion: 1.35.4
  cgroupControllers:
```

braveantony commented 3 months ago

I also found that if I shut down the Podman host machine directly and then start the k3d cluster after the host boots again, the naked Pod on the server node is not automatically deleted.
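A rough sketch of that second sequence, assuming the Podman host is powered off without running `k3d cluster stop` first:

```sh
# Hypothetical reproduction of the host-reboot variant described above.
sudo poweroff                    # shut the Podman host down without stopping the cluster
# ... after the host has booted again ...
sudo k3d cluster start bobo
sudo kubectl get pods -o wide    # per the report, the naked Pod on the server node survives
```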