kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

permission bug #3583

Closed ls-2018 closed 2 months ago

ls-2018 commented 2 months ago

(screenshot of the error output attached in the original issue)

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

```
Client:
 Version:    26.0.0
 Context:    desktop-linux
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.13.1-desktop.1
    Path:     /Users/acejilam/.docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.26.1-desktop.1
    Path:     /Users/acejilam/.docker/cli-plugins/docker-compose
  debug: Get a shell into any image or container. (Docker Inc.)
    Version:  0.0.27
    Path:     /Users/acejilam/.docker/cli-plugins/docker-debug
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.2
    Path:     /Users/acejilam/.docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.23
    Path:     /Users/acejilam/.docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  v1.0.4
    Path:     /Users/acejilam/.docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.1.0
    Path:     /Users/acejilam/.docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /Users/acejilam/.docker/cli-plugins/docker-sbom
  scout: Docker Scout (Docker Inc.)
    Version:  v1.6.3
    Path:     /Users/acejilam/.docker/cli-plugins/docker-scout

Server:
 Containers: 5
  Running: 5
  Paused: 0
  Stopped: 0
 Images: 12
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.6.22-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 20
 Total Memory: 15.6GiB
 Name: docker-desktop
 ID: cc73165a-2168-4aed-8d3a-8a320d730c66
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///Users/acejilam/Library/Containers/com.docker.docker/Data/docker-cli.sock
 Experimental: false
 Insecure Registries:
  harbor.k8s.com
  harbor.vackbot.com
  hubproxy.docker.internal:5555
  core.harbor.service.com
  core.harbor.service.com:30333
  127.0.0.0/8
 Registry Mirrors:
  https://docker.m.daocloud.io/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
  https://docker.mirrors.ustc.edu.cn/
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile
```

stmcginnis commented 2 months ago

Hey @ls-2018, it looks like you are running a very old version of kind:

kind v0.17.0

The first thing you should do is upgrade to the latest version. Once you've done that, if you still have an issue creating a cluster then please provide all of the details from the issue template.
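
For example, one way to check and bump the version (this assumes a Go toolchain is available; the prebuilt release binaries or a package manager like Homebrew work just as well):

```sh
# Show the currently installed kind version
kind version

# Install the latest release into $GOBIN (or $GOPATH/bin); requires Go 1.17+.
# Alternatives: `brew install kind`, or a binary from the GitHub releases page.
go install sigs.k8s.io/kind@latest

# Confirm the new version is the one on your PATH
kind version
```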

A lot has changed since the v0.17.0 release, so there's a good chance the newer version will work better with current environments (cgroup v2, etc.). If you still have problems after upgrading, run `kind create cluster --retain` to keep the cluster from being deleted on failure, then `kind export logs` to collect all of the logging. Somewhere in those logs should be clues pointing to the cause of the failure.
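
Roughly, that debugging loop looks like this (the `./kind-logs` output directory is just an example path):

```sh
# Recreate the cluster, keeping the node containers around on failure
kind create cluster --retain

# Dump the node logs (kubelet, containerd, serial console, journal, ...)
# into a local directory for inspection or for attaching to the issue
kind export logs ./kind-logs

# Clean up once the logs are captured
kind delete cluster
```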

The troubleshooting docs can also be helpful.

/remove-kind bug
/kind support

BenTheElder commented 2 months ago

Also, the logs are from kubeadm, not kind, so this is probably an old bug in Kubernetes that has since been fixed.