hadrabap opened 1 year ago
have you followed the instructions in https://kind.sigs.k8s.io/docs/user/rootless/?
Hello, thank you very much for quick response!
I tried installing iptables-related modules manually:
[root@sws netfilter]# modprobe ip_tables
[root@sws netfilter]# modprobe ip6_tables
[root@sws netfilter]# modprobe iptable_nat
[root@sws netfilter]# modprobe ip6table_nat
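Note that modprobe only loads these modules until the next reboot. A minimal sketch for making them persistent on a systemd host (the file name kind-iptables.conf is my own choice) is to stage a modules-load.d config:

```shell
# Stage a modules-load.d config; systemd-modules-load reads
# /etc/modules-load.d/*.conf at boot, one module name per line.
# Copy the staged file to /etc/modules-load.d/ as root afterwards.
conf=kind-iptables.conf
printf '%s\n' ip_tables ip6_tables iptable_nat ip6table_nat > "$conf"
cat "$conf"
```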
Next, I did
DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster -v 9999 --retain
with the same results.
Well, the iptables-related complaints vanished, but the overall situation is the same.
I'm attaching new logs: kind-logs2.zip
containerd fails to create the pods
May 17 10:04:40 kind-control-plane containerd[125]: time="2023-05-17T10:04:40.425885361+02:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-kind-control-plane,Uid:7383deaf095def706037bdaee8fbf8ea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown"
maybe a problem with the storage?
@AkihiroSuda does this ring a bell?
Sorry, I forgot to mention that all my filesystems are XFS only. If that helps…
mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown
This may work?
In my case I also can't bring up the cluster as a normal user; I've tried everything, with no luck. With sudo or as root it works normally. kind create cluster --name k8s-kind-cl.md
Hello friends!
runc#3805 has been merged into master. It looks like it is not intended for the 1.1 branch, but that could not stop me from trying.
I built runc from master at commit a6985522a6 and "patched" the official kindest/node image like this:
Containerfile:
FROM kindest/node:v1.27.2@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
RUN rm -f /usr/local/sbin/runc
COPY runc /usr/local/sbin/
RUN chmod +x /usr/local/sbin/runc
build.sh:
#!/bin/bash
# Build context "." added; without it, podman build has no context directory.
podman build --rm \
  -f Containerfile \
  --squash \
  -t kindest/node:v1.27.2-runc \
  .
Finally, I created a cluster:
[opc@sws runc-test]$ KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster --image localhost/kindest/node:v1.27.2-runc
using podman due to KIND_EXPERIMENTAL_PROVIDER
enabling experimental podman provider
Creating cluster "kind" ...
✓ Ensuring node image (localhost/kindest/node:v1.27.2-runc) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
and a single test
[opc@sws runc-test]$ kubectl --context kind-kind get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d78c9869d-p84p7 1/1 Running 0 9s
kube-system coredns-5d78c9869d-q5zgz 1/1 Running 0 9s
kube-system etcd-kind-control-plane 1/1 Running 0 25s
kube-system kindnet-x8s45 1/1 Running 0 10s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 23s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 23s
kube-system kube-proxy-f8lc9 1/1 Running 0 10s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 23s
local-path-storage local-path-provisioner-6bc4bddd6b-hvfxg 1/1 Running 0 9s
shows the cluster is up-and-ready.
When I take a look into the events, there are only warnings (apart from the normal ones) complaining about DNS:
kube-system 2m54s (x4 over 2m57s) Warning DNSConfigForming Pod/kube-controller-manager-kind-control-plane Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.89.0.1 fc00:f853:ccd:e793::1 192.168.1.10
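This warning is expected behavior: the kubelet caps resolv.conf at three nameservers and silently drops the rest. A self-contained illustration of the check (the sample file below mirrors the nameserver line from the warning, plus one hypothetical extra entry to push it over the limit):

```shell
# Build a sample resolv.conf with 4 nameservers; kubelet's limit is 3,
# so a file like this triggers the DNSConfigForming warning.
# 192.168.1.11 is a made-up extra entry for illustration.
cat > resolv.conf.sample <<'EOF'
nameserver 10.89.0.1
nameserver fc00:f853:ccd:e793::1
nameserver 192.168.1.10
nameserver 192.168.1.11
EOF
grep -c '^nameserver' resolv.conf.sample   # prints 4
```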
So far, so good.
I hope this helps somebody.
Thank you!
Hello friends!
Are there any plans or tactics for getting this resolved?
Thank you.
Sorry ... I don't work with podman regularly; the Kubernetes project requires docker for development, so this is something we're looking for contributors to help maintain. It is very time consuming to debug issues in arbitrary Linux environments.
Thankfully, you've done that part, but it has stopped moving forward because the fix is only in runc 1.2.x, which is still unreleased.
We take normal runc updates regularly.
https://github.com/opencontainers/runc/pull/3805/commits => https://github.com/opencontainers/runc/pull/3805/commits/da780e4d275444e9be5fc75d2005f51d71669a8e => https://github.com/opencontainers/runc/commit/da780e4d275444e9be5fc75d2005f51d71669a8e
This commit is only in the 1.2.x RCs, so it will be a while before we take it. We do not wish to make existing stable systems unstable.
I would recommend using ext4 to run containers, especially if you're going to do container-in-container: there have been a LOT of problems with detecting filesystem info, mounts, etc. that are not limited to code in this repo or runc, and sticking to the most widely used tools (docker, ext4) is the most reliable path. You can see a number of other issues in the tracker where other filesystems caused problems for the kubelet etc.
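A quick way to see which filesystem actually backs the storage path before creating a cluster (the path below is an assumption; rootless podman typically stores images under ~/.local/share/containers):

```shell
# Print the filesystem type backing a directory; "xfs" here would
# explain the resolv.conf remount failure, "ext4" is the safe choice.
dir="${HOME}/.local/share/containers"
[ -d "$dir" ] || dir="."   # fall back to the current directory
stat -f -c %T "$dir"
```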
What happened:
Hello friends,
As I have had great success with kind on Docker Desktop on an Intel Mac, it was my first choice on my Linux box as well. Unfortunately, I'm unable to create a cluster.
I've been poking around and found an interesting issue, which has similar symptoms as I'm experiencing—https://github.com/kubernetes-sigs/kind/issues/3061.
In short (details below),
kind create cluster --config config.yaml -v 9999 --retain
fails. I found out that the kube-apiserver is not running (hence port 6443 is not listening).
There are two issues which caught my attention:
What you expected to happen:
The cluster spins up and is ready to use.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
This happens whether config.yaml is used or not.
The iptables excerpt:
The permissions-related excerpt:
kind-logs.zip
Environment:
- kind version (use kind version):
- Runtime info (use docker info or podman info):
- OS (e.g. from /etc/os-release):
- Kubernetes version (use kubectl version):
The system uses the Oracle Unbreakable Enterprise Kernel instead of the Red Hat one:
Next, the system has been switched from cgroup v1 to cgroup v2 with delegation/propagation. That works without issues in other containers.
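For reference, a quick way to confirm the v2 setup (standard systemd paths; the exact output differs per host):

```shell
# "cgroup2fs" on /sys/fs/cgroup indicates a pure cgroup v2 host.
stat -f -c %T /sys/fs/cgroup
# Controllers available at the root; for rootless operation, at least
# cpu, memory and pids must also be delegated down to the user slice.
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || true
```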
Attached logs: kind-logs.zip