vikas027 opened this issue 3 years ago
Thanks @vikas027 - I agree that this is something that we need to improve. We're taking a look at improving our support for different containers and runtimes now, and this is something that we'll take into account. Appreciate the feedback; we hope to have some news here soon.
Any update on this? Docker-in-Docker images are a security risk, and Docker is now deprecated in cloud Kubernetes environments, so docker.sock won't be found anymore.
Here is an issue related to the alternative (deploying the Docker CLI without a Docker daemon instance). The conclusion is that Docker always needs a daemon started with its docker.sock, which confirms there is no alternative to Docker-in-Docker for pulling images on nodes migrated to a dockershim replacement like containerd.
Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24. This follows the guidance set by the Kubernetes team. This absolutely needs to be prioritized.
https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/
https://docs.aws.amazon.com/eks/latest/userguide/dockershim-deprecation.html
@TingluoHuang To underscore the urgency here, Kubernetes' version policy is to support the three most recent minor releases, which are currently 1.24-1.26. In other words, the Docker runtime is no longer available in any Kubernetes version with upstream support.
Chiming in as an affected consumer, with EKS dropping K8s 1.22 in June. I'm reading what may be some misunderstandings of the capabilities of containerd and the desire to use docker in runner pods. If runner pods were able to call docker and launch containers before, they will still be able to after upgrading to the latest K8s version, at least if you're using the summerwind-dind container or built your own and borrowed the pieces that install dockerd and supervisord. The ARC dind containers are already launched with the privileged security context, so dockerd will still work as long as the node has Docker Engine installed and the RunnerDeployment mounts the socket and /var/lib/docker into the runner.
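For illustration, a minimal sketch of such a RunnerDeployment, assuming the summerwind actions.summerwind.dev/v1alpha1 API; the names, repository, and replica count are illustrative, not a confirmed configuration:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: host-docker-runners        # illustrative name
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo   # illustrative repository
      # Share the node's Docker daemon and image store with the runner,
      # so ephemeral runners reuse the node-local image cache.
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-lib
          mountPath: /var/lib/docker
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
            type: Socket
        - name: docker-lib
          hostPath:
            path: /var/lib/docker
```

With a setup along those lines, docker still works inside the runner on a 1.25 EKS cluster: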
kubectl version --short
Working in namespace default!
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.4
Kustomize Version: v4.5.7
Server Version: v1.25.6-eks-48e63af
...
kubectl exec -it corrigat-testing -- /bin/bash
builder@corrigat-testing:/$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
builder@corrigat-testing:/$ docker run alpine:latest echo hello
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
63b65145d645: Pull complete
Digest: sha256:ff6bdca1701f3a8a67e328815ff2346b0e4067d32ec36b7992c1fdc001dc8517
Status: Downloaded newer image for alpine:latest
hello
The difference in K8s without dockershim is that we can no longer share the imagefs directory or runtime socket with the runner pods. This does not totally affect the ability to process workflow jobs, and even workflows building or using containers with docker will continue to work, but without containerd support the effect is that image pulls in new and ephemeral runners will always require a full image fetch from the registry. It has been a great help to mount the docker socket and imagefs into runner pods: we have processed 2,325,917 (accurate as of now) workflow runs with ARC, and saved at least 2/3 of that count times our 3GB image size in image pulls and the associated time. It would be fantastic to be able to continue that with containerd. Additionally, imagefs storage sizing will need to be reconsidered to accommodate the additional copies of container images stored locally in each pod's overlayfs on worker nodes.
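On that last point, one way to bound the per-pod image store when dockerd runs inside the runner is to back /var/lib/docker with a sized volume. A sketch, assuming the summerwind dockerdWithinRunnerContainer mode; the names and size are illustrative:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: dind-runners                      # illustrative name
spec:
  replicas: 1
  template:
    spec:
      repository: my-org/my-repo          # illustrative repository
      dockerdWithinRunnerContainer: true  # dockerd runs inside the runner pod
      # Give each pod its own bounded image store, since images are no
      # longer shared from the host after the dockershim removal.
      volumeMounts:
        - name: docker-storage
          mountPath: /var/lib/docker
      volumes:
        - name: docker-storage
          emptyDir:
            sizeLimit: 20Gi               # illustrative: a few copies of a 3GB image
```

Since kubelet enforces an emptyDir sizeLimit by evicting the pod, this also avoids the node-filling failure mode described below.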
I've played with just continuing to install Docker in our worker node AMI, but kubelet doesn't try to manage anything docker-related under disk pressure, so the node just fills up and dies.
Looking forward to updates here.
Any updates on this?
Any chance of seeing this feature soon?
Would love to see this feature as well. We have reverted to EC2-based runners for container jobs for now.
I use the Bottlerocket AMI on EKS clusters, which uses containerd and does not use Docker or the Docker socket.
Custom actions fail with errors like these. As a workaround, is there a way I can use a pre-pulled Docker image instead of having the GitHub action build an image on the fly?
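One possible workaround, sketched under the assumption that the action can be published as a prebuilt image (the image reference and args below are illustrative): GitHub Actions can run a container action directly from a registry via the docker:// scheme, so the runner pulls the image instead of building it from a Dockerfile.

```yaml
jobs:
  use-prebuilt-action:
    runs-on: self-hosted
    steps:
      # Pull and run a prebuilt action image instead of building it on the fly.
      - uses: docker://ghcr.io/my-org/my-action:v1   # illustrative image reference
        with:
          args: --flag value                         # illustrative arguments
```

Running the container still requires a working Docker daemon in the runner pod, but it avoids the on-the-fly image build.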