kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Allow to reuse podman images of the podman machine when using podman driver #17415

Open benoitf opened 1 year ago

benoitf commented 1 year ago

What Happened?

Minikube allows running a Kubernetes cluster on top of a container engine such as Docker or Podman.

We can also specify a container runtime with minikube, such as Docker, containerd, or CRI-O.

With the CRI-O runtime this is convenient, since podman is also available inside the minikube container.


So with minikube podman-env there is a way to use the podman engine inside the minikube container and share images with the CRI-O runtime, since both share the /var/lib/containers path.

Minikube documents several ways of pushing images to the cluster: https://minikube.sigs.k8s.io/docs/handbook/pushing/
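For reference, the podman-env method from that handbook page boils down to two commands. They are stored as a string here, since actually running them needs a live cluster; my-image is a placeholder name:

```shell
# The documented podman-env method: point the host podman CLI at the podman
# instance inside the minikube node, so the built image lands in the node's
# /var/lib/containers and is immediately visible to CRI-O.
# Kept as a string because the commands require a running minikube.
PODMAN_ENV_METHOD='eval "$(minikube podman-env)"
podman build -t my-image .'
echo "$PODMAN_ENV_METHOD"
```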

I think there is room for another method. As a podman user, I would like to keep building with the default podman running inside the podman machine and, as the cherry on the cake, see the images I am building in CRI-O. It is close to the podman-env method, except that I should not need to run podman inside podman, just use the podman of the podman machine/host.

So if we could bind-mount the podman host's /var/lib/containers into the minikube container, and have CRI-O in that container use the mounted folder, then as a user I could

run podman build -t my-image ., reference that image in a Kubernetes YAML file, and quickly spin up a pod with kubectl apply -f mypod.yaml
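A minimal mypod.yaml for that workflow could look like the following sketch. The localhost/ prefix is how podman names local builds, and imagePullPolicy: Never tells the kubelet to use the locally stored image instead of pulling; the image name is a placeholder:

```shell
# Write a minimal pod spec that references the locally built image.
# imagePullPolicy: Never makes the kubelet use the image already present
# in CRI-O's storage instead of contacting a registry.
cat > mypod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: localhost/my-image:latest
    imagePullPolicy: Never
EOF
# podman build -t my-image .      # builds against the podman machine
# kubectl apply -f mypod.yaml     # would start the pod, no push/save/load
cat mypod.yaml
```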


This requires pointing crio.conf at the mounted folder from the podman machine host instead of the default /var/lib path.

So, to sum up:

start minikube with an extra mount path

something like

minikube start --driver=podman --container-runtime=cri-o --mount --mount-string=/var/lib/containers:/host-containers

and then add a way in the kicbase image to add the extra CRI-O configuration:

[crio]

# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
# root = "/var/lib/containers/storage"
root = "/host-containers/storage"
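As a sketch of that change, the edit could be scripted with sed. It is applied here to a local copy of the [crio] section to show the transformation; inside the node the target would be /etc/crio/crio.conf, followed by a CRI-O restart:

```shell
# Demonstrate the crio.conf edit on a local copy of the [crio] section.
cat > crio.conf <<'EOF'
[crio]
# root = "/var/lib/containers/storage"
EOF
# Replace the commented default so CRI-O uses the mounted host storage.
sed -i 's|^# root = .*|root = "/host-containers/storage"|' crio.conf
cat crio.conf
```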

Maybe the mounted folder could be the rootless host path (so instead of mounting /var/lib/containers, mount /var/home/core/.local/share/containers).
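To find which path a given host actually uses, podman can report its storage root directly. Shown as a command string since it needs podman on the machine host; the paths in the comment are podman's standard rootful and rootless defaults:

```shell
# Query podman for its image storage root; rootful and rootless differ:
#   rootful:  /var/lib/containers/storage
#   rootless: $HOME/.local/share/containers/storage
GRAPHROOT_QUERY="podman info --format '{{.Store.GraphRoot}}'"
echo "$GRAPHROOT_QUERY"
```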

How do you see adding a toggle for changing/updating the CRI-O configuration to use the underlying images folder?

On a side issue, could the CRI-O runtime and podman runtime be updated as well inside the kicbase image?

Attach the log file

N/A

Operating System

macOS (Default)

Driver

Podman

afbjorklund commented 1 year ago

On a side issue, could the CRI-O runtime and podman runtime be updated as well inside the kicbase image?

Historically, CRI-O was kept within one minor version of Kubernetes, but the release schedule has since changed.

Now it is released together with Kubernetes, so it should probably be updated even more often...

But this is outdated:

CRIO_VERSION="1.24"

The podman version could be updated; it was just using the available Kubic packages (and didn't need a newer one).

afbjorklund commented 1 year ago

I noticed that the lock files are in the runroot, so probably both of them should be made available?

# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
# root = "/var/lib/containers/storage"

# Path to the "run directory". CRI-O stores all of its state in this directory.
# runroot = "/run/containers/storage"

Something like:

/host-containers/var/storage
/host-containers/run/storage

afbjorklund commented 1 year ago

How do you see adding a toggle for changing/updating the CRI-O configuration to use the underlying images folder?

Some of the other runtime values are configurable, such as the image repository. Maybe root/runroot could be, too?

   --root value, -r value                                     The CRI-O root directory. (default: "/var/lib/containers/storage") [$CONTAINER_ROOT]
   --runroot value                                            The CRI-O state directory. (default: "/run/containers/storage") [$CONTAINER_RUNROOT]
      --data-root string                        Root directory of persistent Docker state (default "/var/lib/docker")
      --exec-root string                        Root directory for execution state files (default "/var/run/docker")
   --root value                 containerd root directory
   --state value                containerd state directory

Something rhyming with "root" and "state"
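If minikube exposed such options, the invocation might look like the following. The flag names here are purely hypothetical, invented for illustration and modelled on the runtime options quoted above; nothing like this exists yet:

```shell
# Hypothetical minikube flags for overriding CRI-O's root/runroot
# (the --cri-root/--cri-runroot names are invented, not implemented):
PROPOSED='minikube start --driver=podman --container-runtime=cri-o \
  --cri-root=/host-containers/var/storage \
  --cri-runroot=/host-containers/run/storage'
echo "$PROPOSED"
```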


Note: for this feature to work with Docker, it requires a version that uses containerd for image storage:

https://docs.docker.com/storage/containerd/

It separates the images by "namespace", but they would share layers and so on.

benoitf commented 1 year ago

@afbjorklund yes, if we can have --root, --runroot, or --data-root, it would help to set these things up.

About CRI-O's version: do you see that as a separate enhancement, or do we need to link all of them under the same umbrella?

afbjorklund commented 1 year ago

I think the versions are separate issue(s); this is more about being able to share the storage with the engine.

Especially since we might need to decouple the CR installation from the OS installation, in the future:

Previously you would just install something like "Docker", and it would work with all Kubernetes versions.

But now you need a specific version of the CRI, and sometimes even a special version of the runtime (CRI-O), for each Kubernetes version.

afbjorklund commented 1 year ago

@benoitf : added a discussion topic for the "Podman Community Cabal" tomorrow, to get some input from the team

https://podman.io/community

benoitf commented 1 year ago

@afbjorklund ok thanks for the topic discussion πŸ‘

On my side unfortunately I'll be in another call at that time. I'll watch the recording.

afbjorklund commented 1 year ago

I was trying to start minikube in a podman machine, and it didn't work at all. It seemed to have issues both with the default "netavark" network backend and with the original "cni" network backend. And minikube doesn't support it on Linux.

The first issue with podman-remote-static v4.7.0 was that the machine defaulted to 1 vCPU and rootless; I changed it to 2 CPUs and root.

β›” Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1

Then minikube wouldn't recognize the response of the network query, so it wasn't able to determine the IP of the node.

❌ Exiting due to GUEST_PROVISION: error provisioning guest: ExcludeIP() requires IP or CIDR as a parameter


[core@localhost ~]$ minikube start --driver=podman --container-runtime=cri-o --mount --mount-string=/run/containers/storage:/run/containers/storage --mount-string=/var/lib/containers/storage:/var/lib/containers/storage --preload=false

Still no luck, even with CNI:

[core@localhost ~]$ sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube

[core@localhost ~]$ sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
,

Setting --static-ip gives an error:

plugin type="bridge" failed (add): cni plugin bridge failed: failed to allocate for range 0: requested IP address 192.168.49.2 is not available in range set 192.168.49.1-192.168.49.254

So in the end, it is a failure.

❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125


It started up OK with Podman 3.4.2, which still used cgroups v1.

EDIT: Recreated the VM, and everything seems fine. Maybe it was the new Fedora CoreOS, maybe it was a fluke.

Nope, it is actually the /var/lib/containers/storage volume mount that destroys the network. Go figure.

benoitf commented 1 year ago

@afbjorklund If I use --mount and --mount-string, I'm not able to start minikube in podman rootless mode (but it's OK in rootful mode):

minikube start --driver=podman --container-runtime=cri-o --base-image=quay.io/fbenoit/kicbase:2023-10-12 --mount --mount-string "/var/lib/containers:/host-containers"

❌  Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125

Is it a known issue?

afbjorklund commented 1 year ago

There are several known issues, with regards to running minikube on top of rootless podman:

https://github.com/kubernetes/minikube/issues?q=is%3Aopen+is%3Aissue+label%3Aco%2Fpodman-driver

But I am not sure if volume create is normally the step that fails, so the log is needed to know why:

podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true

benoitf commented 1 year ago

logs.txt

afbjorklund commented 11 months ago

The new Docker version has an option to use containerd for storage, which means it could also share images...

https://docs.docker.com/storage/containerd/

The containerd image store is an experimental feature of Docker Engine

We should discuss making this deployment "form factor" (shared storage) into a standard feature of minikube.

afbjorklund commented 11 months ago

Legacy Docker


Also including other hypervisors (instead of VirtualBox) and other runtimes (instead of Docker), with the same setup.

Docker / Podman


Including both Docker Engine / Podman Engine (on Linux), and Docker Desktop / Podman Desktop (external VM)

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

benoitf commented 8 months ago

/remove-lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

benoitf commented 5 months ago

/remove-lifecycle stale

afbjorklund commented 2 months ago

The shared storage feature seems a way off, but there should be an upgrade from Podman 3.x to 4.x coming soon.

I know that 4.9 is EOL, but there are a lot of big changes in 5.x that are not needed for minikube and cri-o usage...

The short-term recommendation for better performance is to use a registry.

The default is saving and loading (just like in kind), so that is going to be slow.
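The registry route can be sketched as follows (command strings only; minikube addons enable registry is a real addon, but the port and TLS settings below are assumptions that depend on how the registry is exposed on a given setup, see the addon docs):

```shell
# Sketch of the registry-based workflow (the short-term recommendation):
# push once to an in-cluster registry instead of save/load round-trips.
# localhost:5000 and --tls-verify=false are assumptions about how the
# registry addon is exposed; my-image is a placeholder.
REGISTRY_METHOD='minikube addons enable registry
podman tag my-image localhost:5000/my-image
podman push --tls-verify=false localhost:5000/my-image'
echo "$REGISTRY_METHOD"
```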

afbjorklund commented 2 months ago

I think this should be added to the roadmap for 2025, to have this as a standard feature.

Something to discuss at the meeting on Aug 26, since many people are still OOO for Aug 12.

fabricepipart1a commented 1 month ago

Hi

Just a comment to raise awareness that this is a major enhancement we are waiting for in our company. We are several thousand developers who migrated from a Docker environment to a podman + kind environment. We are globally satisfied except for one particular aspect that cost us a lot of comfort: the need to save and load all the images we build locally.

The typical use case is rather simple: a Maven build that packages an image (with podman), or a plain podman command that generates a new image. We then want to run that image locally to run some checks on it (automated or manual). Any developer does that many times a day. People are globally satisfied with the Podman Desktop migration, but the management of the image storage always comes up in the conversation when we ask what could be improved. On top of that, the storage requirement is doubled, since we store the images both in and out of Kube. This regularly leads to issues with the podman machine size.

All that to say that this issue is extremely important to us and that we'd be glad to see it progress. I'm afraid I can't help with its development (but I trust @benoitf), but tell me if I can help in any other way!

afbjorklund commented 1 month ago

Improving Podman and CRI-O support was added to the roadmap, but there is nobody working on it so far.

https://minikube.sigs.k8s.io/docs/contrib/roadmap/

The main focus for minikube in 2025 is changing from docker to containerd, and from hyperkit to vfkit.


The Podman Desktop GUI comes with kind by default, and it does have the "double images" requirement...

https://kind.sigs.k8s.io/docs/user/quick-start/#loading-an-image-into-your-cluster
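That quick-start method is the save/load roundtrip this issue wants to avoid; with podman it amounts to the following (command strings only; my-image is a placeholder name):

```shell
# The save/load roundtrip used with kind today: export the image to a tar
# archive and load it into the cluster, duplicating the storage on disk.
KIND_LOAD='podman save -o my-image.tar localhost/my-image:latest
kind load image-archive my-image.tar'
echo "$KIND_LOAD"
```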

There is a minikube plugin, as an add-on: https://github.com/containers/podman-desktop-extension-minikube