kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Unable to start minikube with podman on Windows 11 + WSL 2 #16460

Closed · arkabase closed this issue 7 months ago

arkabase commented 1 year ago

Steps to reproduce the issue:

  1. Install podman
  2. Install minikube
  3. In an administrator PowerShell console:
    podman machine init --cpus 2 --memory 2048 --disk-size 20
    podman machine start
    podman machine set --rootful
    podman system connection default podman-machine-default-root
    minikube config set driver podman
    minikube start --driver=podman --container-runtime=cri-o
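
For anyone retracing these steps, it may help to confirm that podman is really pointed at the rootful machine connection before running `minikube start`. A minimal sketch using standard podman CLI commands (nothing here is specific to this issue; the connection name is the one created above):

    podman machine ls                                     # the machine should show as running
    podman system connection list                         # podman-machine-default-root should be the default connection
    podman info --format "{{.Host.Security.Rootless}}"    # expected to print "false" for a rootful connection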

Full output of the minikube logs command: logs.txt

Full output of the failed command:

* minikube v1.30.1 on Microsoft Windows 11 Home 10.0.22621.1555 Build 22621.1555
* Using the podman (experimental) driver based on user configuration
* Using Podman driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
E0506 23:05:15.685128 18640 cache.go:188] Error downloading kic artifacts: not yet implemented, see issue #8426
* Creating podman container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.26.3 on CRI-O 1.24.4 ...
E0506 23:05:53.976739 18640 start.go:131] Unable to get host IP: RoutableHostIPFromInside is currently only implemented for linux
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
! initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output:
** stderr **
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=53, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
** /stderr **: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:
stderr:
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=53, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
X Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output:
** stderr **
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
** /stderr **: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:
stderr:
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
X Exiting due to GUEST_START: failed to start node: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output:
** stderr **
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
** /stderr **: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:
stderr:
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding" Name: "kindnet", Namespace: "" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of: Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet" Name: "kindnet", Namespace: "kube-system" from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet": dial tcp 127.0.0.1:8443: connect: connection refused
arkabase commented 1 year ago

After uninstalling and reinstalling the latest versions of podman and minikube, I still can't start minikube, but now with a different error:

PS C:\Windows\System32> minikube start --driver=podman --container-runtime=cri-o
* minikube v1.30.1 on Microsoft Windows 11 Home 10.0.22621.1702 Build 22621.1702
* Using the podman (experimental) driver based on user configuration
* Using Podman driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
E0511 17:31:39.598357   11380 cache.go:188] Error downloading kic artifacts:  not yet implemented, see issue #8426
* Creating podman container (CPUs=2, Memory=4000MB) ...
! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.39 -d /var/lib: exit status 126
stdout:

stderr:
Your kernel does not support pids limit capabilities or the cgroup is not mounted. PIDs limit discarded.
Error: preparing container 080051dcc27df6c75e4bf3ad3681bd4ba08924a76660b2e4039be830f001ad72 for attach: crun: cgroups in hybrid mode not supported, drop all controllers from cgroupv2: OCI runtime error

* Restarting existing podman container for "minikube" ...
* Failed to start podman container. Running "minikube delete" may fix it: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container

X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container

*

WSL and Podman are running fine, and I ran the command from an elevated PowerShell, so I don't understand the message "Your kernel does not support pids limit capabilities or the cgroup is not mounted. PIDs limit discarded."

I have already managed to run podman and minikube on another Windows 11 machine without any errors...
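
The `crun: cgroups in hybrid mode not supported` message comes from inside the podman machine (the WSL distro backing it), not from Windows itself, so an elevated PowerShell doesn't change it. A minimal sketch for checking which cgroup layout that machine is actually running, using plain podman and coreutils commands (the `.wslconfig` tweak mentioned afterwards is only a commonly suggested workaround, not something confirmed in this thread):

    podman machine ssh
    # inside the machine:
    stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" = pure cgroup v2, "tmpfs" = v1 or hybrid
    grep cgroup /proc/mounts       # hybrid mode shows both cgroup and cgroup2 mounts

If the machine turns out to be in hybrid mode, one workaround that has been reported to help is forcing WSL 2 into pure cgroup v2 by adding `kernelCommandLine = cgroup_no_v1=all` under `[wsl2]` in %UserProfile%\.wslconfig, running `wsl --shutdown`, and recreating the podman machine; treat that as an assumption to verify rather than a confirmed fix.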

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/16460#issuecomment-2008662363):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.