kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube won't start once configured with socket_vmnet #18640

Closed · pascalrobert closed this issue 3 days ago

pascalrobert commented 5 months ago

What Happened?

I need access to the load balancers, so I added socket_vmnet. But minikube won't start, even though the socket_vmnet service is started and the firewall rules are enabled. It worked just fine before enabling socket_vmnet.
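(For reference, the usual socket_vmnet setup on macOS, per the minikube QEMU driver docs; the commands below assume a standard Homebrew install:)

    brew install socket_vmnet
    brew tap homebrew/services
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet
    minikube start --driver=qemu2 --network=socket_vmnet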

W0415 05:32:23.216372   41144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open /Users/probert/.docker/contexts/meta/37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f/meta.json: no such file or directory
😄  minikube v1.32.0 on Darwin 14.4.1 (arm64)
✨  Using the qemu2 driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🏃  Updating the running qemu2 "minikube" VM ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
🤦  Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
    ▪ Generating certificates and keys
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
E0415 05:37:25.138743   41144 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: connect: network is unreachable
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: connect: network is unreachable]
🌟  Enabled addons: storage-provisioner

❌  Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded

╭──────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                              │
│    😿  If the above advice does not help, please let us know:                                │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                              │
│                                                                                              │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.       │
│                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────╯

(base) ➜  harness uname -a
Darwin MacBook-Pro 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000 arm64

(base) ➜  harness cat /Users/probert/.minikube/profiles/minikube/config.json
{
    "Name": "minikube",
    "KeepContext": false,
    "EmbedCerts": false,
    "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-arm64.iso",
    "KicBaseImage": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
    "Memory": 4000,
    "CPUs": 2,
    "DiskSize": 20000,
    "VMDriver": "",
    "Driver": "qemu2",
    "HyperkitVpnKitSock": "",
    "HyperkitVSockPorts": [],
    "DockerEnv": null,
    "ContainerVolumeMounts": null,
    "InsecureRegistry": null,
    "RegistryMirror": [],
    "HostOnlyCIDR": "192.168.59.1/24",
    "HypervVirtualSwitch": "",
    "HypervUseExternalSwitch": false,
    "HypervExternalAdapter": "",
    "KVMNetwork": "default",
    "KVMQemuURI": "qemu:///system",
    "KVMGPU": false,
    "KVMHidden": false,
    "KVMNUMACount": 1,
    "APIServerPort": 53295,
    "DockerOpt": null,
    "DisableDriverMounts": false,
    "NFSShare": [],
    "NFSSharesRoot": "/nfsshares",
    "UUID": "",
    "NoVTXCheck": false,
    "DNSProxy": false,
    "HostDNSResolver": true,
    "HostOnlyNicType": "virtio",
    "NatNicType": "virtio",
    "SSHIPAddress": "",
    "SSHUser": "root",
    "SSHKey": "",
    "SSHPort": 22,
    "KubernetesConfig": {
        "KubernetesVersion": "v1.28.3",
        "ClusterName": "minikube",
        "Namespace": "default",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "docker",
        "CRISocket": "",
        "NetworkPlugin": "cni",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "LoadBalancerStartIP": "",
        "LoadBalancerEndIP": "",
        "CustomIngressCert": "",
        "RegistryAliases": "",
        "ExtraOptions": null,
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": false,
        "CNI": "",
        "NodeIP": "",
        "NodePort": 8443,
        "NodeName": ""
    },
    "Nodes": [
        {
            "Name": "",
            "IP": "10.0.2.15",
            "Port": 8443,
            "KubernetesVersion": "v1.28.3",
            "ContainerRuntime": "docker",
            "ControlPlane": true,
            "Worker": true
        }
    ],
    "Addons": {
        "default-storageclass": true,
        "storage-provisioner": true
    },
    "CustomAddonImages": null,
    "CustomAddonRegistries": null,
    "VerifyComponents": {
        "apiserver": true,
        "system_pods": true
    },
    "StartHostTimeout": 360000000000,
    "ScheduledStop": null,
    "ExposedPorts": [],
    "ListenAddress": "",
    "Network": "socket_vmnet",
    "Subnet": "",
    "MultiNodeRequested": false,
    "ExtraDisks": 0,
    "CertExpiration": 94608000000000000,
    "Mount": false,
    "MountString": "/Users:/minikube-host",
    "Mount9PVersion": "9p2000.L",
    "MountGID": "docker",
    "MountIP": "",
    "MountMSize": 262144,
    "MountOptions": [],
    "MountPort": 0,
    "MountType": "9p",
    "MountUID": "docker",
    "BinaryMirror": "",
    "DisableOptimizations": false,
    "DisableMetrics": false,
    "CustomQemuFirmwarePath": "",
    "SocketVMnetClientPath": "",
    "SocketVMnetPath": "",
    "StaticIP": "",
    "SSHAuthSock": "",
    "SSHAgentPID": 0,
    "AutoPauseInterval": 60000000000,
    "GPUs": ""
}
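Note the two fields that matter here: "Network" is "socket_vmnet", but the cached node IP under "Nodes" is still 10.0.2.15. A quick way to pull both out of the profile, assuming jq is installed:

    jq '{Network: .Network, NodeIP: .Nodes[0].IP}' ~/.minikube/profiles/minikube/config.json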

(base) ➜  harness ifconfig vmenet0
vmenet0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
    ether 22:07:b4:23:db:7f
    media: autoselect
    status: active
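(A couple of host-side sanity checks, as sketches only; the socket path below is the socket_vmnet default, and a Homebrew install may place it under ${HOMEBREW_PREFIX}/var/run instead:)

    ls -l /var/run/socket_vmnet        # the unix socket the guest NIC attaches through
    sudo launchctl list | grep vmnet   # confirm the launchd service is actually loaded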

Attach the log file

logs.txt

Operating System

macOS (Default)

Driver

QEMU

pascalrobert commented 5 months ago

Deleting the cluster and recreating it fixes the issue (minikube delete --profile=minikube; minikube start --profile=minikube). But it would be great to be able to fix this without having to delete the cluster. The main difference is in the profile: before, the node was:

    "Nodes": [
        {
            "Name": "",
            "IP": "10.0.2.15",
            "Port": 8443,
            "KubernetesVersion": "v1.28.3",
            "ContainerRuntime": "docker",
            "ControlPlane": true,
            "Worker": true
        }
    ],

now, it's:

    "Nodes": [
        {
            "Name": "",
            "IP": "192.168.105.2",
            "Port": 8443,
            "KubernetesVersion": "v1.28.3",
            "ContainerRuntime": "docker",
            "ControlPlane": true,
            "Worker": true
        }
    ],
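That difference explains the "network is unreachable" errors: 10.0.2.15 is the guest address QEMU assigns with its default user-mode networking, which the host has no route to, while 192.168.105.x sits on the vmnet subnet that socket_vmnet bridges to the host. The workaround above as a runnable sketch (profile name assumed to be the default "minikube"):

    minikube delete --profile=minikube
    minikube start --profile=minikube --driver=qemu2 --network=socket_vmnet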
k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 days ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 days ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/18640#issuecomment-2346063954):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.