Closed: robrich closed this issue 3 years ago
@robrich thank you for reporting this. That seems like a bug! Meanwhile, I believe you should still be able to work around it by passing the actual version.
I will make sure we fix this in our next release.
The problem seems to be driver-independent. It happens when you try to start a cluster while one already exists, regardless of whether it's running or stopped.
[radek@c8k20 ~]$ ./minikube-linux-amd64 status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
[radek@c8k20 ~]$ ./minikube-linux-amd64 --v=4 --driver=docker --kubernetes-version=stable start
* minikube v1.10.1 on Centos 8.1.1911
* Using the docker driver based on existing profile
E0517 18:23:37.548452 14231 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
E0517 18:23:37.548857 14231 start.go:988] Error parsing old version "stable": No Major.Minor.Patch elements found
* Starting control plane node minikube in cluster minikube
* Updating the running docker "minikube" container ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
- kubeadm.pod-network-cidr=10.244.0.0/16
*
* [INVALID_KUBERNETES_VERSION] Failed to update cluster kubeadm images: semver: No Major.Minor.Patch elements found
* Suggestion: Specify --kubernetes-version in v<major>.<minor.<build> form. example: 'v1.1.14'
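The error above comes from strict semver parsing choking on the literal string "stable". As a rough illustrative sketch (not minikube's actual code, which uses a semver library), a Major.Minor.Patch check rejects the alias exactly like this:

```go
package main

import (
	"fmt"
	"regexp"
)

// semverRe matches a vMAJOR.MINOR.PATCH version string.
// Hypothetical simplification of what a semver parser requires.
var semverRe = regexp.MustCompile(`^v?(\d+)\.(\d+)\.(\d+)`)

// parseVersion mimics the failure mode seen in the log: any input
// without Major.Minor.Patch elements is rejected.
func parseVersion(s string) error {
	if !semverRe.MatchString(s) {
		return fmt.Errorf("parsing old version %q: No Major.Minor.Patch elements found", s)
	}
	return nil
}

func main() {
	for _, v := range []string{"v1.18.2", "stable", "latest"} {
		if err := parseVersion(v); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println(v, "parses OK")
		}
	}
}
```

So on the fresh-start path the alias is resolved before parsing, but on the existing-cluster path the raw flag value apparently reaches the parser.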
IMHO we should check whether a cluster already exists and, if so, refuse to start. The problem seems to be driver-independent.
+1. Can reproduce with Docker and VirtualBox.
I don't know how minikube manages the default value, but per the docs:
--kubernetes-version='': The Kubernetes version that the minikube VM will use (ex: v1.2.3, 'stable' for v1.18.2, 'latest' for v1.18.3-beta.0). Defaults to 'stable'.
IMO using the flag with the default value and not using the flag (letting minikube apply the default by itself) should have the same behaviour.
Per testing, not specifying --kubernetes-version doesn't behave the same as specifying it. Tested in two scenarios: with an update available, and with the default kubernetes-version (stable).
Suggestion: only use stable and latest as dynamic version aliases, and replace them with the concrete version when performing the action: so minikube start --kubernetes-version=stable becomes minikube start --kubernetes-version=v1.18.2 (the current stable).
In case of a version mismatch between the existing cluster and the resolved version (stable being an alias for the current stable release, changing dynamically over time), perform the upgrade.
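The alias-resolution idea above can be sketched in Go. This is a hypothetical illustration, not minikube's implementation; the alias table is hard-coded here, whereas a real implementation would fetch it from the Kubernetes release channels:

```go
package main

import "fmt"

// aliases maps the dynamic version aliases to concrete versions.
// Hypothetical hard-coded values for illustration only; in practice
// these would be looked up from the Kubernetes release channels.
var aliases = map[string]string{
	"stable": "v1.18.2",
	"latest": "v1.18.3-beta.0",
}

// resolveVersion replaces "stable" or "latest" with a concrete
// version before any comparison or upgrade decision is made.
func resolveVersion(requested string) string {
	if concrete, ok := aliases[requested]; ok {
		return concrete
	}
	return requested
}

func main() {
	existing := "v1.18.0" // version of the already-existing cluster
	requested := resolveVersion("stable")
	if requested != existing {
		fmt.Printf("mismatch: cluster has %s, requested %s -> upgrade\n", existing, requested)
	}
}
```

With the alias resolved up front, the comparison against the existing cluster only ever sees concrete Major.Minor.Patch versions, so the parse error cannot occur.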
IMO the only point of using stable instead of a specific version is to always get the newest release under the stable tag instead of performing updates manually.
Btw, the bug also exists with the latest alias.
I believe this was fixed? @robrich @xAt0mZ do you mind verifying whether this issue was solved by the latest version of minikube?
@medyagh I confirm this issue is solved on minikube v1.12.2 :rocket:
~> minikube version
minikube version: v1.12.2
commit: be7c19d391302656d27f1f213657d925c4e1cfc2
Excellent, closing this.
Steps to reproduce the issue:
... --kubernetes-version=stable
minikube start --vm-driver hyperv --kubernetes-version=stable
minikube start --vm-driver hyperv --kubernetes-version=v1.18.2

Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command: