[Closed] Issue opened by @JesusGoVar, closed 4 years ago
Hey @JesusGoVar
I've never heard of MiniKF, but can give some basic support based on the error message emitted:
X Error restarting cluster: waiting for apiserver: timed out waiting for the condition
This restart issue has been addressed in more recent minikube releases.
If you are unable to upgrade to minikube v1.9.2, I'm confident that running minikube delete
in this environment will get you past it. If the problem recurs, minikube logs
will help you debug the underlying cause.
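For background, "timed out waiting for the condition" is Kubernetes' generic polling-timeout message: minikube repeatedly probes the apiserver's health endpoint until a deadline expires. A rough sketch of that wait pattern in Python (illustrative only; minikube's actual implementation is in Go, and the function name here is mine):

```python
import time

def wait_for(condition, timeout=60.0, interval=2.0):
    # Poll `condition` until it returns True or the deadline passes.
    # Returning False corresponds to the "timed out waiting for the
    # condition" failure reported above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

So the error itself only says the apiserver never became healthy within the deadline; the real cause has to come from minikube logs.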
Let me know how it goes!
I'm closing this issue as it hasn't seen activity in a while, and it's unclear if this issue still exists. If this issue does continue to exist in the most recent release of minikube, please feel free to re-open it by replying /reopen
If someone sees a similar issue to this one, please re-open it as replies to closed issues are unlikely to be noticed.
Thank you for opening the issue!
I can't bring up MiniKF with Vagrant on Windows 10.
Vagrant version: 2.2.7
VirtualBox version: 6.1.4 r136177 (Qt5.6.2)
10.10.10.10 shows this error:
Type: CommandExecutionError
Reason: Command <ExtCommand [QGn0a_38L3s] `minikube start --v=7 --apiserver-name=10.10.10.10 --apiserver-port=8443 --apiserver-ips=10.10.10.10 --apiserver-names=MiniKF --extra-config=kubelet.v=1 --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --extra-config=kubelet.node-ip=10.10.10.10 --extra-config=kubelet.feature-gates=VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true --extra-config=kubelet.image-gc-high-threshold=100 --extra-config=kubelet.image-gc-low-threshold=99 --extra-config=kubelet.eviction-hard=memory.available<10Mi,nodefs.available<10Mi,imagefs.available<10Mi --extra-config=kubelet.eviction-soft=memory.available<10Mi,nodefs.available<10Mi,imagefs.available<10Mi --extra-config=kubelet.eviction-soft-grace-period=memory.available=1h,nodefs.available=1h,imagefs.available=1h --extra-config=apiserver.feature-gates=VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true --extra-config=apiserver.advertise-address=10.10.10.10 --extra-config=apiserver.bind-address=0.0.0.0 --extra-config=apiserver.token-auth-file=/var/lib/minikube/certs/tokens --kubernetes-version=v1.14.3 --extra-config=controller-manager.feature-gates=VolumeSnapshotDataSource=true,CSINodeInfo=true,CSIDriverRegistry=true --extra-config=apiserver.service-account-issuer=api --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub --extra-config=apiserver.service-account-api-audiences=api --vm-driver=none', status=FINISHED (ret: 70), PID=2730, shell=False> failed.
Error log:
X Error restarting cluster: waiting for apiserver: timed out waiting for the condition

* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new
The logfile '/home/vagrant/provision.log' may contain more information on this error, including a backtrace.

/usr/local/bin/minikf-insist: line 27:  1607 Segmentation fault (core dumped) sudo /home/vagrant/provision.py "$@"
Error in /home/vagrant/provision.log:
2020-04-09T02:28:58.154922-0700 provision.py pid=1476/tid=1476/pytid=140356987021056 cmdutils:829 [INFO] [nloimKJQkmY] Started command with PID 3419: docker inspect gcr.io/arrikto-playground/roke:v0.14-106-g3a338c1 gcr.io/arrikto-playground/rok-csi:v0.14-106-g3a338c1 gcr.io/arrikto-playground/rok-operator:v0.14-106-g3a338c1 gcr.io/arrikto-playground/rok-kmod:v0.14-106-g3a338c1 argoproj/argoexec:v2.3.0 argoproj/argoui:v2.3.0 argoproj/workflow-controller:v2.3.0 bitnami/minideb:latest bitnami/postgresql:10.6.0 bitnami/postgresql:10.6.0 istio/proxyv2:1.3.1 prom/prometheus:v2.8.0 seldonio/seldon-core-operator:1.0.1 gcr.io/arrikto/dexidp/dex:4bede5eb80822fc3a7fc9edca0ed2605cd339d17 gcr.io/arrikto-public/admission-webhook@sha256:0046484e6cd8d47d7d9a83b517392155d5e5dc73aa02b4ed8a832e37aac39ffa gcr.io/arrikto-public/kubeflow/admission-webhook:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/centraldashboard:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/jupyter-web-app:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/kfam:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/ml-pipeline/frontend:0.2.0-2-g0e506e28 gcr.io/arrikto-public/kubeflow/notebook-controller:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/oidc-authservice:d5c2981 gcr.io/arrikto-public/kubeflow/profile-controller:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/pvcviewer-controller:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/reception:v1.0-56-g5ae311f8 gcr.io/arrikto-public/kubeflow/volumes-viewer:v14.3.0-12-g79d7536b gcr.io/arrikto-public/kubeflow/volumes-web-app:v1.0-56-g5ae311f8 gcr.io/arrikto-public/startup-lock-init@sha256:0fbe996a2f6b380d7c566ba16255ec034faec983c2661da778fe09b3e744ad21 gcr.io/arrikto-public/startup-lock@sha256:90e7d2dbbdbe2ff3602f80b90f5cfa924e8bcabc65b98a1fd4e50d7e590ad4cb gcr.io/arrikto-public/tensorflow-1.15.2-notebook-cpu:1.0.0.arr1 gcr.io/arrikto-public/tensorflow-1.14.0-notebook-cpu:kubecon-workshop gcr.io/arrikto-public/tensorflow-1.15.2-notebook-gpu:1.0.0.arr1 
gcr.io/google_containers/spartakus-amd64:v1.1.0 gcr.io/istio-release/citadel:release-1.3-latest-daily gcr.io/istio-release/galley:release-1.3-latest-daily gcr.io/istio-release/kubectl:release-1.3-latest-daily gcr.io/istio-release/mixer:release-1.3-latest-daily gcr.io/istio-release/node-agent-k8s:release-1.3-latest-daily gcr.io/istio-release/pilot:release-1.3-latest-daily gcr.io/istio-release/proxy_init:release-1.3-latest-daily gcr.io/istio-release/proxyv2:release-1.3-latest-daily gcr.io/istio-release/sidecar_injector:release-1.3-latest-daily gcr.io/k8s-minikube/storage-provisioner:v1.8.1 gcr.io/kfserving/kfserving-controller:0.2.2 gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:8e606671215cc029683e8cd633ec5de9eabeaa6e9a4392ff289883304be1f418 gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler-hpa@sha256:5e0fadf574e66fb1c893806b5c5e5f19139cc476ebf1dff9860789fe4ac5f545 gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:ef1f01b5fb3886d4c488a219687aac72d28e72f808691132f658259e4e02bb27 gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:5ca13e5b3ce5e2819c4567b75c0984650a57272ece44bc1dabf930f9fe1e19a1 gcr.io/knative-releases/knative.dev/serving/cmd/networking/istio@sha256:727a623ccb17676fae8058cb1691207a9658a8d71bc7603d701e23b1a6037e6c gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:1ef3328282f31704b5802c1136bd117e8598fd9f437df8209ca87366c5ce9fcb gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0 gcr.io/kubeflow-images-public/katib/v1alpha3/katib-controller:v0.8.0 gcr.io/kubeflow-images-public/katib/v1alpha3/katib-db-manager:v0.8.0 gcr.io/kubeflow-images-public/katib/v1alpha3/katib-ui:v0.8.0 gcr.io/kubeflow-images-public/kubernetes-sigs/application:1.0-beta gcr.io/kubeflow-images-public/metadata-frontend:v0.1.8 gcr.io/kubeflow-images-public/metadata:v0.1.11 gcr.io/kubeflow-images-public/pytorch-operator:v1.0.0-g047cf0f gcr.io/kubeflow-images-public/tf_operator:v1.0.0-g92389064 gcr.io/ml-pipeline/api-server:0.2.0 
gcr.io/ml-pipeline/envoy:metadata-grpc gcr.io/ml-pipeline/ml-pipeline-dataflow-tfdv:6ad2601ec7d04e842c212c50d5c78e548e12ddea gcr.io/ml-pipeline/ml-pipeline-dataflow-tfma:6ad2601ec7d04e842c212c50d5c78e548e12ddea gcr.io/ml-pipeline/ml-pipeline-dataflow-tf-predict:6ad2601ec7d04e842c212c50d5c78e548e12ddea gcr.io/ml-pipeline/ml-pipeline-dataflow-tft:6ad2601ec7d04e842c212c50d5c78e548e12ddea gcr.io/ml-pipeline/ml-pipeline-kubeflow-deployer:727c48c690c081b505c1f0979d11930bf1ef07c0 gcr.io/ml-pipeline/ml-pipeline-kubeflow-tf-trainer:5df2cdc1ed145320204e8bc73b59cdbd7b3da28f gcr.io/ml-pipeline/ml-pipeline-local-confusion-matrix:5df2cdc1ed145320204e8bc73b59cdbd7b3da28f gcr.io/ml-pipeline/ml-pipeline-local-roc:5df2cdc1ed145320204e8bc73b59cdbd7b3da28f gcr.io/ml-pipeline/persistenceagent:0.2.0 gcr.io/ml-pipeline/scheduledworkflow:0.2.0 gcr.io/ml-pipeline/viewer-crd-controller:0.2.0 gcr.io/ml-pipeline/visualization-server:0.2.0 gcr.io/spark-operator/spark-operator:v1beta2-1.0.0-2.4.4 gcr.io/tfx-oss-public/ml_metadata_store_server:v0.21.1 k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/kube-addon-manager:v9.0 k8s.gcr.io/kube-apiserver:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3 minio/minio:RELEASE.2018-02-09T22-40-05Z mysql:5.6 mysql:8 mysql:8.0.3 nvidia/k8s-device-plugin:1.0.0-beta4 quay.io/coreos/etcd:v3.3.13 quay.io/jetstack/cert-manager-cainjector:v0.11.0 quay.io/jetstack/cert-manager-controller:v0.11.0 quay.io/jetstack/cert-manager-webhook:v0.11.0 tensorflow/serving:1.11.1 tensorflow/tensorflow:1.8.0
2020-04-09T02:28:58.305871-0700 provision.py pid=1476/tid=3420/pytid=140356864358144 cmdutils:536 [INFO] [nloimKJQkmY] Command exited with ret: 2
2020-04-09T02:28:58.311995-0700 provision.py pid=1476/tid=1476/pytid=140356987021056 cli:149 [CRITICAL] An unexpected exception has occured
2020-04-09T02:28:58.314990-0700 provision.py pid=1476/tid=1476/pytid=140356987021056 cli:150 [CRITICAL] Traceback
(most recent call last):
  File "build/bdist.linux-x86_64/egg/rok_tasks/frontend/cli.py", line 131, in run
  File "/home/vagrant/provision.py", line 3925, in main
    provision_images(arrikto_registry, arrikto_repo, arrikto_tag)
  File "/home/vagrant/provision.py", line 692, in wrapper
    ret = func(*args, **kwargs)
  File "/home/vagrant/provision.py", line 3424, in provision_images
    existing = set(ref for img in json.loads(cmd.out)
  File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
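Reading the traceback: provision.py feeds the stdout of that large docker inspect call straight into json.loads. docker inspect exits non-zero when any queried image is missing (here the command exited with ret: 2), and if nothing is found its stdout can be empty, which is exactly what makes Python 2's json raise "No JSON object could be decoded". A minimal defensive sketch of that parsing step (the helper name is mine, not from provision.py):

```python
import json

def parse_inspect_output(raw):
    """Parse `docker inspect` stdout defensively.

    docker inspect prints a JSON array describing the objects it
    finds; when stdout is empty (e.g. none of the queried images
    exist yet), json.loads would raise the ValueError seen in the
    traceback, so treat that case as an empty result instead.
    """
    raw = raw.strip()
    if not raw:
        return []
    return json.loads(raw)
```

For example, parse_inspect_output("") returns [] rather than crashing, while valid inspect output is parsed as usual. This doesn't fix the underlying cause (the images weren't pulled, likely because the cluster never came up), but it shows why the provisioner dies with this particular exception.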