Closed · roy-work closed this 3 months ago
Even if I attempt to race `minikube start` & `chown docker: …` while it is running, it still fails with:

🤦 StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: sudo mkdir -p /etc/docker /etc/docker /etc/docker: Process exited with status 1
stdout:
stderr:
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
Indeed, `sudo` does not have the setuid bit set.
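The error message above is exactly what sudo prints when either of its two preconditions fails: the binary must be owned by uid 0 and carry the setuid bit (mode `4xxx`). A quick sketch of what that check looks like — demonstrated on a scratch file, since inspecting or repairing `/usr/bin/sudo` itself needs root:

```shell
# On a healthy system, this prints "4755 root":
#   stat -c '%a %U' /usr/bin/sudo
# The setuid bit itself, shown on a scratch file we own:
f=$(mktemp)
chmod 4755 "$f"        # leading 4 = setuid bit on
stat -c '%a' "$f"      # prints 4755
rm -f "$f"
```

The repair, run as root, would be along the lines of `chown root:root /usr/bin/sudo && chmod 4755 /usr/bin/sudo` — standard sudo-install permissions, not anything minikube-specific.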
I eventually found that `minikube delete` takes flags?!
» minikube delete --all --purge
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/home/roy/.minikube]
📌 Kicbase images have not been deleted. To delete images run:
▪ docker rmi gcr.io/k8s-minikube/kicbase:v0.0.42
I also ran the `docker rmi`. That worked. I still have no idea why that worked, or even how I got into this state in the first place.
…why does a normal, unadorned `minikube delete` say

💀 Removed all traces of the "minikube" cluster.

…when that's clearly not true?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- /remove-lifecycle stale
- /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- /remove-lifecycle rotten
- /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- /reopen
- /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What Happened?
The logs are full of:
I can't SSH either (the port here is the port from `docker ps`).

The reason minikube can't SSH is that the `authorized_keys` file is unusable, due to bad permissions; we can see this if we `docker exec` into the container: `~docker` should be owned by `docker`, not by `root`, and similarly, `~docker/.ssh` should too. `sshd` simply can't get to `authorized_keys`.

If we correct that:
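A sketch of the kind of correction meant here, shown on a scratch directory so it is runnable anywhere. Inside the kicbase container the same chown/chmod would go through `docker exec` (the container name `minikube` below is the driver default and an assumption):

```shell
# In the real container, hypothetically:
#   docker exec minikube chown -R docker:docker /home/docker
#   docker exec minikube chmod 700 /home/docker/.ssh
#   docker exec minikube chmod 600 /home/docker/.ssh/authorized_keys
# The same layout sshd expects, recreated on a scratch directory:
home=$(mktemp -d)
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 755 "$home"                         # home dir traversable
chmod 700 "$home/.ssh"                    # sshd wants .ssh private
chmod 600 "$home/.ssh/authorized_keys"    # and authorized_keys too
stat -c '%a' "$home/.ssh/authorized_keys" # prints 600
```

sshd's `StrictModes` (on by default) rejects keys whose home directory, `.ssh`, or `authorized_keys` are writable by or owned by the wrong user, which is why the `root`-owned `~docker` breaks key auth outright.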
Then we're able to SSH:
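Hypothetically, that SSH step might look like the following — the host port is whatever `docker ps` shows mapped to the container's port 22, and the key path is the machine key minikube generates; both are assumptions here, not taken from the log:

```shell
# Grab the host port mapped to the container's sshd, then connect as the
# "docker" user with minikube's machine key (names and paths assumed).
port=$(docker port minikube 22/tcp | head -n1 | sed 's/.*://')
ssh -i ~/.minikube/machines/minikube/id_rsa -p "$port" docker@127.0.0.1
```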
But why is `~docker` inside the container so messed up to begin with?

Attach the log file

`minikube logs` fails.

minikube.log
minikube_logs_05c6d87097f2294684e6847624a5fb0bff018ece_0.log
Operating System
Ubuntu
Driver
None