Closed: B1tVect0r closed this issue 1 year ago
The current setup only supports tcp or ssh in the DOCKER_HOST; it doesn't support the scenario where the socket is tunneled or mounted from another machine.
Possibly there could be a way (an env var?) to pass the remote host, even when using a Unix socket, instead of assuming it is published at localhost.
Possibly there could be a way (env var?) to pass the remote host, even when using a Unix socket
This is what I'm after; I believe I should be able to resolve the remote host both from the context where I'm running minikube start and from the resulting container. So presumably, if it were dialing 192.168.65.2:{minikube port} (or whatever the remote host ends up resolving to) rather than localhost, it would at least get further than it does now (not sure whether that would be sufficient to get it completely across the finish line, though). An illustrative sketch of what resolving that address could look like is below.
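To illustrate the idea, here is a minimal sketch of resolving the remote Docker host from inside the container; host.docker.internal and the helper name are assumptions for the example, not something minikube does today.

// Illustrative only: build the address that could be dialed instead of
// 127.0.0.1:{minikube port} when the Docker daemon lives on another machine.
package main

import (
	"fmt"
	"net"
)

func remoteDockerAddr(minikubePort string) (string, error) {
	// From inside a devcontainer the Docker host is typically reachable as
	// host.docker.internal (e.g. 192.168.65.2 on Docker Desktop).
	ips, err := net.LookupIP("host.docker.internal")
	if err != nil || len(ips) == 0 {
		return "", fmt.Errorf("resolving docker host: %v", err)
	}
	return net.JoinHostPort(ips[0].String(), minikubePort), nil
}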
Something else looks broken on that docker host:
tar (child): /preloaded.tar: Cannot read: Is a directory
tar (child): At beginning of tape, quitting now
Looks like bind mounts aren't working in your setup:
-v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro
This tries to find the file on the Docker host and fails, so Docker falls back to just creating an empty dir and mounting that.
Minikube should look for a remote host and skip the preload in that case.
I think the cache will fail in similar ways; "docker load" won't find it.
EDIT: the cache will still work, since the load is done by the client. It would be possible to do a similar workaround for the preload as well; see the sketch below.
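To make that concrete, here is a rough sketch of such a client-side workaround; pushing the preload through the client connection with "docker cp" is just one possible approach, and the paths and container name are illustrative.

// Rough sketch only: copy the preload tarball into the minikube container
// from the client side, so it works even when the Docker daemon is on another
// machine (the same reason the client-side "docker load" keeps the cache working).
package main

import (
	"fmt"
	"os/exec"
)

func copyPreloadIntoContainer(preloadPath, containerName string) error {
	// "docker cp" streams the file over the client's API connection,
	// so no bind mount on the remote Docker host is required.
	cmd := exec.Command("docker", "cp", preloadPath, containerName+":/preloaded.tar")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("copying preload into %s: %v: %s", containerName, err, out)
	}
	return nil
}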
Thought I'd add a little extra evidence here. We're noticing something strange when bind mounting our docker.sock file in different ways within our VS Code devcontainer (I don't believe the issue is with VS Code's devcontainer setup; I'm just noting that the below is configuration from one of our devcontainer.json files):
"mounts": [
"source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
]
vs
"runArgs": [
"-v=/var/run/docker.sock/:/var/run/docker.sock"
]
The former causes minikube to not start up successfully (with a host timeout error, I believe) and leverages the docker --mount flag under the hood. For the latter, minikube does indeed start successfully when we manually use the --volume flag.
I haven't tested taking the VS Code abstraction out of the picture, but I imagine we would see the same thing.
Docker calls out some subtle differences between the two, but I haven't noticed any difference in the resulting mounts from one example to the other.
From the Docker documentation:
Differences between "--mount" and "--volume"
The --mount flag supports most options that are supported by the -v or --volume flag for docker run, with some important exceptions:
The --mount flag allows you to specify a volume driver and volume driver options per volume, without creating the volumes in advance. In contrast, docker run allows you to specify a single volume driver which is shared by all volumes, using the --volume-driver flag.
The --mount flag allows you to specify custom metadata (“labels”) for a volume, before the volume is created.
When you use --mount with type=bind, the host-path must refer to an existing path on the host. The path will not be created for you and the service will fail with an error if the path does not exist.
The --mount flag does not allow you to relabel a volume with Z or z flags, which are used for selinux labeling.
If I can provide further information, let me know. We would really like to use the mounts section of VS Code devcontainers for bind mounting our Docker socket, as it gets around some annoying permissions issues, and the --mount flag is recommended over --volume by Docker.
The current code doesn't know about remote servers with local sockets; it assumes that remote servers use tcp and local servers use unix...
It should probably have a boolean to override this, similar to allowing both Docker Engine and Docker Desktop (on Linux, that is). A rough sketch of what such an override could look like follows.
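Just to sketch the idea, here is a minimal example of such an override; the MINIKUBE_FORCE_REMOTE_DOCKER variable and the helper function are hypothetical, not existing minikube code.

// Hypothetical sketch of a boolean override for treating a unix-socket
// DOCKER_HOST as belonging to a remote daemon.
package main

import (
	"os"
	"strings"
)

func isRemoteDockerHost(dockerHost string) bool {
	// Explicit opt-in for the "remote daemon behind a local socket" case
	// (a bind-mounted or tunneled docker.sock).
	if os.Getenv("MINIKUBE_FORCE_REMOTE_DOCKER") == "true" {
		return true
	}
	// Current assumption: only tcp:// and ssh:// hosts are remote.
	return strings.HasPrefix(dockerHost, "tcp://") || strings.HasPrefix(dockerHost, "ssh://")
}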
For the use case of development containers, we try to avoid the Docker-in-Docker situation and instead just install the Docker CLI and bind mount the docker.sock. We also mount ~/.kube/config and ~/.minikube, which allows us to communicate with the same minikube cluster from different dev containers.
Use cases and specifics aside, minikube successfully starts within a devcontainer that uses --volume to bind mount the docker.sock, but not when using --mount. When I try to start minikube with more verbose logging inside the devcontainer that uses --mount to bind mount the docker.sock, I see output similar to the OP's.
Interestingly, I didn't realize I had a trailing slash on my docker.sock --volume mount (which is definitely odd); when I removed that trailing slash it stopped working. So this works:
"runArgs": [
"-v=/var/run/docker.sock/:/var/run/docker.sock"
]
but this does not:
"runArgs": [
"-v=/var/run/docker.sock:/var/run/docker.sock"
]
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
@B1tVect0r did you find a solution?
The current code doesn't know about remote servers with local sockets; it assumes that remote servers use tcp and local servers use unix...
It should probably have a boolean to override this, similar to allowing both Docker Engine and Docker Desktop (on Linux, that is)
@afbjorklund this issue was closed but we don't have a solution :(
My solution is:
export DOCKER_HOST="tcp://$(dig +short host.docker.internal):2375"
(as Docker Desktop), together with --listen-address=0.0.0.0.
But when running minikube tunnel --alsologtostderr --v=2, we received
ssh: connect to host 127.0.0.1 port ####: Connection refused
because our actual ssh target address is the one returned by dig +short host.docker.internal, not 127.0.0.1.
The solution should be to fix this line
https://github.com/kubernetes/minikube/blob/6bdc0f1506a4fcded5216d96003ae549394232ab/pkg/minikube/tunnel/kic/ssh_conn.go#L52
with the same logic as this one:
https://github.com/kubernetes/minikube/blob/6bdc0f1506a4fcded5216d96003ae549394232ab/pkg/minikube/sshutil/sshutil.go#L85
My workaround is below (unfortunately, I can't propose a PR myself, due to my limited Go experience):
func createSSHConn(name, sshPort, sshKey, bindAddress string, resourcePorts []int32, resourceIP string, resourceName string) *sshConn {
	// Resolve the SSH user and host from the kic (docker) driver instead of
	// hard-coding docker@127.0.0.1, so a remote Docker host gets dialed correctly.
	def := registry.Driver(driver.Docker)
	sshConnUserHost := "docker@127.0.0.1"
	if !def.Empty() {
		kic := def.Init()
		ip, err := kic.GetSSHHostname()
		if err == nil {
			sshConnUserHost = kic.GetSSHUsername() + "@" + ip
		}
	}
	// extract sshArgs
	sshArgs := []string{
		// TODO: document the options here
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "IdentitiesOnly=yes",
		"-N",
		sshConnUserHost,
		"-p", sshPort,
		"-i", sshKey,
	}
	// (rest of the function unchanged)
_but you can use NodePort as well_
What Happened?
Running minikube start --driver=docker from inside a container that has the host Docker socket mounted (i.e., a Docker-from-Docker setup) fails to complete. The Docker container starts properly and is visible from both the host and the container, but the minikube start process fails to proceed, as it seems to be hard-coded to dial 127.0.0.1:{minikube container port} for some other post-container-start process. I've tried fiddling with all of the IP-related options for the minikube start command that I can find, to no avail. Is there a way to do this that I'm missing?
Attach the log file
Operating System
Other
Driver
Docker