kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Multiple profiles cannot share docker network #14799

Closed · zarenner closed 1 year ago

zarenner commented 2 years ago

What Happened?

While --network allows selecting a specific docker network, attempting to attach multiple clusters to that network fails.

Repro steps:

  1. minikube start -p test1 --driver docker --network sharednetwork
  2. test1 profile is created successfully. docker network list shows new sharednetwork network.
  3. minikube start -p test2 --driver docker --network sharednetwork -v=1 --alsologtostderr
  4. 🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
    ❌  Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use

    (see logs below for details)

Looking at https://github.com/kubernetes/minikube/blob/879c592e02668cd85334fce9b28c6ac6b6fb8780/pkg/drivers/kic/kic.go#L98-L107 and https://github.com/kubernetes/minikube/blob/879c592e02668cd85334fce9b28c6ac6b6fb8780/pkg/minikube/driver/driver.go#L371-L384, it seems the logic that determines the IP address is based on the node index and doesn't consider multiple profiles.
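If I'm reading those two files correctly, the calculation boils down to roughly the following. This is a simplified sketch, not the actual minikube source, and the helper names are mine; the point is that the first (only) node of every single-node profile gets index 1, so every profile on the network asks for gateway+1:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// nodeIndexFromName loosely mimics the index-from-machine-name logic in
// pkg/minikube/driver/driver.go: "test2" -> 1 (first node), "test2-m02" -> 2.
func nodeIndexFromName(name string) int {
	parts := strings.Split(name, "-")
	last := parts[len(parts)-1]
	if strings.HasPrefix(last, "m") {
		if i, err := strconv.Atoi(strings.TrimLeft(last, "m0")); err == nil {
			return i
		}
	}
	return 1
}

// staticIPForNode sketches the kic.go calculation: network gateway plus node
// index, with no awareness of other profiles attached to the same network.
func staticIPForNode(gateway net.IP, machineName string) net.IP {
	ip := make(net.IP, 4)
	copy(ip, gateway.To4())
	ip[3] += byte(nodeIndexFromName(machineName))
	return ip
}

func main() {
	gw := net.ParseIP("192.168.49.1")
	fmt.Println(staticIPForNode(gw, "test1")) // 192.168.49.2
	fmt.Println(staticIPForNode(gw, "test2")) // 192.168.49.2 again: collision
}
```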

It seems like the logic should be profile-aware, or should otherwise allow choosing an IP that doesn't match the node index (e.g. exclude IPs already used on the network and take the next available one; see the sketch below). Alternatively, perhaps there's a better way to communicate across clusters, or this scenario simply isn't supported, in which case a clearer error message should be emitted.
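The next-available approach could be fairly small. Here's a hypothetical sketch (this is not current minikube behavior); the `used` set could be populated from the ContainerIPs that minikube already gathers via `docker network inspect`, visible in the log below:

```go
package main

import (
	"fmt"
	"net"
)

// nextFreeIP walks the /24 from gateway+1 and returns the first address not
// already attached to the network, instead of trusting the node index.
func nextFreeIP(gateway net.IP, used map[string]bool) (net.IP, error) {
	ip := make(net.IP, 4)
	copy(ip, gateway.To4())
	for last := int(ip[3]) + 1; last <= 254; last++ {
		ip[3] = byte(last)
		if !used[ip.String()] {
			return ip, nil
		}
	}
	return nil, fmt.Errorf("no free address left in %s/24", gateway)
}

func main() {
	used := map[string]bool{"192.168.49.2": true} // test1 already holds .2
	ip, _ := nextFreeIP(net.ParseIP("192.168.49.1"), used)
	fmt.Println(ip) // 192.168.49.3, so test2 would no longer collide
}
```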

Workarounds I've found to communicate across clusters:

Attach the log file

    I0816 00:55:26.987117   17582 cli_runner.go:133] Run: docker network inspect sharednetwork --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
    I0816 00:55:27.019030   17582 network_create.go:67] Found existing network {name:sharednetwork subnet:0xc000f782d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
    I0816 00:55:27.019099   17582 kic.go:106] calculated static IP "192.168.49.2" for the "test2" container
    I0816 00:55:27.019224   17582 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
    I0816 00:55:27.049742   17582 cli_runner.go:133] Run: docker volume create test2 --label name.minikube.sigs.k8s.io=test2 --label created_by.minikube.sigs.k8s.io=true
    I0816 00:55:27.119180   17582 oci.go:102] Successfully created a docker volume test2
    I0816 00:55:27.119296   17582 cli_runner.go:133] Run: docker run --rm --name test2-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test2 --entrypoint /usr/bin/test -v test2:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
    I0816 00:55:28.492667   17582 cli_runner.go:186] Completed: docker run --rm --name test2-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test2 --entrypoint /usr/bin/test -v test2:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: (1.37330149s)
    I0816 00:55:28.492720   17582 oci.go:106] Successfully prepared a docker volume test2
    I0816 00:55:28.492759   17582 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
    I0816 00:55:28.492790   17582 kic.go:179] Starting extracting preloaded images to volume ...
    I0816 00:55:28.492890   17582 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/zarenner/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v test2:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
    I0816 00:55:38.583125   17582 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/zarenner/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v test2:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (10.090146555s)
    I0816 00:55:38.583181   17582 kic.go:188] duration metric: took 10.090387 seconds to extract preloaded images to volume
    W0816 00:55:38.583237   17582 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
    W0816 00:55:38.583257   17582 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
    I0816 00:55:38.583348   17582 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
    I0816 00:55:38.664554   17582 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test2 --name test2 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test2 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test2 --network sharednetwork --ip 192.168.49.2 --volume test2:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
    W0816 00:55:38.845040   17582 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test2 --name test2 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test2 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test2 --network sharednetwork --ip 192.168.49.2 --volume test2:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 returned with exit code 125
    ❌  Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use 

Operating System

Ubuntu

Driver

Docker

RA489 commented 2 years ago

/kind support

zarenner commented 2 years ago

Turns out manually creating/connecting an additional shared network is NOT an option either, because it trips the "container addresses should have 2 values, got 3 values" check: https://github.com/kubernetes/minikube/blob/879c592e02668cd85334fce9b28c6ac6b6fb8780/pkg/drivers/kic/oci/network.go#L227-L244
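That failure makes sense if, as the linked code appears to do, the container IP is read with an inspect template that ranges over every attached network with no separator between iterations. A toy reproduction of the value counting (the addresses are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// Mimics parsing the output of something like:
//   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' <node>
// With one network the output splits into exactly 2 fields (IPv4 plus an
// empty IPv6); a second attached network runs straight into the first
// field's tail and produces 3.
func main() {
	oneNetwork := "192.168.49.2,"
	twoNetworks := "192.168.49.2,192.168.58.2,"

	for _, out := range []string{oneNetwork, twoNetworks} {
		ips := strings.Split(out, ",")
		fmt.Printf("%q -> %d values: %v\n", out, len(ips), ips)
	}
}
```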

That seems to leave proxying traffic between nodes through the host as perhaps the only working way to communicate across clusters?
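For anyone else landing here, the host-proxy idea can be as simple as a TCP forwarder that listens on the host and dials into the other cluster's network. A rough sketch; both addresses are hypothetical and would need to match your own setup:

```go
package main

import (
	"io"
	"log"
	"net"
)

const (
	listenAddr = "0.0.0.0:9443"      // host port, reachable from cluster A (e.g. via the bridge gateway IP)
	targetAddr = "192.168.58.2:8443" // cluster B's node IP on its own docker network
)

func main() {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		src, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(src net.Conn) {
			defer src.Close()
			dst, err := net.Dial("tcp", targetAddr)
			if err != nil {
				log.Print(err)
				return
			}
			defer dst.Close()
			go io.Copy(dst, src) // relay bytes in both directions
			io.Copy(src, dst)
		}(src)
	}
}
```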

medyagh commented 1 year ago

@zarenner I think you might have found a case that we don't have support for.

By design, separate profiles (clusters) are totally separate clusters. Do you mind sharing why you would want separate clusters to share the same network?

This might be an easy fix if you or anyone else wants to contribute it to minikube.

The network-creation logic for the docker driver can be found in minikube/pkg/drivers/kic/oci/network_create.go: https://github.com/medyagh/minikube/blob/879c592e02668cd85334fce9b28c6ac6b6fb8780/pkg/drivers/kic/oci/network_create.go#L17

zarenner commented 1 year ago

@medyagh We were implementing a gateway / proxying service that fronts multiple clusters. We wanted our minikube-based dev environment to roughly match production in this manner, hence bridging/sharing their networks in some way.

In the end we decided instead to use separate ingress gateways, namespaces, etc. to mimic multiple clusters within a single minikube cluster. That's less resource-intensive anyway, and so far it's been sufficient for our needs.

As such, at least in the short term I think it's unlikely that I'll get around to adding support for this; feel free to close this issue if desired.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/14799#issuecomment-1435812582):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.