k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/

How to access services on host machine? #101

Closed zombor closed 4 years ago

zombor commented 5 years ago

Hi, I'm running k3d using Docker for Mac and I need to access a service running on the host machine (the Mac). From inside a pod I can ping the DHCP address of the host, but of course this can change.

Is there a static way to access the host machine?

Alternatively, I need to do this in order to access a custom DNS server that is running on the host machine. If the DNS settings of the host propagated to pods (like they do for containers running in regular Docker), that would also solve my problem.

inercia commented 4 years ago

I think most of the solutions available at the moment would require some kind of support from the environment. For example, you could use a Service with an externalName, like:

kind: Service
apiVersion: v1
metadata:
 name: the-host
spec:
 type: ExternalName
 externalName: ds149763.mlab.com

but this would require that your cluster can resolve the externalName. We could also expose the host's IP with a Service/Endpoint, but that would require a constant IP and, as you mentioned, it could change...

luisdavim commented 4 years ago

Normally, using host.docker.internal should work to access the host on a Mac or Windows machine, but I've tried it and it doesn't work. If I docker exec into the container running the k3d cluster I can telnet to host.docker.internal, but if I kubectl exec into a pod running in my cluster I get bad address 'host.docker.internal' when trying to reach the service on my Mac.

iwilltry42 commented 4 years ago

Normally, using host.docker.internal should work to access the host on a Mac or Windows machine, but I've tried it and it doesn't work. If I docker exec into the container running the k3d cluster I can telnet to host.docker.internal, but if I kubectl exec into a pod running in my cluster I get bad address 'host.docker.internal' when trying to reach the service on my Mac.

I guess that most pods use ClusterFirst DNS and don't rely on the "host network" (i.e. host name resolution of the underlying k3d/k3s container). You may want to try changing it to ClusterFirstWithHostNet (which requires the pod to run with host network access) or a different setting.
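
For reference, a minimal pod sketch of that setting (pod name and image are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: host-dns-test              # placeholder name
spec:
  hostNetwork: true                # ClusterFirstWithHostNet requires host networking
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: shell
      image: busybox               # placeholder image
      command: ["sleep", "3600"]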

luisdavim commented 4 years ago

Is there a way to make the cluster DNS resolve the same way as the node? I have some external resources that I can reach from the k3d node container but from pods in the cluster the names don't resolve.

iwilltry42 commented 4 years ago

@luisdavim, CoreDNS is quite flexible, so I guess you can tweak it in any way you like. E.g. you can make it use a hosts file (https://coredns.io/plugins/hosts/) or forward requests to an upstream DNS (https://coredns.io/manual/toc/#forwarding).
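
For illustration, a rough Corefile sketch combining both ideas (the 192.168.98.1 address and the 8.8.8.8 upstream are placeholders, not values k3s/k3d sets):

.:53 {
    errors
    health
    # static entry for the host machine (placeholder IP)
    hosts {
        192.168.98.1 host.docker.internal
        fallthrough
    }
    # cluster-internal names stay with the kubernetes plugin
    kubernetes cluster.local in-addr.arpa ip6.arpa
    # everything else goes to an upstream resolver (placeholder)
    forward . 8.8.8.8
    cache 30
}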

drgnkpr commented 4 years ago

Is it possible to add host.docker.internal to NodeHosts in coredns configmap?

bitjson commented 4 years ago

I think I have the same question – I'm connecting a Hasura deployment to a managed Postgres database which is not running inside the Kubernetes cluster.

To test locally (also on macOS and using Docker for Mac), I'd like the Hasura pods to be able to connect to a native Postgres DB running on the host. (I can connect to an internal Postgres instance, but the non-native performance is much worse.)

To verify I can connect, I have a pod running a dpage/pgadmin4 image, and I'm using kubectl exec -it pgadmin-[...] sh to use the pg_isready utility. I'm sure the host DB is listening for all connections (on IPv4 address "0.0.0.0", port 5432), so I'm fairly sure the problem is between the cluster's Docker containers and the pod.

I've tried:

* `kubectl port-forward` – I need to forward the host `localhost:5432` into a particular set of pods – I don't think this is useful? (`kubectl port-forward` only allows me to quickly expose ports from pods to the host?)

* [via NodePort – Exposing Services](https://k3d.io/usage/guides/exposing_services/) – I tried several configurations here, but I'm also fairly sure this is for "outbound" services – is there some way to configure an "inbound" service which exposes a port from the host to the cluster? Maybe an `ExternalName` service using [`host.docker.internal`](https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds)? (I haven't figured out a working configuration yet.)

* `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` – this might be a good solution, but I'm getting the error: `3 node(s) didn't have free ports for the requested pod ports.` (like #104) and I'm having trouble preventing the conflicts.

I'm not sure if I'm even approaching the problem correctly – can anyone recommend a way to expose a host-running Postgres instance to pods inside a k3d cluster?

bitjson commented 4 years ago

Does k3d include something comparable to host.minikube.internal on Minikube?

morinap commented 4 years ago

@bitjson I'm no longer using Minikube so I can't speak to whether it's been updated in more recent versions, but the way I tackled this when I had the problem was actually to create an alias on my lo interface on OS X, using an IP range that wasn't in conflict with anything else I used, and then reference that IP from my kube pods.

I created a plist file like this and loaded it:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.user.lo0-loopback</string>
    <key>ProgramArguments</key>
    <array>
      <string>/sbin/ifconfig</string>
      <string>lo0</string>
      <string>alias</string>
      <string>192.168.98.1</string>
      <string>255.255.255.0</string>
    </array>
    <key>RunAtLoad</key> <true/>
    <key>Nice</key>
    <integer>10</integer>
    <key>KeepAlive</key>
    <false/>
    <key>AbandonProcessGroup</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/var/log/loopback-alias.log</string>
    <key>StandardOutPath</key>
    <string>/var/log/loopback-alias.log</string>
  </dict>
</plist>
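
A plist like this would typically be loaded with launchctl, for example (the path below is only an example location):

sudo cp com.user.lo0-loopback.plist /Library/LaunchDaemons/
sudo launchctl load /Library/LaunchDaemons/com.user.lo0-loopback.plist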

Then, from my pods, I just referenced 192.168.98.1 to hit services on my OS X host.

Hope this helps.

bitjson commented 4 years ago

Hey @morinap – thanks for the response – to clarify, did you use this solution for accessing host machine services from a k3d cluster? Are pods in your cluster able to connect to 192.168.98.1 after you load the plist file?

I'd prefer a cross-platform solution, but even macOS-only would be a good start. 👍

morinap commented 4 years ago

@bitjson Yes, that's correct - this was accessing host machine services from k3d (I think I had a minor brain freeze, not sure why I mentioned Minikube). Pods in my cluster could connect to 192.168.98.1 because it's not within the scope of any subnets in the cluster, so it is routed out of the default gateway, which happens to be managed by the host machine, which recognizes that address as an alias to its own interface and routes it accordingly.

Definitely not a 100% ideal solution but it served as a nice stopgap for me.

whazor commented 4 years ago

As mentioned above, editing the CoreDNS ConfigMap worked for me.

Run: kubectl -n kube-system edit configmap coredns

You will find the following section:

  NodeHosts: |
    172.30.0.3 k3d-k3s-default-server-0

In my case I added the line 172.30.0.1 host.docker.internal. Check the IP of your cluster network and make sure it ends with .1 (the Docker network's gateway address).

blaggacao commented 4 years ago

@iwilltry42 would @Whazor's solution be a candidate implementation for #350 to consider?

bitjson commented 4 years ago

Thank you @morinap and @Whazor for your help! I ended up trying both ways, and I think I've settled on something that works for now.

@Whazor I had trouble getting the additional host.docker.internal line in NodeHosts working – I tried restarting my pod(s) and coredns, but the pods still weren't resolving the new address. Is that the only configuration change you made? (Any recommendations for how to debug the coredns configuration?)

For now I'm manually resolving the host IP during local deployments using dig in a temporary docker container:

helm upgrade --install release-name charts/project-name \
  --set postgres.externalDbUrl=postgres://user:very_insecure_postgres_password@$(docker run --rm toolbelt/dig@sha256:a39b94e87ffe3774fc37dbffab642b2817467ffa57852f740ba3eccf41afca9f +short host.docker.internal | tail -n1 | tr -d '\n'):5432/postgres

Anyone know when or how often Docker for Mac changes the resolution of host.docker.internal? It seems to survive restarts, so it's at least permanent enough for my use case.

(Though I'd still love to be able to configure coredns to resolve host.docker.internal (or maybe host.k3d.internal) without manually setting IP addresses in scripts.)

whazor commented 4 years ago

  NodeHosts: |
    172.30.0.3 k3d-k3s-default-server-0
    172.30.0.1 host.docker.internal
    172.30.0.1 registry.local

This is how it looks for me right now. I did restart the coredns pod (basically deleting it), but this was not needed. I did notice that while my pods can use these DNS entries, Kubernetes itself (e.g. when pulling images) does not resolve them. So it is not possible to pull an image from host.docker.internal or registry.local; instead I used k3d image import to copy the images with the correct URL.

The IPs are different per Kubernetes cluster, so make sure you use the correct IP. You can double-check the IP with traceroute and you can debug via dig. With dig you do dig host.docker.internal @10.43.0.10 (I think that should be the kube-dns IP address).
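
A quick way to check the resolution from inside the cluster is a throwaway pod, e.g. (assuming the busybox image is available):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup host.docker.internal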

iwilltry42 commented 4 years ago

@bitjson

To test locally (also on macOS and using Docker for Mac), I'd like the Hasura pods to be able to connect to a native Postgres DB running on the host. (I can connect to an internal Postgres instance, but the non-native performance is much worse.)

While this has nothing to do with this issue, I'm curious to know which performance issues you experience. In general, there shouldn't be much of a problem (we're running Postgres in Kubernetes in production at work), but it may certainly be that the Docker VM on macOS is limiting the resources too much.

* `kubectl port-forward` – I need to forward the host `localhost:5432` into a particular set of pods – I don't think this is useful? (`kubectl port-forward` only allows me to quickly expose ports from pods to the host?)

* [via NodePort – Exposing Services](https://k3d.io/usage/guides/exposing_services/) – I tried several configurations here, but I'm also fairly sure this is for "outbound" services – is there some way to configure an "inbound" service which exposes a port from the host to the cluster? Maybe an `ExternalName` service using [`host.docker.internal`](https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds)? (I haven't figured out a working configuration yet.)

* `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` – this might be a good solution, but I'm getting the error: `3 node(s) didn't have free ports for the requested pod ports.` (like #104) and I'm having trouble preventing the conflicts.

You're right there with your observations: all of those solutions are mostly for getting access from the outside into the cluster, not vice versa. We're sometimes using a combination of a Service and an Endpoint with a hardcoded IP to advertise cluster-external services inside the cluster, so that could be a workaround as well.
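
For illustration, a rough sketch of that Service/Endpoints pairing (the name host-postgres, the port and the 192.168.98.1 address are placeholders; the IP would be wherever the host is reachable from the cluster):

apiVersion: v1
kind: Service
metadata:
  name: host-postgres          # placeholder; no selector on purpose
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: host-postgres          # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.98.1       # placeholder: IP where the host is reachable
    ports:
      - port: 5432

Pods can then reach the host via the cluster-internal name host-postgres:5432.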

iwilltry42 commented 4 years ago

I will now have a look at how we can set some value for e.g. host.k3d.internal, though I dislike the idea of modifying CoreDNS, as this will most probably mean that we have to pull in some huge dependencies to be able to update the value every time the IP changes (which could be right after a restart of the host, if it's not a DHCP-static IP).

blaggacao commented 4 years ago

@iwilltry42 I think routing to the bridge gateway should be OK; for this technique to work, it is a prerequisite anyway that host services are bound not to the loopback interface but usually to 0.0.0.0.

docker network inspect k3d-playground
[
    {
        "Name": "k3d-playground",
        "Id": "c4e13cfa59695935752d55a97398f38bf90fff40aac8308eb76ef771b24c4a8a",
        "Created": "2020-09-21T17:41:53.624705081-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.26.0.0/16",
                    "Gateway": "172.26.0.1"    # this one
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "app": "k3d"
        }
    }
]
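
For reference, the gateway IP can also be pulled out of that output directly (the network name k3d-playground is just this example cluster):

docker network inspect k3d-playground --format '{{ (index .IPAM.Config 0).Gateway }}'
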
iwilltry42 commented 4 years ago

@iwilltry42 I think routing to the bridge gateway should be OK; for this technique to work, it is a prerequisite anyway that host services are bound not to the loopback interface but usually to 0.0.0.0.

docker network inspect k3d-playground
[
    {
        "Name": "k3d-playground",
        "Id": "c4e13cfa59695935752d55a97398f38bf90fff40aac8308eb76ef771b24c4a8a",
        "Created": "2020-09-21T17:41:53.624705081-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.26.0.0/16",
                    "Gateway": "172.26.0.1"    # this one
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "app": "k3d"
        }
    }
]

Yep, thanks :+1: That's exactly what I'm implementing right now, similar to the way Minikube does it :+1:

cscetbon commented 4 years ago

thanks @iwilltry42 for taking care of that issue. It's annoying ...

iwilltry42 commented 4 years ago

Hey there :wave: Just giving an update on this: it took a little longer than usual, since I've been quite busy at work recently and yesterday I struggled setting up a proper Windows VM with nested virtualization to test the changes on Docker for Desktop. I guess I can get the PR up today and have it tested on Linux, Docker for Desktop (Windows 10 w/ Hyper-V) and Docker for Desktop (Windows 10 w/ WSL2). Stay tuned :grimacing:

iwilltry42 commented 4 years ago

So by now, we're successfully adding that entry to the hosts' (i.e. k3d nodes') /etc/hosts and it works fine on all systems (using different approaches). However, this does not really have an effect on the pods, since they're not using the host's /etc/hosts :confounded: I'm now looking into a way to re-use k3s' built-in NodeHosts configmap for CoreDNS to achieve the desired outcome.
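
For illustration, a rough manual sketch of that idea (the IPs and the host.k3d.internal name are placeholders, and this is not necessarily how k3d will implement it):

# show the NodeHosts entries that k3s maintains for CoreDNS
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'

# replace the key with an extended version that includes a host entry
kubectl -n kube-system patch configmap coredns --type merge \
  -p '{"data":{"NodeHosts":"172.30.0.3 k3d-k3s-default-server-0\n172.30.0.1 host.k3d.internal\n"}}'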

cscetbon commented 4 years ago

@iwilltry42 can you confirm it should allow a pod from cluster A to connect to the API server in cluster B, for instance?

blaggacao commented 4 years ago

@cscetbon Trying to invert the burden of proof to keep Thorsten's hands free on the stern of #360: Where would you suggest it couldn't work?

bitjson commented 4 years ago

https://github.com/rancher/k3d/issues/101#issuecomment-697653547

[...] I'm curious to know which performance issues you experience. In general, there shouldn't be much of a problem (we're running Postgres in Kubernetes in production at work), but it may certainly be that the Docker VM on macOS is limiting the resources too much.

@iwilltry42 sorry I missed your question last week – so far, I've noticed that bulk insert write speeds are far worse when running Postgres on Docker for Desktop when compared to the native Postgres for macOS. In my case, performance is at least 50% slower, even if I configure Docker for Desktop to use all available CPUs, memory, etc.

I doubt the difference would be noticeable for most applications, but in my case, a 2x improvement in write speeds can amount to a multi-day reduction in workload processing time. So it's been important to have a host-operated Postgres instance available during development.

[...] yesterday I struggled setting up a proper Windows VM with nested virtualization to test the changes on Docker for Desktop [...]

If I can be of any help in testing on macOS, please let me know. It looks like networking should be very similar to Docker for Desktop on Windows, but I'd be very happy to test anything you suspect may be different.

cscetbon commented 4 years ago

@cscetbon Trying to invert the burden of proof to keep Thorsten's hands free on the stern of #360: Where would you suggest it couldn't work?

I was trying to get a confirmation but I think that's what you implicitly did 😉

iwilltry42 commented 4 years ago

Here's a test release of #360: https://github.com/rancher/k3d/releases/tag/v3.1.0-dev.0
Please give it a try and leave feedback on #360 :+1: