tailscale / tailscale

The easiest, most secure way to use WireGuard and 2FA.
https://tailscale.com
BSD 3-Clause "New" or "Revised" License

Expose Kubernetes API Server MagicDNS w/ Egress + SSL #11647

Open saadbahir opened 3 months ago

saadbahir commented 3 months ago

What is the issue?

Hello everyone,

Context

I have the following use case:

Goal:

Resolve (and reach) the Cluster 2 API server's MagicDNS name from pods running in Cluster 1. Today the lookup fails:

kubectl exec busybox -- nslookup my-cluster.tailscale-domain.ts.net
nslookup: can't resolve 'my-cluster.tailscale-domain.ts.net'

Problem:

  1. Cluster 1 cannot connect to Cluster 2 using MagicDNS.
  2. I could try to set up Tailscale on every node, but 1/ it is very cumbersome and impractical, and 2/ I use AWS EC2, which is not well suited to enabling MagicDNS.
  3. I tried using egress on Cluster 1 to expose my Cluster 2 API server proxy's MagicDNS name (see the egress Service sketch below), but it only supports HTTP, and I want to keep SSL validation on my cluster (for obvious security reasons).
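For reference, cluster egress with the Tailscale Kubernetes operator is normally set up by annotating an ExternalName Service with the tailnet FQDN to reach. A minimal sketch, reusing the MagicDNS name from above (the Service name is a placeholder, not from this issue):

apiVersion: v1
kind: Service
metadata:
  name: cluster2-apiserver                 # hypothetical name
  annotations:
    tailscale.com/tailnet-fqdn: my-cluster.tailscale-domain.ts.net
spec:
  type: ExternalName
  externalName: placeholder                # any value; the operator rewrites it to point at the egress proxy

Workloads in Cluster 1 would then talk to cluster2-apiserver.<namespace>.svc.cluster.local, which is presumably where the SSL problem comes in: the API server proxy serves a certificate for its MagicDNS name, not for the in-cluster Service name.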

What I have done so far:

I also tried editing the CoreDNS ConfigMap, following this recommendation, to add the MagicDNS nameserver.
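For context, that edit is typically an extra ts.net stanza in the coredns ConfigMap's Corefile that forwards tailnet names to the in-cluster Tailscale nameserver. A rough sketch, assuming such a nameserver is deployed; the ClusterIP is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    # ... existing default stanza left unchanged ...
    ts.net {
        errors
        cache 30
        forward . 10.96.0.50   # ClusterIP of the Tailscale nameserver Service (placeholder)
    }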

Maybe I misunderstand the premise of the Kubernetes operator and my use case is not covered.

Has anyone run into this use case? If so, what can I try to resolve it?

Thank you for your help!

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Linux

OS version

No response

Tailscale version

No response

Other software

No response

Bug report

No response

irbekrm commented 3 months ago

Hi @saadbahir

Thank you for creating the issue and for the use case description.

We actually hadn't thought about cross-cluster access to the kube-apiserver via the cluster MagicDNS, but I think that's a really good use case!

I just tested it and it worked for me.

As a note, since #10499 has not been merged yet and we haven't yet published official nameserver images, this functionality is a bit difficult to test.

The way I tested it was:

We will document these steps better once all of the work for in-cluster MagicDNS name resolution is merged and we have published the nameserver images.

Keen to hear whether this works for you, and especially whether the last steps (configuring RBAC for the proxy tags in cluster 2 and passing a kubeconfig to cluster workloads) make sense for your workflow.
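To make those last steps concrete: assuming the API server proxy in cluster 2 runs in auth mode and impersonates requests from tagged devices with groups matching their tags (an assumption about the setup being described here), the RBAC piece might look roughly like this, with the tag and role chosen as placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tailscale-cluster1-workloads       # hypothetical name
subjects:
  - kind: Group
    name: "tag:cluster1-workloads"         # placeholder tag applied to the Cluster 1 proxies
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                               # built-in read-only role, purely as an example
  apiGroup: rbac.authorization.k8s.io

and the kubeconfig handed to Cluster 1 workloads would point at the MagicDNS name, e.g.:

apiVersion: v1
kind: Config
clusters:
  - name: cluster-2
    cluster:
      server: https://my-cluster.tailscale-domain.ts.net
users:
  - name: tailscale                        # no client credentials; the assumption is that auth mode authenticates by tailnet identity
    user: {}
contexts:
  - name: cluster-2
    context:
      cluster: cluster-2
      user: tailscale
current-context: cluster-2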

I am not very familiar with cross-cluster ArgoCD: is there a way to pass the cluster 2 kubeconfig to it, or does it need to be configured in some other way?

sauyon commented 3 months ago

I believe the easy workaround I found specifically for ArgoCD was just setting this:

tls_client_config:
  server_name: <magicdns name>

Then you can simply use the cluster URL for access.
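Putting that together in a declarative ArgoCD cluster Secret (where the config field is a camelCase JSON blob), this would presumably look something like the following; the Service name, namespace and MagicDNS name are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-2
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: cluster-2
  # in-cluster URL of the egress Service fronting the Cluster 2 API server proxy
  server: https://cluster2-apiserver.default.svc.cluster.local
  config: |
    {
      "tlsClientConfig": {
        "serverName": "my-cluster.tailscale-domain.ts.net"
      }
    }

With serverName set, ArgoCD validates the proxy's certificate against the MagicDNS name even though it connects via the in-cluster URL.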