
Support custom coredns config #286

Closed · chrisjohnson closed 2 years ago

chrisjohnson commented 2 years ago

k3s recently added support for custom coredns config: https://github.com/k3s-io/k3s/pull/4397

I see that 0.5.0 changed to using a vcluster-provisioned coredns. Can we copy the same shim for injecting custom server definitions into vcluster's setup? We need this to resolve on-prem domains from our clusters.
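For example, something like this is what we are after (following the k3s coredns-custom convention; internal.example.com and the resolver IP are placeholders for our on-prem zone and DNS server):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # every *.server key is imported as an extra CoreDNS server block
  onprem.server: |
    internal.example.com:53 {
      # forward queries for the on-prem zone to a placeholder on-prem resolver
      forward . 10.0.0.53
      log
    }
EOF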

chrisjohnson commented 2 years ago

I started this PR https://github.com/loft-sh/vcluster/pull/287 but I honestly don't know what I'm doing as far as any sort of testing goes. Would anybody care to ride along with me and show me how to get this change over the line?

matskiv commented 2 years ago

@chrisjohnson Thank you for submitting the PR! It seems like it should work; we can merge it after some testing. The default scenario, where there is no customization, should be covered by our e2e tests. We updated our GitHub Actions triggers to run e2e tests on PRs that change anything within manifests/. Could you please rebase your PR to trigger those tests?
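If it helps, the rebase is roughly this (assuming your loft-sh remote is named upstream and main is the target branch):

git fetch upstream
git rebase upstream/main
git push --force-with-lease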

As for testing a scenario with the coredns-custom ConfigMap, you will need to get the updated manifest into the syncer image. There are at least two ways to do that:

a) Build the image locally and push it to a repo that can be accessed from the cluster where vcluster is deployed (basically docker build -t REPO:TAG . + docker push REPO:TAG), then use the built image in the vcluster's syncer container.

b) Follow our guide for local development; changes to manifests/ will then be present in the dev container.

Once the updated coredns deployment is running and the coredns-custom ConfigMap is applied, I would run some DNS queries to exercise the custom rules. I usually drop a dnsutils pod into the vcluster (kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml, from the Kubernetes DNS debugging docs) and run nslookup via that pod (e.g. kubectl exec -i -t dnsutils -- nslookup kubernetes.default). Then, depending on your custom rules, you should be able to detect whether they are being hit, based on the logs in the coredns pod or on the server reply.
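Concretely, the whole flow could look roughly like this (REPO:TAG is a placeholder for a registry path your cluster can pull from, and the coredns deployment is assumed to keep its default name):

# option (a): build and push the syncer image with the updated manifests
docker build -t REPO:TAG .
docker push REPO:TAG
# point the vcluster syncer container at REPO:TAG, then, inside the vcluster:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
# query a name covered by your custom rules and check whether coredns saw it
kubectl exec -i -t dnsutils -- nslookup example.org
kubectl -n kube-system logs deploy/coredns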

Please let us know if you were successful with the testing. :) I am also more than happy to help out with the testing or with the local dev setup, so I'll reply on your Slack thread, and we can chat there if needed.

matskiv commented 2 years ago

@chrisjohnson Can we close this issue if the scope of customization that you implemented in #287 is sufficient for your use case? If not, and you would like to make more extensive customizations of the CoreDNS deployment and config, we can discuss that here. :)

chrisjohnson commented 2 years ago

We are working to get a custom vcluster build deployed into our working environment to confirm this solves our use case, and then yes, I will close this issue out.

chrisjohnson commented 2 years ago

All good, it works for us!

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # Must end in .server
  example.server: |
    example.org {
      log
      whoami
    }
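
For anyone checking a similar setup: with the ConfigMap above, the whoami plugin answers queries in the example.org zone with the client's own address, and the log plugin writes each query to the coredns logs. Using the dnsutils pod mentioned earlier (and assuming the coredns deployment keeps its default name):

kubectl exec -i -t dnsutils -- dig example.org
# the query should show up in the coredns logs via the log plugin
kubectl -n kube-system logs deploy/coredns --tail=5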