netbirdio / netbird

Connect your devices into a single secure private WireGuard®-based mesh network with SSO/MFA and simple access controls.
https://netbird.io
BSD 3-Clause "New" or "Revised" License

Kubernetes #2065

Open autarchprinceps opened 1 month ago

autarchprinceps commented 1 month ago

The Kubernetes documentation (https://docs.netbird.io/how-to/routing-peers-and-kubernetes) only works in the wrong direction. It allows you to access individual pod IPs from "classical" nodes, but I don't know what use that would be.

If there is a use case for a Kubernetes link, it would be to make onprem nodes available to all Kubernetes Pods. But that does not work with the steps mentioned there. Is there any documentation available on how to make that work?

Beyond that, it would be ingresses, or at least services, that you'd want to expose to the other nodes in your VPN, and they'd have to be identifiable by name. Pod IPs will constantly change.

mlsmaycon commented 1 month ago

Hello @autarchprinceps, you are right; by themselves, the steps allow only a limited number of use cases, but with a few additional resources in NetBird you can enable access to services using Kubernetes domains. See the steps below:

  1. Find the Kubernetes DNS server address. For that, let's use the NetBird pods deployed in the documentation and run the following commands:
# list pods in the default namespace and locate one of the NetBird's pods.
kubectl get pods
# get the DNS server from /etc/resolv.conf
kubectl exec -it <NETBIRD_POD> -- cat /etc/resolv.conf

Here you should receive an entry similar to the following:

search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
nameserver 172.20.0.10
options ndots:5

We will use the nameserver address, 172.20.0.10, to configure a route and a nameserver, and eu-central-1.compute.internal as the match domain.
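As a quick sanity check, both values can be pulled out of that resolv.conf output with a short shell snippet. This is just a sketch; the sample input below simply mirrors the resolv.conf shown above:

```shell
# Extract the nameserver and the last search domain from resolv.conf-style
# output. The sample input mirrors the resolv.conf printed above.
resolv_conf='search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
nameserver 172.20.0.10
options ndots:5'

# The first nameserver entry becomes the /32 network route.
nameserver=$(printf '%s\n' "$resolv_conf" | awk '/^nameserver/ {print $2; exit}')
# The last search entry is a candidate match domain.
match_domain=$(printf '%s\n' "$resolv_conf" | awk '/^search/ {print $NF; exit}')

echo "route:        ${nameserver}/32"
echo "match domain: ${match_domain}"
```

In a live cluster you would of course feed it the real file, e.g. `kubectl exec -it <NETBIRD_POD> -- cat /etc/resolv.conf`.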

First, the network route:

[screenshot: network route configuration]

That will make the pods routers for the 172.20.0.10/32 address. Now let's add the DNS nameserver configuration:

[screenshot: DNS nameserver configuration]

With this configuration you should be able to access a service in the cluster using its name and listening port, like the example below:

curl app1.eu-central-1.compute.internal:8080/users

Let me know if this makes sense for your use case. It is not at the integration level we want it to be, but we will work to improve it.

drtinkerer commented 1 month ago

Update: I fixed the issue by reinstalling everything from scratch. Below is my original query, along with the routes and YAML manifest that now work perfectly as expected.


Ideally, I thought we could add a route to the Kubernetes internal service CIDR (say 10.96.0.0/16) and make those IPs accessible to other peers. I don't want to add a route for the pod CIDR, since using pod IPs to access pods would be an anti-pattern.

I have a 3-node Raspberry Pi cluster provisioned with kubeadm.

I am using such a setup with Twingate. My apps deployed on Kubernetes can be accessed via service IPs (the ones we get with kubectl get svc) through Twingate.

I want to try it out with NetBird as well. I have deployed NetBird as a peer inside a pod following the official doc.

Routing to Kubernetes pods or services does not work for me.

These are the routes associated with the peer deployed as a Kubernetes pod.

[screenshot, 2024-06-02 15:01: route configuration]

I am not able to reach pods with either the pod IP or the service IP; curl from remote peers just times out.

Any idea where I should look?

I can see some errors in pod logs

[screenshot, 2024-06-02 15:19: errors in pod logs]

The same error/warning appears when I try to add pod CIDR routes to the peer.

2024-06-02T10:27:42Z WARN client/internal/routemanager/client.go:154: the network 10.10.0.0/16 has not been assigned a routing peer as no peers from the list [5TWXFiWzy+a1UwocIZVNL2I6SqkFf/UtAu0c16cNRjg=] are currently connected

The error is cryptic. What does it mean? Which list is the hash pointing to?
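For what it's worth, my reading of the message is that it reports an empty intersection between the peers allowed to route that network and the peers this client is currently connected to; the value in brackets looks like the WireGuard public key of the configured routing peer. A toy shell illustration of that logic (this is not NetBird's actual code, just the check the warning appears to describe):

```shell
# Toy illustration (not NetBird source code): a network is only routable if
# at least one of its routing peers, identified by WireGuard public key,
# is in the set of currently connected peers.
network="10.10.0.0/16"
routing_peers="5TWXFiWzy+a1UwocIZVNL2I6SqkFf/UtAu0c16cNRjg="  # key from the log above
connected_peers=""                                            # routing peer not connected

status="unroutable"
for peer in $routing_peers; do
  case " $connected_peers " in
    *" $peer "*) status="routable" ;;
  esac
done
echo "$network is $status"
```

So the warning suggests the route itself is configured, but the pod acting as the routing peer is not (or no longer) connected from this client's point of view.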

This is how I am deploying it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netbird
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:latest
          env:
            - name: NB_SETUP_KEY
              valueFrom:
                secretKeyRef:
                  name: netbird-peer-setup-key
                  key: setup-key
            - name: NB_HOSTNAME
              value: "netbird-k8s-router"
            - name: NB_LOG_LEVEL
              value: "info"
          volumeMounts:
            - name: netbird-client
              mountPath: /etc/netbird
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            privileged: true
            runAsUser: 0
            runAsGroup: 0
            capabilities:
              add:
                - NET_ADMIN
                - NET_RESOURCE
                - SYS_ADMIN
      volumes:
        - name: netbird-client
          emptyDir: {}
drtinkerer commented 1 month ago
curl app1.eu-central-1.compute.internal:8080/users

btw, this works for me without any DNS routing config, as you mentioned.

❯ curl http://argocd-server.argocd.svc.cluster.local:80

<!doctype html><html lang="en"><head><meta charset="UTF-8"><title>Argo CD</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/png" href="assets/favicon/favicon-32x32.png" sizes="32x32"/><link rel="icon" type="image/png" href="assets/favicon/favicon-16x16.png" sizes="16x16"/><link href="assets/fonts.css" rel="stylesheet"><script defer="defer" src="main.4f42a41ac4cec5519d46.js"></script></head><body><noscript><p>Your browser does not support JavaScript. Please enable JavaScript to view the site. Alternatively, Argo CD can be used with the <a href="https://argoproj.github.io/argo-cd/cli_installation/">Argo CD CLI</a>.</p></noscript><div id="app"></div></body><script defer="defer" src="extensions.js"></script></html>%


However, you do need to add the nameserver for Kubernetes as you mentioned. There is no need to add a separate route for it, though, since I have already added a route for the entire Kubernetes service CIDR, which the nameserver address is part of.
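That reasoning can be checked numerically: if the cluster DNS IP falls inside the service CIDR that is already routed, an extra /32 route is redundant. A small POSIX-shell sketch; the values are assumptions for illustration (the 10.96.0.0/16 CIDR mentioned earlier in this thread, and 10.96.0.10, a common kubeadm cluster-DNS address), not taken from a real cluster:

```shell
# Check whether an IP is covered by a CIDR using integer arithmetic.
# Example values (assumptions): service CIDR 10.96.0.0/16 and cluster
# DNS IP 10.96.0.10, a common kubeadm default.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_cidr() {  # usage: in_cidr IP CIDR -> prints yes/no
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then echo yes; else echo no; fi
}

echo "10.96.0.10 inside 10.96.0.0/16: $(in_cidr 10.96.0.10 10.96.0.0/16)"
```

If the check prints yes, the service-CIDR route already covers the nameserver and no dedicated DNS route is needed.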