Closed evanrich closed 3 years ago
fwiw, adding "privileged: true" to the container and then running "sysctl net.ipv4.ip_forward=1" inside the container seems to fix this; can this be added to the image?
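For anyone wanting the docker-side equivalent, here is a compose sketch of that workaround (image tag and port mapping are illustrative, not taken from this thread; note that vanilla docker can also set this namespaced net.* sysctl directly via sysctls:, which avoids privileged entirely):

```yaml
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    # the workaround described above: privileged lets the container
    # run `sysctl net.ipv4.ip_forward=1` itself
    privileged: true
    # alternative without privileged (net.* sysctls are per-container namespaced):
    # sysctls:
    #   net.ipv4.ip_forward: 1
    ports:
      - 51820:51820/udp
```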
Hey there, sorry for missing this issue earlier.
I can verify that k8s does seem to have this behavior, and your proposed solutions do fix it, specifically because privileged mode lets you set the sysctl inside the container. Testing this on a vanilla docker instance with no orchestration, the value is set to 1 by default, as expected.
We'll take this back to the team and see if there's a better way to explicitly set the sysctl so you don't have to do either of these things, but since LSIO only officially supports vanilla docker today, there's no turnaround time or guarantee this will get fixed.
For k8s you'll have 2 options, then:
1. You can add unsafe sysctls to your cluster so you can set net.ipv4.ip_forward=1 in the pod spec.
2. You can keep privileged mode and add sysctl net.ipv4.ip_forward=1 to the PostUp step in your /config/wg0.conf (which is what I'm now using thanks to your feedback :) )
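For the second option, a minimal sketch of the relevant part of /config/wg0.conf (the address, key placeholder, and existing iptables rules here are illustrative, not the image's exact defaults):

```ini
[Interface]
Address = 10.13.13.1
ListenPort = 51820
PrivateKey = <server private key>
# prepend the sysctl to the existing PostUp commands; requires privileged mode
PostUp = sysctl net.ipv4.ip_forward=1; iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
```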
Whatever the results are, thank you for debugging this issue and coming back with the solution; I'm sure this time capsule will be very valuable to folks!
Thanks, this helped a lot. I also deployed this in kubernetes now, and it works without privileged. Add:

allowedUnsafeSysctls:
  - 'net.ipv4.ip_forward'

to /var/lib/kubelet/config.yaml.
Then you can modify your manifest:
---
apiVersion: v1
kind: Service
metadata:
  name: wireguard
  labels:
    app: wireguard
spec:
  selector:
    app: wireguard
  type: LoadBalancer
  loadBalancerIP: 192.168.50.26
  ports:
    - name: wireguard
      port: 51820
      protocol: UDP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wireguard
  annotations:
    fluxcd.io/automated: 'true'
    fluxcd.io/tag.wireguard: 'regex:^v1.0.+-ls[0-9]+$'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wireguard
  template:
    metadata:
      labels:
        app: wireguard
    spec:
      nodeSelector:
        kubernetes.io/hostname: "homelab-a"
+     securityContext:
+       sysctls:
+         - name: net.ipv4.ip_forward
+           value: '1'
      volumes:
        - name: dockerdata
          hostPath:
            # directory location on host
            path: /dockerdata
            type: Directory
        - name: host
          hostPath:
            path: /
            type: Directory
      containers:
        - name: wireguard
          image: linuxserver/wireguard:v1.0.20200827-ls1
          #imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - SYS_MODULE
          volumeMounts:
            - name: dockerdata
              subPath: wireguard
              mountPath: /config
            - name: host
              subPath: lib/modules
              mountPath: /lib/modules
          env:
            - name: PUID
              value: '1001'
            - name: PGID
              value: '1001'
            - name: TZ
              value: 'America/Los_Angeles'
            - name: SERVERURL
              value: 'wg.server.zzz'
            - name: PEERS
              value: '3'
            - name: PEERDNS
              value: '192.168.50.29'
          ports:
            - name: wireguard
              containerPort: 51820
              protocol: UDP
With these changes, wireguard works like a charm in k8s.
Happy to have found this issue as I'm having the same problem.
I tried adding

allowedUnsafeSysctls:
  - 'net.ipv4.ip_forward'

to the path specified and rebooting the node; however, when deploying, I get a "sysctl forbidden" error message. I've also tried a privileged container and running sysctl net.ipv4.ip_forward=1, but that does not seem to be working either.
I am using a CentOS 8 based node and installed the wireguard components via the epel-release method described in the wireguard install docs; since the server does start, I suspect that is not the issue.
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls
jvanbruegge posted exactly what to add to the kubelet config to enable these sysctls; the link here is the official documentation on that step. We do not take any responsibility for any changes you make, as we don't technically support Kubernetes.
"Does not seem to be working" isn't very clear though, can you elaborate?
Perhaps the problem is that I can't run kubectl on my nodes because they are managed by Rancher/RKE. I tried creating the config file in the default path of kubectl, but that did not work either. I doubt this is a problem with the project, though having the ability to edit the default config to add the required section to the wg0.conf file would be nice.
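For RKE-managed clusters like this, a minimal sketch of pushing the kubelet flag through RKE's cluster.yml instead of editing each node directly (key names follow RKE's services.kubelet.extra_args convention; apply with rke up):

```yaml
# cluster.yml fragment: allow-list the sysctl on every kubelet RKE manages
services:
  kubelet:
    extra_args:
      # maps to the kubelet --allowed-unsafe-sysctls flag
      allowed-unsafe-sysctls: "net.ipv4.ip_forward"
```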
Rancher uses a kube-api like any other, and kubectl just calls that API; you can try the steps here to get access back: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/ That being said, I'm pretty sure the Rancher UI does a decent job of exposing the necessary bits to do the same, but I'm not as familiar with it.
I'm closing this issue as we don't support k8s and at least two suitable answers to the original issue now exist.
Modifying the kubelet: https://github.com/linuxserver/docker-wireguard/issues/78#issuecomment-751286862
And adding privileged to the pod: https://github.com/linuxserver/docker-wireguard/issues/78#issuecomment-739151416
If any historians reach this place and are looking for more Kubernetes conversations for our containers, at least as of the date of this writing, please join our Discord where we have several members who are just as masochistic as we are :) https://discord.gg/YWrKVTn
I'm running wireguard inside kubernetes, which shouldn't be a problem since other containers are working fine. I suspect the iptables rules might be screwed up, because when a client connects to the pod, the client can't access the internet/internal network, but the pod can.
Here is tcpdump traffic from the pod (container) out to ping google.com:
Here is a tcpdump of the virtual adapter docker creates for the container, showing bidirectional traffic for the ping above:
WG showing client connected:
Here is a tcpdump of the wg0 interface in the container, showing client connected, but only going in 1 direction (10.13.13.2 is the client wireguard ip):
and finally, the kubernetes host showing no traffic coming out when the client tries to connect (homelab-a is the k8s host, 10.1.211.156 is the pod ip):
I'm on the latest image you have (using flux to auto-deploy). Here is what I can surmise:
- Mobile device can connect through the router to the wireguard pod without issue, meaning the client can talk to wg0 in the container
- The wireguard pod is able to talk out to the world on eth0
- It appears wg0 cannot pass traffic to eth0 inside the container
I have looked at my firewall, and do not see traffic coming out to network from the wireguard session, nothing in/out of my network is reachable by the client either, and all tcpdump traffic shows only single direction from client into wg0 interface.
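One quick check that matches this symptom (a sketch; run it inside the wireguard container, e.g. via kubectl exec): if it prints 0, the kernel is refusing to forward packets between wg0 and eth0, which produces exactly this one-directional tcpdump pattern.

```shell
# 1 = kernel forwards packets between interfaces (wg0 -> eth0), 0 = it drops them
cat /proc/sys/net/ipv4/ip_forward
```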
If it matters, here is deployment file in k8s:
docker logs:
FWIW, I have an openvpn container that works, and I have also installed wireguard in a FreeNAS jail as a backup, so wireguard works on my network and openvpn works in k8s, but wireguard does NOT seem to work in k8s, while other containers work just fine. I'm not an iptables expert, so I can't debug that.
Thanks!