rancher / k3os

Purpose-built OS for Kubernetes, fully managed by Kubernetes.
https://k3os.io
Apache License 2.0

Strange error while starting wireguard vpn via connman-vpn #722

spigell closed this issue 2 years ago

spigell commented 3 years ago

Greetings!

Version (k3OS / kernel)

k3os-7906 [~]$ k3os --version
k3os version v0.20.7-k3s1r0

k3os-7906 [~]$ uname --kernel-release --kernel-version
5.4.0-73-generic  #82 SMP Thu Jun 3 02:29:43 UTC 2021

Architecture

k3os-7906 [~]$ uname --machine
x86_64

Describe the bug: When connman-vpn tries to load the WireGuard VPN config, there is an error in the log, and the VPN does not come up.

To Reproduce

Expected behavior: The plugin starts and works (perhaps complaining about a bad config).

Actual behavior: An error appears in the logs:

Jun 21 08:21:43 k3os-7906 daemon.info dbus-daemon[1506]: [system] Activating service name='net.connman.vpn' requested by ':1.23' (uid=0 pid=2987 comm="/usr/sbin/connmand -r -c /etc/connman/main.conf --") (using servicehelper)
Jun 21 08:21:43 k3os-7906 daemon.info dbus-daemon[1506]: [system] Successfully activated service 'net.connman.vpn'
Jun 21 08:21:43 k3os-7906 daemon.info connman-vpnd[2992]: Connection Manager VPN daemon version 1.38
Jun 21 08:21:43 k3os-7906 daemon.err connman-vpnd[2992]: Can't load /usr/lib/connman/plugins-vpn/wireguard.so: Error relocating /usr/lib/connman/plugins-vpn/wireguard.so: __vpn_ipconfig_foreach: symbol not found
Jun 21 08:21:43 k3os-7906 daemon.info connman-vpnd[2992]: Adding configuration wg-node
Jun 21 08:21:43 k3os-7906 daemon.warn connman-vpnd[2992]: Config file /var/lib/connman-vpn/wg-node.config does not contain any configuration that can be provisioned!
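
The relocation failure looks like a build mismatch: the wireguard plugin references __vpn_ipconfig_foreach, which the connman-vpnd binary in the image apparently does not export. A rough way to confirm the mismatch would be something like the following (just a sketch; the paths are the ones from the log, and GNU nm may not be installed on k3os by default):

nm -D --defined-only /usr/sbin/connman-vpnd | grep vpn_ipconfig_foreach || echo "symbol not exported by connman-vpnd"
nm -D --undefined-only /usr/lib/connman/plugins-vpn/wireguard.so | grep vpn_ipconfig_foreach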
andrewwebber commented 3 years ago

I don't know much about using connman-vpn directly, but if it helps, the following approach works for me:

k3os:
  modules:
  - wireguard

write_files:
- encoding: ""
  content: |-
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: wireguard-configmap-client
    data:
      SERVERURL: ""
      PUID: "1000"
      PGID: "1000"
      TZ: "Europe/Berlin"
      SERVERPORT: ""
      ALLOWEDIPS: ""
      INTERNAL_SUBNET: ""
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: wireguard-configmap-client-connection
    data:
      wg0.conf: |
        [Interface]
        Address =
        PrivateKey =
        ListenPort = 51820

        [Peer]
        PublicKey =
        Endpoint =
        AllowedIPs =
        PersistentKeepalive = 25
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: wireguard-client
      labels:
        app: wireguard-client
    spec:
      hostNetwork: true
      containers:
      - name: wireguard-client
        image: ghcr.io/linuxserver/wireguard:version-v1.0.20210424
        envFrom:
        - configMapRef:
            name: wireguard-configmap-client
        securityContext:
          capabilities:
            add:
              - NET_ADMIN
              - SYS_MODULE
          privileged: true
        volumeMounts:
          - name: wg-config-client-connection
            mountPath: /config
          - name: host-volumes
            mountPath: /lib/modules
        ports:
        - containerPort: 51820
          protocol: UDP
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
      volumes:
        - name: wg-config-client-connection
          configMap:
            name: wireguard-configmap-client-connection
            items:
            - key: wg0.conf
              path: wg0.conf
        - name: host-volumes
          hostPath:
            path: /lib/modules
            type: Directory
  owner: root
  path: /var/lib/rancher/k3s/server/manifests/wireguard.yaml
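
k3s picks up manifests dropped into /var/lib/rancher/k3s/server/manifests automatically, so after a reboot something like this should show whether the pod and tunnel came up (a sketch; it assumes kubectl access on the node and that the linuxserver image ships wireguard-tools):

kubectl get pod wireguard-client
kubectl exec wireguard-client -- wg show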
spigell commented 3 years ago

@andrewwebber thanks for the workaround. I ended up using write_files with run_cmd:

write_files:
- encoding: b64
  content: 'W0ludGVyZmFjZV0K...'
  path: /etc/wireguard/kubewg0.conf
run_cmd:
- sudo wg-quick up kubewg0
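
For reference, the b64 content above is just the plain wg-quick config encoded, and the tunnel can be checked after boot, roughly like this (a sketch; kubewg0.conf here stands in for the real config file):

base64 -w0 kubewg0.conf   # produces the string to paste into content:
sudo wg show kubewg0      # confirm the interface is up after boot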
dweomer commented 2 years ago

Glad you got it working @spigell, and thank you for sharing @andrewwebber!