OrpheeGT opened this issue 1 year ago
Same issue for me. Gluetun is making my NAS go from 12% CPU to 90%, continuously.
If you're seeding using the userspace implementation, that's just the Wireguard code running (within the gluetun-entrypoint process).
If you want to dig further into what is using all this CPU, try https://github.com/qdm12/gluetun-wiki/blob/main/contributing/profiling.md It's relatively easy to set up and fun to visualize, although note it would only show VPN profiling for Wireguard in userspace (the case for @OrpheeGT at least).
I'll keep the issue open for a few days, in case one of you wants to post a screenshot of CPU usage. I might even 'steal' it and put it in the wiki FAQ for other users.
Assuming this is Wireguard just going as fast as possible, and you want to lower its CPU usage at the cost of reduced bandwidth, you can use cpulimit
on the gluetun-entrypoint process from your host. Maybe you can do it with docker/docker-compose, but as far as I know, you could only do it with Kubernetes back then.
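As a sketch of that approach (assuming cpulimit is installed on the host; the 50% figure and the pgrep pattern are illustrative, not project recommendations):

```shell
# Find the PID of gluetun's entrypoint (the process running userspace Wireguard)
PID=$(pgrep -f gluetun-entrypoint)
# Cap that process at roughly 50% of one CPU core;
# Wireguard throughput will drop accordingly while cpulimit runs
cpulimit --pid "$PID" --limit 50
```

cpulimit works by repeatedly pausing and resuming the target process, so it caps average CPU usage without needing container-runtime support.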
@qdm12 Let me first say thank you for your great work on this project. Since you asked for more info, here it is.
Per your request for a CPU screenshot:
Here is Grafana showing the compose stack running containers for gluetun and qbittorrent, and then stopped for a comparison:
In this example, I'm using ipvanish VPN; torrents were set to unlimited download speed (going about 12 MB/s) with upload capped at 30 KB/s. OpenVPN was selected, so I'm not sure Wireguard was even in use??
Here is the compose I used, note openvpn selected:
Switching from gluetun/qbit to image: binhex/arch-qbittorrentvpn reduced CPU from 70-90% to 15-25%.
Please let me know if I did anything incorrectly.
Hello!
Thank you for your help and answer!
Your message helped me understand this notion of a "userspace implementation" of Wireguard.
I'm running it on Synology... I understood the Wireguard kernel module was actually missing. So I found the following Docker project: https://hub.docker.com/r/blackvoidclub/synobuild72?ref=blackvoid.club
I built this package and extracted the wireguard.ko from it.
I loaded it (with insmod) and then (re)started the gluetun Docker container.
========================================
========================================
=============== gluetun ================
========================================
=========== Made with ❤️ by ============
======= https://github.com/qdm12 =======
========================================
========================================
Running version latest built on 2023-08-11T11:08:54.752Z (commit e556871)
🔧 Need help? https://github.com/qdm12/gluetun/discussions/new
🐛 Bug? https://github.com/qdm12/gluetun/issues/new
✨ New feature? https://github.com/qdm12/gluetun/issues/new
☕ Discussion? https://github.com/qdm12/gluetun/discussions/new
💻 Email? quentin.mcgaw@gmail.com
💰 Help me? https://www.paypal.me/qmcgaw https://github.com/sponsors/qdm12
2023-08-19T15:25:00+02:00 INFO [routing] default route found: interface eth0, gateway 172.18.0.1, assigned IP 172.18.0.2 and family v4
2023-08-19T15:25:00+02:00 INFO [routing] local ethernet link found: eth0
2023-08-19T15:25:00+02:00 INFO [routing] local ipnet found: 172.18.0.0/16
2023-08-19T15:25:01+02:00 INFO [firewall] enabling...
2023-08-19T15:25:01+02:00 INFO [firewall] enabled successfully
2023-08-19T15:25:02+02:00 INFO [storage] merging by most recent 17692 hardcoded servers and 17692 servers read from /gluetun/servers.json
2023-08-19T15:25:02+02:00 INFO Alpine version: 3.18.3
2023-08-19T15:25:02+02:00 INFO OpenVPN 2.5 version: 2.5.8
2023-08-19T15:25:03+02:00 INFO OpenVPN 2.6 version: 2.6.5
2023-08-19T15:25:03+02:00 INFO Unbound version: 1.17.1
2023-08-19T15:25:03+02:00 INFO IPtables version: v1.8.9
2023-08-19T15:25:03+02:00 INFO Settings summary:
βββ VPN settings:
| βββ VPN provider settings:
| | βββ Name: custom
| | βββ Server selection settings:
| | βββ VPN type: wireguard
| | βββ Target IP address: [Retracted]
| | βββ Wireguard selection settings:
| | βββ Endpoint IP address: [Retracted]
| | βββ Endpoint port: [Retracted]
| | βββ Server public key: [Retracted]
| βββ Wireguard settings:
| βββ Private key: ING...Fo=
| βββ Interface addresses:
| | βββ 10.2.0.2/32
| βββ Allowed IPs:
| βββ MTU: 1400
βββ DNS settings:
| βββ Keep existing nameserver(s): no
| βββ DNS server address to use: 127.0.0.1
| βββ DNS over TLS settings:
| βββ Enabled: no
βββ Firewall settings:
| βββ Enabled: yes
βββ Log settings:
| βββ Log level: INFO
βββ Health settings:
| βββ Server listening address: 127.0.0.1:9999
| βββ Target address: quad9.net:443
| βββ Duration to wait after success: 10m0s
| βββ Read header timeout: 100ms
| βββ Read timeout: 500ms
| βββ VPN wait durations:
| βββ Initial duration: 2m0s
| βββ Additional duration: 1m0s
βββ Shadowsocks server settings:
| βββ Enabled: no
βββ HTTP proxy settings:
| βββ Enabled: no
βββ Control server settings:
| βββ Listening address: :8000
| βββ Logging: yes
βββ OS Alpine settings:
| βββ Process UID: 1000
| βββ Process GID: 1000
| βββ Timezone: europe/paris
βββ Public IP settings:
| βββ Fetching: every 12h0m0s
| βββ IP file path: /tmp/gluetun/ip
βββ Version settings:
βββ Enabled: yes
2023-08-19T15:25:03+02:00 INFO [routing] default route found: interface eth0, gateway 172.18.0.1, assigned IP 172.18.0.2 and family v4
2023-08-19T15:25:03+02:00 INFO [routing] adding route for 0.0.0.0/0
2023-08-19T15:25:03+02:00 INFO [firewall] setting allowed subnets...
2023-08-19T15:25:03+02:00 INFO [routing] default route found: interface eth0, gateway 172.18.0.1, assigned IP 172.18.0.2 and family v4
2023-08-19T15:25:03+02:00 INFO [dns] using plaintext DNS at address 1.1.1.1
2023-08-19T15:25:03+02:00 INFO [http server] http server listening on [::]:8000
2023-08-19T15:25:03+02:00 INFO [firewall] allowing VPN connection...
2023-08-19T15:25:03+02:00 INFO [healthcheck] listening on 127.0.0.1:9999
2023-08-19T15:25:03+02:00 INFO [wireguard] Using available kernelspace implementation
2023-08-19T15:25:03+02:00 INFO [wireguard] Connecting to [Retracted]:[Retracted]
2023-08-19T15:25:03+02:00 INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
2023-08-19T15:25:08+02:00 INFO [healthcheck] healthy!
2023-08-19T15:25:08+02:00 INFO [vpn] You are running on the bleeding edge of latest!
2023-08-19T15:25:08+02:00 INFO [ip getter] Public IP address is [Retracted] (Switzerland, Zurich, Zürich)
Now I have "[wireguard] Using available kernelspace implementation".
And gluetun no longer has high CPU usage; only qBittorrent does. It also fixed my biggest issue: https://github.com/soxfor/qbittorrent-natmap/issues/16
Now that I'm using the Wireguard kernel module, I no longer have any network issues inside gluetun while using qBittorrent.
@OrpheeGT wow, this sounds interesting. Good work on this! I'm also on Synology, so your fix would be most appreciated. I have no idea how to do what you did. Do you mind providing your wireguard.ko/insmod files and instructions, please? Tyvm
Hello @Cyph3r. As said above, I built the Wireguard package for Synology using the Docker command from blackvoidclub.
Since I'm using the broadwellnk CPU architecture, I ran the command suggested by the official Docker link:
docker run --rm --privileged --env PACKAGE_ARCH=broadwellnk --env DSM_VER=7.2 -v /root/synowirespk72:/result_spk blackvoidclub/synobuild72
It created an SPK package for Synology: WireGuard-broadwellnk-1.0.20220627.spk
But for my own usage, I just opened it with 7zip, found the wireguard.ko module file inside, and copied it to my NAS.
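For reference, the same extraction can be scripted instead of browsing with 7zip. This is a sketch assuming a Linux box with GNU tar; the inner path to wireguard.ko may differ per package, so list the contents first:

```shell
# A Synology .spk is a plain tar archive wrapping a package.tgz payload
tar -xf WireGuard-broadwellnk-1.0.20220627.spk package.tgz
# Locate the kernel module inside, then extract the exact path printed
tar -tzf package.tgz | grep 'wireguard\.ko'
```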
Then I created a scheduled task in the Synology GUI to run the following script as root:
#!/bin/sh
# Create the necessary file structure for /dev/net/tun
if [ ! -c /dev/net/tun ]; then
    if [ ! -d /dev/net ]; then
        mkdir -m 755 /dev/net
    fi
    mknod /dev/net/tun c 10 200
fi
# Load the tun module if not already loaded
if ! lsmod | grep -q "^tun\s"; then
    insmod /lib/modules/tun.ko
fi
# Load the wireguard module if not already loaded
if ! lsmod | grep -q "^wireguard\s"; then
    insmod /var/services/homes/user/wireguard.ko
fi
But you may not need to build the SPK yourself. Just download it from https://www.blackvoid.club/wireguard-spk-for-your-synology-nas/
Take the one matching your CPU and DSM version.
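To pick the right SPK, the NAS architecture and kernel can be checked over SSH. The synoinfo.conf path is Synology-specific, and the platform codename (e.g. broadwellnk) appears as part of its "unique" value; treat both as assumptions to verify on your DSM version:

```shell
# Kernel version and CPU architecture of the NAS
uname -r
uname -m
# On Synology DSM, the platform codename is part of the "unique" value
if [ -f /etc.defaults/synoinfo.conf ]; then
    grep -i '^unique' /etc.defaults/synoinfo.conf
fi
```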
@OrpheeGT Thanks for the writeup!
Update: Everything is working great after applying the fix and rebooting.
Hello, I am facing the same issue on Ubuntu 22.04 with Gluetun, qBittorrent and ProtonVPN.
@OrpheeGT ty so much!
@Cyph3r
Per your request for a CPU screenshot
Thanks for all the screenshots, but that wasn't the request. The request was to run the profiling (when using Wireguard in userspace; you can also set WIREGUARD_IMPLEMENTATION=userspace to force it to userspace) to see where the CPU usage goes internally within the Gluetun program, as described here.
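As a sketch of that profiling flow, using Go's standard net/http/pprof conventions (the port 6060 and the /debug/pprof/profile path are Go defaults assumed here; the wiki page linked above documents gluetun's actual setup, which may differ):

```shell
# Capture a 30-second CPU profile from the running gluetun container
# and open an interactive flame-graph UI on local port 8081
go tool pprof -http=:8081 "http://localhost:6060/debug/pprof/profile?seconds=30"
```

The resulting .pb.gz profile can also be saved and attached to the issue, which is what was asked for here.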
Switching from gluetun/qbit to image: binhex/arch-qbittorrentvpn reduced CPU from 70-90% to 15-25%.
Maybe because gluetun was using OpenVPN and binhex/arch-qbittorrentvpn was using Wireguard? 🤔
Had the same high CPU usage issue on my Synology NAS. It starts automatically in userspace mode.
Thanks to @OrpheeGT for the hint with the kernel implementation! Now it works how it should.
But why is the Userspace mode so demanding?
@OrpheeGT Thanks for your suggestion of extracting just the .ko file from the SPK. I was hesitant to install the entire package, since I prefer simplicity, and this helped me finally move from the userspace to the kernelspace Wireguard implementation :)
Hello!
@qdm12, is this what you needed? profile_cpu_load_userspace.pb.gz profile_cpu_userspace.pb.gz profile_heap_memory_userspace.pb.gz
pprof.gluetun-entrypoint.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz pprof.gluetun-entrypoint.samples.cpu.001.pb.gz pprof.gluetun-entrypoint.samples.cpu.002.pb.gz
Is this urgent?
No
Host OS
Synology docker
CPU arch
x86_64
VPN service provider
ProtonVPN
What are you using to run the container
docker-compose
What is the version of Gluetun
ghcr.io/qdm12/gluetun:latest
What's the problem 🤔
Hello, the Gluetun container's CPU usage rises when qBittorrent is seeding (500 torrents seeding, 20 actually active, 10 MB/s upload).
I'm currently using this Docker service configuration: https://github.com/soxfor/qbittorrent-natmap
Share your logs
Share your configuration